Preprint
Review

This version is not peer-reviewed.

A Bibliometric Review of Deep Learning Approaches in Skin Cancer Research

A peer-reviewed article of this preprint also exists.

Submitted:

16 December 2024

Posted:

17 December 2024


Abstract
Early detection of skin cancer is crucial for successful treatment. Various methods have been developed for this purpose, including traditional machine learning, deep learning, and hybrid approaches. This study aims to provide an overview of, and highlight developments in, the use of deep learning for early skin cancer diagnosis. We searched the Scopus database for publications from 2019 to 2024, matching a search string against abstracts, titles, and keywords. The search string includes several public datasets used for experiments, such as HAM and ISIC, which helps ensure that the papers found are relevant. Filters were applied based on year, document type, source type, and language. The analysis found 1,697 articles, most commonly journal articles and conference proceedings. Affiliations are predominantly departments of dermatology and faculties of computer science. Beyond the statistical discussion, this paper also highlights the ten most cited references and reviews specific bibliometric studies related to early skin cancer diagnosis. Bibliometric analysis provides a systematic method for identifying relevant research studies, and software tools such as VOSviewer and Bibliometrix support the analysis. Given the growth over the past five years, interest in deep learning for skin cancer detection is likely to keep rising.

1. Introduction

Skin cancer is one of the most common types of cancer worldwide, and its incidence rate continues to increase year over year. Given its high prevalence, early detection of skin cancer is vital: it improves patients’ prognosis and recovery rates. One method being developed to support early detection is deep learning. Deep learning is a branch of machine learning within artificial intelligence (AI) that can process and analyze data on a large scale using deep artificial neural networks. This technique has been successfully applied in various fields, including visual pattern recognition.
However, although there have been significant advances in image recognition using deep learning, skin cancer detection remains a complex challenge, primarily because of the wide variation in the appearance of skin lesions and the difficulty of differentiating between benign and malignant lesions. Research on skin cancer detection using deep learning is therefore very important: it can improve the accuracy and efficiency of detection and speed up the diagnosis and treatment of skin cancer patients.
In the past five years, several literature reviews have been published on skin cancer detection from different perspectives. There is much interest in using AI to diagnose skin cancer, and many studies have explored its potential to improve early detection and treatment. Furriel et al. [1] highlight efforts to develop, test, and validate AI systems for detecting, diagnosing, and classifying skin cancer in clinical settings, underlining the potential of AI to transform early diagnosis.
Yet Brancaccio et al. [2] note key limits in using AI for diagnosis: many studies are of low quality, and there is a risk of missing melanomas. This underlines the need for rigorous validation and testing of AI models before they are widely adopted in clinical practice. Wei et al. [3] discuss the need to improve AI in clinical settings, highlighting human factors, privacy concerns, and the need for advanced learning methods such as multimodal, incremental, and federated learning. These considerations are essential for ensuring that AI systems can be effectively integrated into clinical workflows and trusted by healthcare professionals.
Furthermore, Celebi et al. [4] address the scarcity of studies on content-based image retrieval (CBIR) systems in dermatology, particularly the importance of integrating clinical metadata alongside visual features. This integration is vital for improving AI model accuracy, as shown by the ISIC Archive’s work to advance dermoscopy image analysis. In parallel, Stafford et al. [5] highlight the promise of deep learning (DL) in diagnosing non-melanoma skin cancer (NMSC), where DL can reach specialist-level sensitivity and specificity. They also note challenges: image perturbations can affect diagnostic accuracy, a particular concern for smartphone-based diagnostics, which lack the sophisticated imaging of dermatoscopes.
Debelee’s review [6] highlights the need for high-quality image datasets and standardized reporting in skin lesion analysis. The use of datasets like HAM10000 and ISIC has boosted research in this area, but future work must make AI systems more precise and robust at classifying, segmenting, and detecting skin diseases. Chu et al. [7] further explore the increasing use of deep learning in skin cancer diagnosis, showcasing the effectiveness of hybrid models that combine machine learning and deep learning techniques. Despite promising results, many AI smartphone apps raise concerns: they often lack strong validation, so their accuracy is in doubt. Choy et al. [8] provide a comprehensive analysis of DL algorithms used for diagnosing a range of skin conditions, pointing out that while the median diagnostic accuracy is generally high, the risk of bias and the need for prospective image dataset curation and external validation remain significant challenges.
Hauser et al. [9] highlight the role of explainable AI (XAI) in skin cancer detection, emphasizing the need to systematically evaluate XAI methods to improve the interpretability and reliability of AI-driven diagnoses. XAI shows promise, but its impact on clinical decision-making is still underexplored. Takiddin et al. [10] and Khattar et al. [11] critically analyze existing computer-assisted diagnosis (CAD) systems and AI for skin lesions, emphasizing the need for good preprocessing, segmentation, and feature extraction. These studies highlight ongoing challenges in skin lesion analysis, such as thick hairs obstructing accurate segmentation, and the need to refine these technologies for real-time clinical use.
Finally, Painuli et al. [12] survey recent ML and DL advances in cancer diagnosis, including skin cancer, providing a broader perspective. The review shows that AI can detect cancer accurately, making it a valuable tool for assisting medical professionals, but it also highlights the need to improve and test these technologies to ensure they work well for different patients and in various clinics. Research in this field needs advanced deep learning models and large, diverse datasets to train them effectively, along with rigorous validation and testing to ensure system reliability and safety.
This study aims to analyze publication trends in deep learning-based cancer classification using bibliometric analysis. The database was collected through Scopus, covering the period 2019-2024. The review includes both quantitative and content analysis, and we intend to share insights into comparable research.

2. Material and Methods

2.1. Data Collection

All papers included in this study were sourced from the Scopus database using the following search string:
image* AND ( melanoma OR ( "skin cancer" ) OR ( "skin lesion" ) ) AND ( deep OR convolution* OR cnn ) AND ( classification OR detection OR segmentation OR recognition OR diagnosis ) AND ( "Sydney Melanoma" OR "International Skin Imaging Collaboration" OR isic* OR "ham*" OR ph* OR isbi* OR dermnet OR dermis OR dermofit OR dermquest OR bcn* OR skinl2 OR med-node OR msk OR uda OR derm7pt OR "pad-ufes-20" ),
which was matched against article titles, abstracts, and keywords. This search, run on November 8, 2024, resulted in 2,175 documents. Because the number of publications from 2024 was still increasing at that time, figures for 2024 should be considered partial.
Several phases of filtering criteria were then applied. Only English-language journals and conference proceedings were used as sources, and the document types considered were articles, conference papers, and reviews. After applying these criteria, 1,246 documents published between 2019 and 2024 were obtained. Additionally, bibliometric data such as author names, titles, publication years, source titles, document types, affiliations, publishers, abstracts, author keywords, and indexed keywords were collected. The data were then exported as a comma-separated values (CSV) file for analysis. Figure 1 shows the flowchart of the exclusion process.
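Tabulations of this kind (documents per year, per document type) can be reproduced from such a CSV export with a short script. The following is a minimal sketch using Python's standard library; the column names ("Year", "Document Type") follow the usual Scopus export convention but should be verified against the actual file, and the sample rows are purely illustrative.

```python
import csv
import io
from collections import Counter

# Hypothetical mini-sample mimicking a Scopus CSV export; a real export
# contains many more columns (Authors, Source title, Affiliations, ...).
raw = """Title,Year,Document Type
Paper A,2019,Article
Paper B,2020,Conference paper
Paper C,2020,Article
"""

rows = list(csv.DictReader(io.StringIO(raw)))

# Count documents per publication year and per document type.
per_year = Counter(r["Year"] for r in rows)
per_type = Counter(r["Document Type"] for r in rows)

print(dict(per_year))  # {'2019': 1, '2020': 2}
print(dict(per_type))  # {'Article': 2, 'Conference paper': 1}
```

The same grouping, applied to the full 1,246-document export, yields the yearly trend and source-type breakdown reported in the Results section.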

2.2. Data Exclusion

It was necessary to eliminate some publications in order to obtain data appropriate for this research. Only English-language articles are used in this study; several articles in Portuguese, Persian, and Russian were discovered during the search procedure. Publications derived from books or book series are not used either, and documents in the form of book chapters, letters, errata, retracted publications, and notes are also excluded based on document type.

2.3. Data Analysis

We analyzed the data using VOSviewer (version 1.6.18) and Bibliometrix. These tools help visualize the state of recent research, and some data are tabulated to provide more detailed information.

3. Results

3.1. Publication Trends

Based on the publications retained after the exclusion process, the number of publications has increased over the past five years. There were 99 articles published in 2019, and the annual count grew by roughly 40% to 60% each year, reaching 444 articles in 2024. The increasing trend is shown in Figure 2. Most of the articles came from journals (1,098 articles) and conference papers (556 articles), as seen in Table 1.
Table 2 shows the top-15 journal sources most often referred to by authors. IEEE Access is ranked first in publishing journal articles. These journals come from reputable international publishers, such as IEEE, Elsevier (ScienceDirect), Springer, the Multidisciplinary Digital Publishing Institute (MDPI), John Wiley, Tech Science Press, and Frontiers Media. Several journals are frequent targets for authors because they are highly relevant to the topic of cancer detection, namely Computers in Biology and Medicine, Biomedical Signal Processing and Control, Computer Methods and Programs in Biomedicine, Cancers, Medical Image Analysis, IEEE Journal of Biomedical and Health Informatics, and Frontiers in Medicine.

3.2. Countries Distribution

India is the country with the most authors on this topic, with 316 articles, followed by China, the United States, Saudi Arabia, and Pakistan. Table 3 displays data on the countries that produce the highest number of articles related to cancer detection using deep learning. Based on Table 6, departments of dermatology dominate the top-5 affiliations publishing articles on skin cancer detection using deep learning [9,13,14,15,16,17,18]. The top-10 most frequently cited countries are depicted in Figure 3. China has the highest number of citations (3,995), followed by Pakistan (1,480), India (1,390), the United States (1,133), and Germany (1,087).

3.3. Authors Analysis

Table 4 shows the top-10 most productive authors during this five-year period based on the Scopus search results. Khan, M.A. is the author with the most publications, with a total of 20 articles. Several of these authors collaborate frequently: Khan, M.A., Akram, T., Sharif, M., and Kadry, S. from Pakistan, as well as Brinker, T.J., Utikal, J.S., and Hekler, A. from Germany.

3.4. References

Table 5 shows the top-10 most frequently cited journal articles, which are dominated by articles indexed in IEEE. Most of these journals focus on medical imaging and health, such as IEEE Transactions on Medical Imaging, European Journal of Cancer, Computer Methods and Programs in Biomedicine, IEEE Journal of Biomedical and Health Informatics, and Medical Image Analysis. The discussion of these ten articles can be grouped into several parts, as follows.

3.4.1. Advancements in Semi-Supervised Learning for Medical Image Segmentation

Li et al. [65] highlight how semi-supervised methods outperform traditional supervised approaches by leveraging unlabeled data and improving performance on segmentation tasks. This is reinforced by the transformation-consistent scheme (TCSM), which enhances regularization and leads to a 4.07% improvement in the Jaccard index (JA) and 3.47% in the Dice coefficient (DI). The success of these methods is evident not only in skin lesion segmentation but also in optic disc (OD) segmentation from retinal fundus images and liver segmentation from CT volumes, demonstrating broad applicability in medical imaging.
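For readers unfamiliar with the two overlap metrics quoted here, the Jaccard index (JA) and Dice coefficient (DI) measure agreement between a predicted segmentation mask and the ground truth. A minimal sketch over flattened binary masks (the toy masks are illustrative, not from the cited work):

```python
def jaccard(pred, truth):
    """Jaccard index (JA): |intersection| / |union| of two binary masks."""
    inter = sum(p and t for p, t in zip(pred, truth))
    union = sum(p or t for p, t in zip(pred, truth))
    return inter / union if union else 1.0

def dice(pred, truth):
    """Dice coefficient (DI): 2 * |intersection| / (|pred| + |truth|)."""
    inter = sum(p and t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    return 2 * inter / total if total else 1.0

# Toy flattened masks: the prediction overlaps the ground truth on 3 pixels.
pred  = [1, 1, 1, 0, 0, 1]
truth = [1, 1, 1, 1, 1, 0]
print(round(jaccard(pred, truth), 3))  # 0.5   (3 / 6)
print(round(dice(pred, truth), 3))     # 0.667 (6 / 9)
```

Dice weights the intersection twice, so it is always at least as large as Jaccard; the percentage gains reported for TCSM are absolute improvements in these two scores.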
Table 6. The top-15 of the most productive organizations related to skin cancer detection using deep learning.
No | Organization | #Docs
1 | Skin Cancer Unit, German Cancer Research Center (DKFZ), Heidelberg, Germany | 7
2 | Dept. of Dermatology, Heidelberg University, Mannheim, Germany | 6
3 | Dept. of Dermatology, University Hospital Essen, Essen, Germany | 6
4 | Chitkara University Institute of Engineering and Technology, Chitkara University, Punjab, India | 5
5 | Dept. of Dermatology, University Hospital Regensburg, Regensburg, Germany | 5

3.4.2. State-of-the-Art Performance in Segmentation and Classification

The Mutual Bootstrapping Deep Convolutional Neural Networks (MB-DCNN) model [66] stands out in skin lesion segmentation with a Jaccard index of 80.4% to 89.4% and substantial improvements over state-of-the-art models in both segmentation and classification. This model also incorporates multi-task learning and a coarse lesion mask, which enhances lesion localization. It outperforms fully-supervised models and state-of-the-art classification methods with significant performance gains (up to 7.5% in classification and 2.9% in segmentation), particularly on the ISIC-2017 and PH2 datasets.
Similarly, the FAT-Net [67] model utilizes feature-adaptive transformers and memory-efficient strategies to achieve superior accuracy and inference speed in skin lesion segmentation. By capturing both local and global context through its architecture, FAT-Net excels in segmentation tasks. These innovations showcase how modern architectures leverage attention mechanisms to improve receptive fields and performance in medical image analysis.
Another noteworthy development is the application of deep-learning algorithms in classification, such as a CNN that outperforms 136 dermatologists in melanoma image classification [18], achieving a sensitivity of 87.5% and specificity of 86.5%. Such results suggest that AI can play a pivotal role in assisting or even outperforming human experts in specific diagnostic tasks, reinforcing the value of AI-driven solutions in healthcare.
The trend continues with a proposed classification method by Khalid et al. [68] that excels across datasets like DermIS-DermQuest and MED-NODE, achieving accuracy rates of 96.86% and 97.70% respectively. The method consistently delivers high classification performance without the need for image enhancement, achieving 95.91% accuracy on the ISIC dataset.
The use of balanced and augmented data significantly improves deep learning classification networks [71]. For instance, an integrated diagnostic framework enhanced the classification performance of Inception-ResNet-v2 by 2.72% and 4.71% in F1-score for benign and malignant cases, respectively, on the ISIC 2016 test dataset. Similarly, balanced and segmented training data has led to a 6.38% increase in the F1-score compared to imbalanced data, underscoring the importance of data quality in model performance.
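The F1-score used in these comparisons is the harmonic mean of precision and recall, computed from a classifier's confusion counts. A minimal sketch, with purely illustrative counts (not taken from the cited experiments):

```python
def f1_score(tp, fp, fn):
    """F1 = harmonic mean of precision and recall, from confusion counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Toy counts for a binary benign/malignant classifier:
# 80 true positives, 10 false positives, 20 false negatives.
print(round(f1_score(tp=80, fp=10, fn=20), 3))  # 0.842
```

Because F1 balances precision against recall, it is more informative than raw accuracy on the imbalanced benign/malignant distributions discussed above, which is why the cited works report their gains in F1 rather than accuracy.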

3.4.3. Ensemble and Attention-Based Methods Enhancing Accuracy

The ARL-CNN [69] model achieves superior classification by focusing on semantically meaningful regions of skin lesions, outperforming models like ResNet on the ISIC 2017 dataset. The success of attention learning in these models suggests that focusing on critical regions significantly boosts performance in challenging classification tasks.
Ensemble methods, explored in [73], also prove effective in segmentation, improving key metrics like Jaccard Similarity Index (JSI), Dice Similarity Coefficient, and Matthew Correlation Coefficient (MCC). These methods demonstrate that by combining models and optimizing loss functions (e.g., using momentum and cross-entropy), segmentation performance can be further elevated.

3.4.4. Impact Beyond Skin Lesions

Beyond skin lesion segmentation, the innovations discussed in these sources extend to radiology, digital pathology, and genomic data integration. Techniques like the CA-Net [70] architecture emphasize the potential for explainability and accuracy, integrating spatial, channel, and scale attention modules, which can be applied to multi-organ segmentation and other imaging tasks.
The field is moving towards integrating imaging with clinical and genomic data, a trend emphasized by the Society for Imaging Informatics in Medicine [72]. This cross-disciplinary approach will lead to improvements in precision medicine, leveraging advancements in segmentation algorithms and the use of semi-supervised learning for reducing annotation burdens.
The combination of semi-supervised methods, attention mechanisms, ensemble techniques, and transformer-based architectures has driven recent state-of-the-art performance in medical image segmentation and classification. Models like MB-DCNN, FAT-Net, and ARL-CNN demonstrate the power of integrating both labeled and unlabeled data, advanced regularization schemes, and context-aware architectures. These innovations have resulted in measurable improvements in accuracy, specificity, and sensitivity, significantly advancing the capabilities of deep learning in medical imaging tasks across various domains.

3.5. Keywords and Research Trends

The overlay visualization based on the authors’ keywords is displayed in Figure 4. Prominent terms like deep learning, skin cancer, and medical image segmentation indicate that the keywords most commonly used by authors in this field are connected to machine learning and medical image processing. Each node represents a keyword, and the size of a node indicates how frequently that keyword is used. Bigger nodes such as skin cancer and deep learning imply that these are important areas of study.
Co-occurrence relationships are represented by the edges (lines joining nodes), which show how frequently two keywords occur together in the same articles. The more strongly two nodes are connected, the more often their topics are discussed together. For instance, there is a high correlation between deep learning, skin cancer, dermoscopic images, and medical image segmentation.
The color gradient, which goes from dark blue to yellow, shows how these keywords have changed over time. Lighter nodes (yellow/green) indicate more recent terms that have emerged in the field, whereas darker nodes (blue/purple) represent older keywords that appeared earlier in the literature. For example, monkeypox and melanoma classification are relatively new entries, as evidenced by their lighter colors.
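The structure behind such a co-occurrence network is simple: each article contributes one edge for every pair of its keywords, node size tracks keyword frequency, and edge weight tracks how often a pair co-occurs. A minimal sketch with illustrative keyword sets (not drawn from the actual dataset):

```python
from collections import Counter
from itertools import combinations

# Illustrative author-keyword sets, one per article (hypothetical data).
articles = [
    {"deep learning", "skin cancer", "dermoscopic images"},
    {"deep learning", "medical image segmentation"},
    {"deep learning", "skin cancer"},
]

# Edge weight = number of articles in which a keyword pair co-occurs.
edges = Counter()
for kws in articles:
    for a, b in combinations(sorted(kws), 2):
        edges[(a, b)] += 1

# Node size ~ how many articles use each keyword.
freq = Counter(kw for kws in articles for kw in kws)

print(edges[("deep learning", "skin cancer")])  # 2
print(freq["deep learning"])                    # 3
```

Tools like VOSviewer compute these counts from the bibliographic records and then lay out the resulting weighted graph; the overlay coloring additionally averages the publication year of the articles behind each node.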
Figure 5 shows a thematic map, which categorizes research themes by their degree of development (density) and degree of relevance (centrality). This type of map is commonly used in bibliometric analysis to visualize the structure and evolution of research topics in a field. The upper right quadrant, known as Motor Themes, includes topics that are already well developed and very important to the field; themes such as support vector machine, skin disease, and vgg16 appear here, indicating that these topics are central and significantly developed. The upper left quadrant, called Niche Themes, contains topics that are well developed but less relevant than the main themes; topics such as dilated convolution, semi-supervised learning, and basal cell carcinoma fall in this area, indicating more specific, focused themes that are less connected to broader research. The lower right quadrant, called Basic Themes, includes topics that are highly relevant but not yet well developed; these themes, such as cnn, medical image segmentation, and transfer learning, are the foundation of research in the field and are still evolving.
Finally, the bottom left quadrant, known as Emerging or Declining Themes, shows topics that have low relevance and development. This could reflect emerging or declining topics. There are no terms in the image that correspond to that quadrant.

4. Conclusions

A comprehensive search of the Scopus database was conducted to identify relevant publications, which were subsequently analyzed through both quantitative methods and content analysis. Over the past five years, there has been a notable and consistent increase in the volume of published articles, reflecting the field’s ongoing expansion and its potential future impact. Contributions to the literature are made not only by scholars in computer science but also by professionals in the medical field. The leading countries in terms of publication output are China, India, and the United States. Prominent research areas that warrant further exploration include dilated convolution, attention mechanisms, and medical image segmentation. This study concludes that deep learning techniques have shown promising results in skin cancer detection, and the field is rapidly growing. However, there are still challenges and limitations that need to be addressed, such as the need for high-quality image datasets and standardized reporting.

References

  1. Furriel, B.C.R.S.; Oliveira, B.D.; Prôa, R.; Paiva, J.Q.; Loureiro, R.M.; Calixto, W.P.; Reis, M.R.C.; Giavina-Bianchi, M. Artificial intelligence for skin cancer detection and classification for clinical environment: a systematic review. Frontiers in Medicine 2024, 10, 1305954. [Google Scholar] [CrossRef] [PubMed]
  2. Brancaccio, G.; Balato, A.; Malvehy, J.; Puig, S.; Argenziano, G.; Kittler, H. Artificial Intelligence in Skin Cancer Diagnosis: A Reality Check. Journal of Investigative Dermatology 2024, 144, 492–499. [Google Scholar] [CrossRef] [PubMed]
  3. Wei, M.L.; Tada, M.; So, A.; Torres, R. Artificial intelligence and skin cancer. Frontiers in Medicine 2024, 11, 1331895. [Google Scholar] [CrossRef] [PubMed]
  4. Celebi, M.E.; Codella, N.; Halpern, A. Dermoscopy Image Analysis: Overview and Future Directions. IEEE Journal of Biomedical and Health Informatics 2019, 23, 474–478. [Google Scholar] [CrossRef]
  5. Stafford, H.; Buell, J.; Chiang, E.; Ramesh, U.; Migden, M.; Nagarajan, P.; Amit, M.; Yaniv, D. Non-Melanoma Skin Cancer Detection in the Age of Advanced Technology: A Review. Cancers 2023, 15, 3094. [Google Scholar] [CrossRef]
  6. Debelee, T.G. Skin Lesion Classification and Detection Using Machine Learning Techniques: A Systematic Review. Diagnostics 2023, 13. [Google Scholar] [CrossRef]
  7. Chu, Y.S.; An, H.G.; Oh, B.H.; Yang, S. Artificial Intelligence in Cutaneous Oncology. Frontiers in Medicine 2020, 7, 318. [Google Scholar] [CrossRef]
  8. Choy, S.P.; Kim, B.J.; Paolino, A.; Tan, W.R.; Lim, S.M.L.; Seo, J.; Tan, S.P.; Francis, L.; Tsakok, T.; Simpson, M.; Barker, J.N.W.N.; Lynch, M.D.; Corbett, M.S.; Smith, C.H.; Mahil, S.K. Systematic review of deep learning image analyses for the diagnosis and monitoring of skin disease. npj Digital Medicine 2023, 6, 180. [Google Scholar] [CrossRef]
  9. Hauser, K.; Kurz, A.; Haggenmüller, S.; Maron, R.C.; Von Kalle, C.; Utikal, J.S.; Meier, F.; Hobelsberger, S.; Gellrich, F.F.; Sergon, M.; Hauschild, A.; French, L.E.; Heinzerling, L.; Schlager, J.G.; Ghoreschi, K.; Schlaak, M.; Hilke, F.J.; Poch, G.; Kutzner, H.; Berking, C.; Heppt, M.V.; Erdmann, M.; Haferkamp, S.; Schadendorf, D.; Sondermann, W.; Goebeler, M.; Schilling, B.; Kather, J.N.; Fröhling, S.; Lipka, D.B.; Hekler, A.; Krieghoff-Henning, E.; Brinker, T.J. Explainable artificial intelligence in skin cancer recognition: A systematic review. European Journal of Cancer 2022, 167, 54–69. [Google Scholar] [CrossRef]
  10. Takiddin, A.; Schneider, J.; Yang, Y.; Abd-Alrazaq, A.; Househ, M. Artificial Intelligence for Skin Cancer Detection: Scoping Review. Journal of Medical Internet Research 2021, 23, e22934. [Google Scholar] [CrossRef]
  11. Khattar, S.; Kaur, R. Computer assisted diagnosis of skin cancer: A survey and future recommendations. Computers and Electrical Engineering 2022, 104, 108431. [Google Scholar] [CrossRef]
  12. Painuli, D.; Bhardwaj, S.; Köse, U. Recent advancement in cancer diagnosis using machine learning and deep learning techniques: A comprehensive review. Computers in Biology and Medicine 2022, 146, 105580. [Google Scholar] [CrossRef] [PubMed]
  13. Brinker, T.J.; Hekler, A.; Enk, A.H.; Von Kalle, C. Enhanced classifier training to improve precision of a convolutional neural network to identify images of skin lesions. PLOS ONE 2019, 14, e0218713. [Google Scholar] [CrossRef] [PubMed]
  14. Maron, R.C.; Hekler, A.; Krieghoff-Henning, E.; Schmitt, M.; Schlager, J.G.; Utikal, J.S.; Brinker, T.J. Reducing the Impact of Confounding Factors on Skin Cancer Classification via Image Segmentation: Technical Model Study. Journal of Medical Internet Research 2021, 23, e21695. [Google Scholar] [CrossRef]
  15. Schneider, L.; Wies, C.; Krieghoff-Henning, E.I.; Bucher, T.C.; Utikal, J.S.; Schadendorf, D.; Brinker, T.J. Multimodal integration of image, epigenetic and clinical data to predict BRAF mutation status in melanoma. European Journal of Cancer 2023, 183, 131–138. [Google Scholar] [CrossRef]
  16. Brinker, T.J.; Hekler, A.; Hauschild, A.; Berking, C.; Schilling, B.; Enk, A.H.; Haferkamp, S.; Karoglan, A.; Von Kalle, C.; Weichenthal, M.; Sattler, E.; Schadendorf, D.; Gaiser, M.R.; Klode, J.; Utikal, J.S. Comparing artificial intelligence algorithms to 157 German dermatologists: the melanoma classification benchmark. European Journal of Cancer 2019, 111, 30–37. [Google Scholar] [CrossRef]
  17. Brinker, T.J.; Hekler, A.; Enk, A.H.; Klode, J.; Hauschild, A.; Berking, C.; Schilling, B.; Haferkamp, S.; Schadendorf, D.; Holland-Letz, T.; Utikal, J.S.; Von Kalle, C.; Ludwig-Peitsch, W.; Sirokay, J.; Heinzerling, L.; Albrecht, M.; Baratella, K.; Bischof, L.; Chorti, E.; Dith, A.; Drusio, C.; Giese, N.; Gratsias, E.; Griewank, K.; Hallasch, S.; Hanhart, Z.; Herz, S.; Hohaus, K.; Jansen, P.; Jockenhöfer, F.; Kanaki, T.; Knispel, S.; Leonhard, K.; Martaki, A.; Matei, L.; Matull, J.; Olischewski, A.; Petri, M.; Placke, J.M.; Raub, S.; Salva, K.; Schlott, S.; Sody, E.; Steingrube, N.; Stoffels, I.; Ugurel, S.; Zaremba, A.; Gebhardt, C.; Booken, N.; Christolouka, M.; Buder-Bakhaya, K.; Bokor-Billmann, T.; Enk, A.; Gholam, P.; Hänßle, H.; Salzmann, M.; Schäfer, S.; Schäkel, K.; Schank, T.; Bohne, A.S.; Deffaa, S.; Drerup, K.; Egberts, F.; Erkens, A.S.; Ewald, B.; Falkvoll, S.; Gerdes, S.; Harde, V.; Hauschild, A.; Jost, M.; Kosova, K.; Messinger, L.; Metzner, M.; Morrison, K.; Motamedi, R.; Pinczker, A.; Rosenthal, A.; Scheller, N.; Schwarz, T.; Stölzl, D.; Thielking, F.; Tomaschewski, E.; Wehkamp, U.; Weichenthal, M.; Wiedow, O.; Bär, C.M.; Bender-Säbelkampf, S.; Horbrügger, M.; Karoglan, A.; Kraas, L.; Faulhaber, J.; Geraud, C.; Guo, Z.; Koch, P.; Linke, M.; Maurier, N.; Müller, V.; Thomas, B.; Utikal, J.S.; Alamri, A.S.M.; Baczako, A.; Berking, C.; Betke, M.; Haas, C.; Hartmann, D.; Heppt, M.V.; Kilian, K.; Krammer, S.; Lapczynski, N.L.; Mastnik, S.; Nasifoglu, S.; Ruini, C.; Sattler, E.; Schlaak, M.; Wolff, H.; Achatz, B.; Bergbreiter, A.; Drexler, K.; Ettinger, M.; Haferkamp, S.; Halupczok, A.; Hegemann, M.; Dinauer, V.; Maagk, M.; Mickler, M.; Philipp, B.; Wilm, A.; Wittmann, C.; Gesierich, A.; Glutsch, V.; Kahlert, K.; Kerstan, A.; Schilling, B.; Schrüfer, P. Deep learning outperformed 136 of 157 dermatologists in a head-to-head dermoscopic melanoma image classification task. European Journal of Cancer 2019, 113, 47–54. [Google Scholar] [CrossRef]
  18. Brinker, T.J.; Hekler, A.; Enk, A.H.; Klode, J.; Hauschild, A.; Berking, C.; Schilling, B.; Haferkamp, S.; Schadendorf, D.; Fröhling, S.; Utikal, J.S.; Von Kalle, C.; Ludwig-Peitsch, W.; Sirokay, J.; Heinzerling, L.; Albrecht, M.; Baratella, K.; Bischof, L.; Chorti, E.; Dith, A.; Drusio, C.; Giese, N.; Gratsias, E.; Griewank, K.; Hallasch, S.; Hanhart, Z.; Herz, S.; Hohaus, K.; Jansen, P.; Jockenhöfer, F.; Kanaki, T.; Knispel, S.; Leonhard, K.; Martaki, A.; Matei, L.; Matull, J.; Olischewski, A.; Petri, M.; Placke, J.M.; Raub, S.; Salva, K.; Schlott, S.; Sody, E.; Steingrube, N.; Stoffels, I.; Ugurel, S.; Sondermann, W.; Zaremba, A.; Gebhardt, C.; Booken, N.; Christolouka, M.; Buder-Bakhaya, K.; Bokor-Billmann, T.; Enk, A.; Gholam, P.; Hänßle, H.; Salzmann, M.; Schäfer, S.; Schäkel, K.; Schank, T.; Bohne, A.S.; Deffaa, S.; Drerup, K.; Egberts, F.; Erkens, A.S.; Ewald, B.; Falkvoll, S.; Gerdes, S.; Harde, V.; Hauschild, A.; Jost, M.; Kosova, K.; Messinger, L.; Metzner, M.; Morrison, K.; Motamedi, R.; Pinczker, A.; Rosenthal, A.; Scheller, N.; Schwarz, T.; Stölzl, D.; Thielking, F.; Tomaschewski, E.; Wehkamp, U.; Weichenthal, M.; Wiedow, O.; Bär, C.M.; Bender-Säbelkampf, S.; Horbrügger, M.; Karoglan, A.; Kraas, L.; Faulhaber, J.; Geraud, C.; Guo, Z.; Koch, P.; Linke, M.; Maurier, N.; Müller, V.; Thomas, B.; Utikal, J.S.; Alamri, A.S.M.; Baczako, A.; Berking, C.; Betke, M.; Haas, C.; Hartmann, D.; Heppt, M.V.; Kilian, K.; Krammer, S.; Lapczynski, N.L.; Mastnik, S.; Nasifoglu, S.; Ruini, C.; Sattler, E.; Schlaak, M.; Wolff, H.; Achatz, B.; Bergbreiter, A.; Drexler, K.; Ettinger, M.; Haferkamp, S.; Halupczok, A.; Hegemann, M.; Dinauer, V.; Maagk, M.; Mickler, M.; Philipp, B.; Wilm, A.; Wittmann, C.; Gesierich, A.; Glutsch, V.; Kahlert, K.; Kerstan, A.; Schilling, B.; Schrüfer, P. A convolutional neural network trained with dermoscopic images performed on par with 145 dermatologists in a clinical melanoma image classification task. 
European Journal of Cancer 2019, 111, 148–154. [Google Scholar] [CrossRef]
  19. Attique Khan, M.; Sharif, M.; Akram, T.; Kadry, S.; Hsu, C.H. A two-stream deep neural network-based intelligent system for complex skin cancer types classification. International Journal of Intelligent Systems 2022, 37, 10621–10649. [Google Scholar] [CrossRef]
  20. Zahoor, S.; Lali, I.U.; Khan, M.A.; Javed, K.; Mehmood, W. Breast cancer detection and classification using traditional computer vision techniques: A comprehensive review. Current Medical Imaging 2020, 16, 1187–1200. [Google Scholar] [CrossRef]
  21. Malik, S.; Akram, T.; Awais, M.; Khan, M.A.; Hadjouni, M.; Elmannai, H.; Alasiry, A.; Marzougui, M.; Tariq, U. An Improved Skin Lesion Boundary Estimation for Enhanced-Intensity Images Using Hybrid Metaheuristics. Diagnostics 2023, 13. [Google Scholar] [CrossRef] [PubMed]
  22. Saba, T.; Khan, M.A.; Rehman, A.; Marie-Sainte, S.L. Region Extraction and Classification of Skin Cancer: A Heterogeneous framework of Deep CNN Features Fusion and Reduction. Journal of Medical Systems 2019, 43. [Google Scholar] [CrossRef] [PubMed]
  23. Bibi, S.; Khan, M.A.; Shah, J.H.; Damaševičius, R.; Alasiry, A.; Marzougui, M.; Alhaisoni, M.; Masood, A. MSRNet: Multiclass Skin Lesion Recognition Using Additional Residual Block Based Fine-Tuned Deep Models Information Fusion and Best Feature Selection. Diagnostics 2023, 13. [Google Scholar] [CrossRef] [PubMed]
  24. Arshad, M.; Khan, M.A.; Tariq, U.; Armghan, A.; Alenezi, F.; Younus Javed, M.; Aslam, S.M.; Kadry, S. A Computer-Aided Diagnosis System Using Deep Learning for Multiclass Skin Lesion Classification. Computational Intelligence and Neuroscience 2021, 2021. [Google Scholar] [CrossRef]
  25. Nawaz, M.; Nazir, T.; Khan, M.A.; Alhaisoni, M.; Kim, J.Y.; Nam, Y. MSeg-Net: A Melanoma Mole Segmentation Network Using CornerNet and Fuzzy K-Means Clustering. Computational and Mathematical Methods in Medicine 2022, 2022. [Google Scholar] [CrossRef]
  26. Nawaz, M.; Nazir, T.; Masood, M.; Ali, F.; Khan, M.A.; Tariq, U.; Sahar, N.; Damaševičius, R. Melanoma segmentation: A framework of improved DenseNet77 and UNET convolutional neural network. International Journal of Imaging Systems and Technology 2022, 32, 2137–2153. [Google Scholar] [CrossRef]
  27. Hussain, M.; Khan, M.A.; Damaševičius, R.; Alasiry, A.; Marzougui, M.; Alhaisoni, M.; Masood, A. SkinNet-INIO: Multiclass Skin Lesion Localization and Classification Using Fusion-Assisted Deep Neural Networks and Improved Nature-Inspired Optimization Algorithm. Diagnostics 2023, 13. [Google Scholar] [CrossRef]
  28. Ahmad, N.; Shah, J.H.; Khan, M.A.; Baili, J.; Ansari, G.J.; Tariq, U.; Kim, Y.J.; Cha, J.H. A novel framework of multiclass skin lesion recognition from dermoscopic images using deep learning and explainable AI. Frontiers in Oncology 2023, 13. [Google Scholar] [CrossRef]
  29. Iqbal, A.; Sharif, M.; Khan, M.A.; Nisar, W.; Alhaisoni, M. FF-UNet: a U-Shaped Deep Convolutional Neural Network for Multimodal Biomedical Image Segmentation. Cognitive Computation 2022, 14, 1287–1302. [Google Scholar] [CrossRef]
  30. Khan, M.A.; Muhammad, K.; Sharif, M.; Akram, T.; Albuquerque, V.H.C.D. Multi-Class Skin Lesion Detection and Classification via Teledermatology. IEEE Journal of Biomedical and Health Informatics 2021, 25, 4267–4275. [Google Scholar] [CrossRef]
  31. Khan, M.A.; Sharif, M.; Akram, T.; Damaševičius, R.; Maskeliūnas, R. Skin lesion segmentation and multiclass classification using deep learning features and improved moth flame optimization. Diagnostics 2021, 11. [Google Scholar] [CrossRef] [PubMed]
  32. Khan, M.A.; Akram, T.; Zhang, Y.D.; Sharif, M. Attributes based skin lesion detection and recognition: A mask RCNN and transfer learning-based deep learning framework. Pattern Recognition Letters 2021, 143, 58–66. [Google Scholar] [CrossRef]
  33. Khan, M.A.; Akram, T.; Sharif, M.; Javed, K.; Rashid, M.; Bukhari, S.A.C. An integrated framework of skin lesion detection and recognition through saliency method and optimal deep neural network features selection. Neural Computing and Applications 2020, 32, 15929–15948. [Google Scholar] [CrossRef]
  34. Khan, M.A.; Sharif, M.; Akram, T.; Bukhari, S.A.C.; Nayak, R.S. Developed Newton-Raphson based deep features selection framework for skin lesion recognition. Pattern Recognition Letters 2020, 129, 293–303. [Google Scholar] [CrossRef]
  35. Afza, F.; Sharif, M.; Khan, M.A.; Tariq, U.; Yong, H.S.; Cha, J. Multiclass Skin Lesion Classification Using Hybrid Deep Features Selection and Extreme Learning Machine. Sensors 2022, 22. [Google Scholar] [CrossRef]
  36. Afza, F.; Sharif, M.; Mittal, M.; Khan, M.A.; Jude Hemanth, D. A hierarchical three-step superpixels and deep learning framework for skin lesion classification. Methods 2022, 202, 88–102. [Google Scholar] [CrossRef]
  37. Khan, M.A.; Akram, T.; Zhang, Y.D.; Alhaisoni, M.; Al Hejaili, A.; Shaban, K.A.; Tariq, U.; Zayyan, M.H. SkinNet-ENDO: Multiclass skin lesion recognition using deep neural network and Entropy-Normal distribution optimization algorithm with ELM. International Journal of Imaging Systems and Technology 2023, 33, 1275–1292. [Google Scholar] [CrossRef]
  38. Khan, M.A.; Akram, T.; Sharif, M.; Kadry, S.; Nam, Y. Computer Decision Support System for Skin Cancer Localization and Classification. Computers, Materials and Continua 2021, 68, 1041–1064. [Google Scholar] [CrossRef]
  39. Malik, S.; Akram, T.; Ashraf, I.; Rafiullah, M.; Ullah, M.; Tanveer, J. A Hybrid Preprocessor DE-ABC for Efficient Skin-Lesion Segmentation with Improved Contrast. Diagnostics 2022, 12. [Google Scholar] [CrossRef]
  40. Malik, S.; Islam, S.M.R.; Akram, T.; Naqvi, S.R.; Alghamdi, N.S.; Baryannis, G. A novel hybrid meta-heuristic contrast stretching technique for improved skin lesion segmentation. Computers in Biology and Medicine 2022, 151. [Google Scholar] [CrossRef]
  41. Akram, T.; Lodhi, H.M.J.; Naqvi, S.R.; Naeem, S.; Alhaisoni, M.; Ali, M.; Haider, S.A.; Qadri, N.N. A multilevel features selection framework for skin lesion classification. Human-centric Computing and Information Sciences 2020, 10. [Google Scholar] [CrossRef]
  42. Anjum, M.A.; Amin, J.; Sharif, M.; Khan, H.U.; Malik, M.S.A.; Kadry, S. Deep Semantic Segmentation and Multi-Class Skin Lesion Classification Based on Convolutional Neural Network. IEEE Access 2020, 8, 129668–129678. [Google Scholar] [CrossRef]
  43. Kaur, R.; Gholamhosseini, H.; Sinha, R. Synthetic Images Generation Using Conditional Generative Adversarial Network for Skin Cancer Classification. 2021, Vol. 2021-December, pp. 381–386. [CrossRef]
  44. Ali, A.A.; Taha, R.E.; Kaur, R.; Afifi, S.M. Multi-Class Classification of Melanoma on an Edge Device. 2023, pp. 46–51. [CrossRef]
  45. Dawod, M.I.; Taha, R.; Kaur, R.; Afifi, S.M. Real-time Classification of Skin Cancer on an Edge Device. 2023, pp. 184–191. [CrossRef]
  46. Kaur, R.; Gholamhosseini, H.; Sinha, R.; Lindén, M. Melanoma Classification Using a Novel Deep Convolutional Neural Network with Dermoscopic Images. Sensors 2022, 22. [Google Scholar] [CrossRef] [PubMed]
  47. Kaur, R.; GholamHosseini, H.; Sinha, R. Hairlines removal and low contrast enhancement of melanoma skin images using convolutional neural network with aggregation of contextual information. Biomedical Signal Processing and Control 2022, 76. [Google Scholar] [CrossRef]
  48. Kaur, R.; GholamHosseini, H.; Sinha, R.; Lindén, M. Automatic lesion segmentation using atrous convolutional deep neural networks in dermoscopic skin cancer images. BMC Medical Imaging 2022, 22. [Google Scholar] [CrossRef]
  49. Kaur, R.; Hosseini, H.G.; Sinha, R. Lesion Border Detection of Skin Cancer Images Using Deep Fully Convolutional Neural Network with Customized Weights. 2021, pp. 3035–3038. [CrossRef]
  50. Kaur, R.; GholamHosseini, H.; Sinha, R. Skin lesion segmentation using an improved framework of encoder-decoder based convolutional neural network. International Journal of Imaging Systems and Technology 2022, 32, 1143–1158. [Google Scholar] [CrossRef]
  51. Kaur, R.; Gholamhosseini, H. Analyzing the Impact of Image Denoising and Segmentation on Melanoma Classification Using Convolutional Neural Networks. 2023. [CrossRef]
  52. Fogelberg, K.; Chamarthi, S.; Maron, R.C.; Niebling, J.; Brinker, T.J. Domain shifts in dermoscopic skin cancer datasets: Evaluation of essential limitations for clinical translation. New Biotechnology 2023, 76, 106–117. [Google Scholar] [CrossRef]
  53. Bibi, A.; Khan, M.A.; Javed, M.Y.; Tariq, U.; Kang, B.G.; Nam, Y.; Mostafa, R.R.; Sakr, R.H. Skin lesion segmentation and classification using conventional and deep learning based framework. Computers, Materials and Continua 2022, 71, 2477–2495. [Google Scholar] [CrossRef]
  54. Kalyani, K.; Althubiti, S.A.; Ahmed, M.A.; Lydia, E.L.; Kadry, S.; Han, N.; Nam, Y. Arithmetic Optimization with Ensemble Deep Transfer Learning Based Melanoma Classification. Computers, Materials and Continua 2023, 75, 149–164. [Google Scholar] [CrossRef]
  55. Kadry, S.; Taniar, D.; Damasevicius, R.; Rajinikanth, V.; Lawal, I.A. Extraction of Abnormal Skin Lesion from Dermoscopy Image using VGG-SegNet. 2021. [CrossRef]
  56. Cheng, X.; Kadry, S.; Meqdad, M.N.; Crespo, R.G. CNN supported framework for automatic extraction and evaluation of dermoscopy images. Journal of Supercomputing 2022, 78, 17114–17131. [Google Scholar] [CrossRef]
  57. Jiang, Y.; Dong, J.; Cheng, T.; Zhang, Y.; Lin, X.; Liang, J. iU-Net: a hybrid structured network with a novel feature fusion approach for medical image segmentation. BioData Mining 2023, 16. [Google Scholar] [CrossRef] [PubMed]
  58. Zhang, Z.; Jiang, Y.; Qiao, H.; Wang, M.; Yan, W.; Chen, J. SIL-Net: A Semi-Isotropic L-shaped network for dermoscopic image segmentation. Computers in Biology and Medicine 2022, 150. [Google Scholar] [CrossRef] [PubMed]
  59. Jiang, Y.; Qiao, H.; Zhang, Z.; Wang, M.; Yan, W.; Chen, J. MDSC-Net: A multi-scale depthwise separable convolutional neural network for skin lesion segmentation. IET Image Processing 2023, 17, 3713–3727. [Google Scholar] [CrossRef]
  60. Jiang, Y.; Cao, S.; Tao, S.; Zhang, H. Skin Lesion Segmentation Based on Multi-Scale Attention Convolutional Neural Network. IEEE Access 2020, 8, 122811–122825. [Google Scholar] [CrossRef]
  61. Jiang, Y.; Dong, J.; Zhang, Y.; Cheng, T.; Lin, X.; Liang, J. PCF-Net: Position and context information fusion attention convolutional neural network for skin lesion segmentation. Heliyon 2023, 9. [Google Scholar] [CrossRef]
  62. Jiang, Y.; Cheng, T.; Dong, J.; Liang, J.; Zhang, Y.; Lin, X.; Yao, H. Dermoscopic image segmentation based on Pyramid Residual Attention Module. PLoS ONE 2022, 17. [Google Scholar] [CrossRef]
  63. Jiang, Y.; Wang, M.; Zhang, Z.; Qiao, H.; Yan, W.; Chen, J. CTDS-Net: CNN-Transformer Fusion Network for Dermoscopic Image Segmentation. 2023, pp. 141–150. [CrossRef]
  64. Maron, R.C.; Haggenmüller, S.; von Kalle, C.; Utikal, J.S.; Meier, F.; Gellrich, F.F.; Hauschild, A.; French, L.E.; Schlaak, M.; Ghoreschi, K.; Kutzner, H.; Heppt, M.V.; Haferkamp, S.; Sondermann, W.; Schadendorf, D.; Schilling, B.; Hekler, A.; Krieghoff-Henning, E.; Kather, J.N.; Fröhling, S.; Lipka, D.B.; Brinker, T.J. Robustness of convolutional neural networks in recognition of pigmented skin lesions. European Journal of Cancer 2021, 145, 81–91. [Google Scholar] [CrossRef]
  65. Li, X.; Yu, L.; Chen, H.; Fu, C.W.; Xing, L.; Heng, P.A. Transformation-Consistent Self-Ensembling Model for Semisupervised Medical Image Segmentation. IEEE Transactions on Neural Networks and Learning Systems 2021, 32, 523–534. [Google Scholar] [CrossRef]
  66. Xie, Y.; Zhang, J.; Xia, Y.; Shen, C. A Mutual Bootstrapping Model for Automated Skin Lesion Segmentation and Classification. IEEE Transactions on Medical Imaging 2020, 39, 2482–2493. [Google Scholar] [CrossRef]
  67. Wu, H.; Chen, S.; Chen, G.; Wang, W.; Lei, B.; Wen, Z. FAT-Net: Feature adaptive transformers for automated skin lesion segmentation. Medical Image Analysis 2022, 76, 102327. [Google Scholar] [CrossRef]
  68. Hosny, K.M.; Kassem, M.A.; Foaud, M.M. Classification of skin lesions using transfer learning and augmentation with Alex-net. PLOS ONE 2019, 14, e0217293. [Google Scholar] [CrossRef] [PubMed]
  69. Zhang, J.; Xie, Y.; Xia, Y.; Shen, C. Attention Residual Learning for Skin Lesion Classification. IEEE Transactions on Medical Imaging 2019, 38, 2092–2103. [Google Scholar] [CrossRef] [PubMed]
  70. Gu, R.; Wang, G.; Song, T.; Huang, R.; Aertsen, M.; Deprest, J.; Ourselin, S.; Vercauteren, T.; Zhang, S. CA-Net: Comprehensive Attention Convolutional Neural Networks for Explainable Medical Image Segmentation. IEEE Transactions on Medical Imaging 2021, 40, 699–711. [Google Scholar] [CrossRef] [PubMed]
  71. Al-masni, M.A.; Kim, D.H.; Kim, T.S. Multiple skin lesions diagnostics via integrated deep convolutional networks for segmentation and classification. Computer Methods and Programs in Biomedicine 2020, 190, 105351. [Google Scholar] [CrossRef]
  72. Panayides, A.S.; Amini, A.; Filipovic, N.D.; Sharma, A.; Tsaftaris, S.A.; Young, A.; Foran, D.; Do, N.; Golemati, S.; Kurc, T.; Huang, K.; Nikita, K.S.; Veasey, B.P.; Zervakis, M.; Saltz, J.H.; Pattichis, C.S. AI in Medical Imaging Informatics: Current Challenges and Future Directions. IEEE Journal of Biomedical and Health Informatics 2020, 24, 1837–1857. [Google Scholar] [CrossRef]
  73. Goyal, M.; Oakley, A.; Bansal, P.; Dancey, D.; Yap, M.H. Skin Lesion Segmentation in Dermoscopic Images With Ensemble Deep Learning Methods. IEEE Access 2020, 8, 4171–4181. [Google Scholar] [CrossRef]
Figure 1. The flow of exclusion criteria.
Figure 2. The number of publications per year, 2019–2024.
Figure 3. Most cited countries.
Figure 4. Overlay visualisation based on the author’s keywords. The minimum number of occurrences is five, and 99 of the 2,021 keywords meet the threshold.
Figure 5. Thematic map.
Table 1. The number of documents based on document type.
No Document Type #Docs
1 Article 1,098
2 Conference paper 556
3 Review 41
4 Letter 1
5 Note 1
Table 2. The top-15 of the most productive journals related to skin cancer detection using deep learning.
No Source #Docs
1 IEEE Access (IEEE) 40
2 Computers in Biology and Medicine (Elsevier) 35
3 Diagnostics (MDPI) 34
4 Multimedia Tools and Applications (Springer) 29
5 Biomedical Signal Processing and Control (Elsevier) 25
6 Computer Methods and Programs in Biomedicine (Elsevier) 23
7 Sensors (MDPI) 16
8 Cancers (MDPI) 14
9 International Journal of Imaging Systems and Technology (John Wiley and Sons Inc.) 14
10 Applied Sciences (MDPI) 13
11 Medical Image Analysis (Elsevier) 13
12 Expert Systems with Applications (Elsevier) 12
13 Computers, Materials and Continua (Tech Science Press) 12
14 IEEE Journal of Biomedical and Health Informatics (IEEE) 12
15 Frontiers in Medicine (Frontiers Media) 9
Table 3. Fifteen most productive countries.
No Country #Docs
1 India 316
2 China 275
3 United States 155
4 Saudi Arabia 87
5 Pakistan 84
6 United Kingdom 66
7 South Korea 49
8 Egypt 45
9 Germany 45
10 Canada 41
11 Turkey 40
12 Australia 39
13 Bangladesh 38
14 Spain 34
15 Italy 32
Table 4. The top-10 productive authors.
No Author Num. of Docs. Documents Country
1 Khan, M.A. 20 [19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36] Pakistan
2 Akram, T. 12 [19,21,30,31,32,33,34,37,38,39,40,41] Pakistan
3 Sharif, M. 9 [29,30,31,32,33,34,35,38,42] Pakistan
4 Kaur, R. 9 [43,44,45,46,47,48,49,50,51] New Zealand
5 Brinker, T.J. 9 [9,13,14,15,16,17,18,52] Germany
6 Utikal, J.S. 7 [9,14,15,16,17,18] Germany
7 Tariq, U. 7 [21,24,26,28,35,37,53] Saudi Arabia
8 Kadry, S. 7 [19,24,38,42,54,55,56] Norway
9 Jiang, Y. 7 [57,58,59,60,61,62,63] China
10 Hekler, A. 7 [9,13,14,16,17,18,64] Germany
Table 5. The top-10 of the most cited publications.
No Title Year Source #Cit
1 Attention Residual Learning for Skin Lesion Classification [69] 2019 IEEE Transactions on Medical Imaging 370
2 CA-Net: Comprehensive Attention Convolutional Neural Networks for Explainable Medical Image Segmentation [70] 2021 IEEE Transactions on Medical Imaging 343
3 Deep Learning Outperformed 136 of 157 Dermatologists in a Head-to-Head Dermoscopic Melanoma Image Classification Task [17] 2019 European Journal of Cancer 287
4 Transformation-Consistent Self-Ensembling Model for Semisupervised Medical Image Segmentation [65] 2021 IEEE Transactions on Neural Networks and Learning Systems 247
5 Classification of Skin Lesions using Transfer Learning and Augmentation with Alex-Net [68] 2019 PLoS ONE 223
6 Multiple Skin Lesions Diagnostics via Integrated Deep Convolutional Networks for Segmentation and Classification [71] 2020 Computer Methods and Programs in Biomedicine 221
7 A Mutual Bootstrapping Model for Automated Skin Lesion Segmentation and Classification [66] 2020 IEEE Transactions on Medical Imaging 218
8 AI in Medical Imaging Informatics: Current Challenges and Future Directions [72] 2020 IEEE Journal of Biomedical and Health Informatics 205
9 FAT-Net: Feature Adaptive Transformers for Automated Skin Lesion Segmentation [67] 2022 Medical Image Analysis 198
10 Skin Lesion Segmentation in Dermoscopic Images with Ensemble Deep Learning Methods [73] 2020 IEEE Access 189
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.