Review

A Bibliometric Review of Deep Learning Approaches in Skin Cancer Research

by Catur Supriyanto 1,2,*, Abu Salam 1,2, Junta Zeniarja 1,2, Danang Wahyu Utomo 1,2, Ika Novita Dewi 1,2, Cinantya Paramita 1,2, Adi Wijaya 3 and Noor Zuraidin Mohd Safar 4

1 Faculty of Computer Science, Universitas Dian Nuswantoro, Semarang 50131, Indonesia
2 Dinus Research Group for AI in Medical Science (DREAMS), Universitas Dian Nuswantoro, Semarang 50131, Indonesia
3 Department of Health Information Management, Universitas Indonesia Maju, Jakarta 12610, Indonesia
4 Faculty of Computer Science and Information Technology, Universiti Tun Hussein Onn Malaysia, Batu Pahat 86400, Malaysia
* Author to whom correspondence should be addressed.
Computation 2025, 13(3), 78; https://doi.org/10.3390/computation13030078
Submission received: 17 December 2024 / Revised: 7 March 2025 / Accepted: 13 March 2025 / Published: 19 March 2025
(This article belongs to the Special Issue Computational Medical Image Analysis—2nd Edition)

Abstract: Early detection of skin cancer is crucial for successful treatment and improved patient outcomes. Medical images play a vital role in this process, serving as the primary data source for both traditional and modern diagnostic approaches. This study provides an overview of the significant role of medical images in skin cancer detection and highlights developments in the use of deep learning for early diagnosis. The scope of this survey includes an in-depth exploration of state-of-the-art deep learning methods, an evaluation of public datasets commonly used for training and validation, and a bibliometric analysis of recent advancements in the field. The survey focuses on publications in the Scopus database from 2019 to 2024. The search string matches articles by abstract, title, and keywords, and includes several public datasets, such as HAM and ISIC, to ensure relevance to the topic. Filters are applied by year, document type, source type, and language. The analysis identified 1697 articles, predominantly journal articles and conference proceedings, and shows that the number of articles has increased over the past five years. This growth is driven not only by developed countries but also by developing countries. Dermatology departments in various hospitals play a significant role in advancing skin cancer detection methods. In addition to identifying publication trends, this study uses the VOSviewer and Bibliometrix applications to reveal underexplored areas and encourage new research directions.

1. Introduction

The use of deep learning methods in medical image classification, segmentation, and detection has seen significant progress, enhancing diagnostic accuracy and efficiency. One innovation includes a semi-supervised method with difference-guided consistency for automatic segmentation of pericardial adipose tissue from cardiac MRI images, which improves the analysis accuracy in cardiac assessment [1]. In addition, foundation models used to enhance magnetic resonance imaging (MRI) images also support subsequent segmentation, registration, and diagnostic tasks, resulting in improved performance across a range of medical imaging processes [2]. A skull-stripping model that can be applied across the lifespan and leverages knowledge from a brain atlas has been proposed to improve the flexibility and accuracy of MRI analysis across age groups [3]. A deep learning-based approach for brain MRI sequence identification has also been developed, using models trained on large multicentric study sets, highlighting the potential of large datasets in refining artificial intelligence capabilities for precise sequence recognition [4]. These innovations demonstrate the profound impact of deep learning on medical imaging, promising improved clinical outcomes and more integrated healthcare delivery.
In the context of skin cancer detection, deep learning methods have enabled the development of more accurate and efficient classification systems. Medical images play a crucial role in enabling deep learning-based early detection systems for skin cancer. High-quality dermatoscopic and clinical images serve as the primary input for training deep learning models, particularly convolutional neural networks (CNNs). These images allow the models to learn and identify subtle patterns, such as changes in color, texture, shape, and asymmetry, which are early indicators of malignant lesions. By leveraging large datasets of medical images, AI systems can accurately classify skin lesions, distinguish between benign and malignant cases, and assist clinicians in detecting skin cancer at an early stage. This ensures faster diagnosis, improved efficiency in clinical workflows, and better patient outcomes. Incorporating medical images into deep learning frameworks enables standardized, objective, and scalable diagnostic tools, which are especially valuable in areas with limited access to dermatologists.
The progress in deep learning for skin cancer classification has seen rapid development in recent years, driven by the ability of convolutional neural networks (CNNs) to identify and classify skin lesions with accuracy comparable to, or even surpassing, that of dermatologists. CNNs are now being applied to segmentation, classification, and diagnosis of skin cancer, including melanoma and non-melanoma, by utilizing large publicly available datasets like HAM10000 and ISIC [5,6]. New models such as ensemble deep learning and attention-based approaches have enhanced performance in skin cancer classification by incorporating integrated features for earlier detection and more accurate diagnosis [7,8]. The application of IoT technology is also beginning to facilitate skin cancer diagnosis in both clinical and non-clinical environments [9]. All these advancements demonstrate the great potential of deep learning in improving the diagnosis and prognosis of skin cancer effectively and efficiently.
However, challenges still exist in the application of deep learning for skin cancer classification. One major challenge is the variation in data caused by differences in skin type, lighting, and image angle, which makes it difficult for models to generalize [10,11]. Class imbalance is also a significant problem, as datasets are often dominated by benign rather than cancerous lesions. This can be addressed with transfer learning and ensemble neural networks, as well as by applying generative adversarial networks (GANs) to generate synthetic images that improve the class distribution, although the use of GANs can introduce bias and domain shift [11,12,13]. In addition, dataset quality and size are important constraints that limit the model’s ability to accurately detect skin cancer; collaboration across institutions to collect larger and more diverse datasets can be an effective solution [14,15]. Model interpretability is also a crucial issue, given that deep learning models are often considered “black boxes”, especially in the medical context. Interpretability techniques such as saliency maps and Grad-CAM can be used to understand and validate model decisions [15]. Another challenge is the generalization of models across different populations: models trained on one population may not perform well on others due to genetic and environmental differences. Domain adaptation can reduce the effects of domain shift and bias in skin lesion datasets, improving model performance on more diverse populations [13]. Addressing these challenges can make deep learning more effective in skin cancer detection, improve diagnostic accuracy, and aid in patient management.
However, despite significant advances in deep learning-based image recognition, skin cancer detection remains a complex challenge, primarily because of the wide variation in the appearance of skin lesions and the difficulty of differentiating benign from malignant lesions. Research on deep learning for skin cancer detection therefore remains important: improved accuracy and efficiency in detection can speed up the diagnosis and treatment of skin cancer patients.
In the past 5 years, several literature reviews have been published on skin cancer detection from different perspectives. There is much interest in using AI to diagnose skin cancer. Many studies have explored its potential to improve early detection and treatment. Brunna [16] highlights efforts to develop, test, and validate AI systems for detecting, diagnosing, and classifying skin cancer in clinical settings. She underlines the potential of AI to transform early diagnosis.
Yet, Brancaccio et al. [17] note key limitations in using AI for diagnosis: many studies are of low quality, and there is a risk of missing melanomas. This underlines the need for rigorous validation and testing of AI models before they are widely adopted in clinical practice. Wei et al. [18] discuss the need to improve AI in clinical settings, highlighting human factors, privacy concerns, and the need for advanced learning methods such as multimodal, incremental, and federated learning. These considerations are essential for ensuring that AI systems can be effectively integrated into clinical workflows and trusted by healthcare professionals.
Furthermore, Celebi [19] addresses the scarcity of studies on content-based image retrieval (CBIR) systems in dermatology, particularly the importance of integrating clinical metadata alongside visual features. This integration is vital for improving AI model accuracy, as shown by the ISIC Archive’s efforts to advance dermoscopy image analysis. In parallel, Stafford et al. [20] highlight the promise of deep learning (DL) in diagnosing non-melanoma skin cancer (NMSC), noting that DL can reach specialist-level sensitivity and specificity. They also note that image perturbations can degrade diagnostic accuracy, a particular concern for smartphone-based diagnostics, which lack the sophisticated imaging hardware of dermatoscopes.
Debelee’s review [21] highlights the need for high-quality image datasets and standardized reporting in skin lesion analysis. The use of datasets like HAM10000 and ISIC has boosted research in this area, but future work must make AI systems more precise and robust at classifying, segmenting, and detecting skin diseases. Chu [22] further explores the increasing use of deep learning in skin cancer diagnosis, showcasing the effectiveness of hybrid models that combine machine learning and deep learning techniques. Despite promising results, many AI smartphone apps lack rigorous validation, leaving their accuracy in doubt. Choi [23] provides a comprehensive analysis of DL algorithms used for diagnosing a range of skin conditions, pointing out that while the median diagnostic accuracy is generally high, the risk of bias and the need for prospective image dataset curation and external validation remain significant challenges.
Hausser [24] highlights the role of explainable AI (XAI) in skin cancer detection, emphasizing the need to systematically evaluate XAI methods to improve the interpretability and reliability of AI-driven diagnoses. XAI shows promise, but its impact on clinical decision-making remains underexplored. Takiddin [25] and Khattar et al. [26] critically analyze existing CAD systems and AI for skin lesions, emphasizing the need for effective preprocessing, segmentation, and feature extraction. These studies highlight ongoing challenges in skin lesion analysis: thick hairs, for example, can obstruct accurate segmentation, and these technologies must be refined for real-time clinical use.
Finally, Painuli et al. [27] survey recent ML and DL advances in cancer, including skin cancer, providing a broader perspective. The review shows that AI can accurately detect cancer and is a valuable tool for assisting medical professionals, but it also highlights the need to improve and test these technologies so that they generalize across patients and clinical settings. Research in this field needs advanced deep learning models, large and diverse training datasets, and rigorous validation and testing to ensure reliability and safety.
The previous review articles focused on systematic literature reviews and scoping reviews, which are distinct methodologies. A systematic literature review typically aims to answer a specific research question by thoroughly synthesizing and critically analyzing existing studies, while a scoping review aims to map the existing literature on a broader topic to identify key concepts, gaps, and evidence types. In contrast, our study employs bibliometric analysis, which focuses on the quantitative assessment of publications, including publication trends, citation patterns, and co-authorship networks. This study analyzes publication trends in deep learning-based skin cancer classification, using data collected from Scopus for the period 2019–2024. The review covers both quantitative and content analysis, and we intend these insights to guide comparable research.

2. Material and Methods

2.1. Goal and Research Question

The purpose of this study is to provide a comprehensive overview of the development of research related to the use of deep learning in skin cancer detection through bibliometric analysis. This study aims to identify publication trends, geographical distribution, authors, journals, and the most productive organizations, as well as the most cited articles and countries. In addition, this study also explores frequently used keywords and their changing trends over time. By answering the formulated research questions, this article not only provides insight into the research landscape in this field but also offers strategic directions for future research.
RQ1:
What is the trend in the number of publications on the use of deep learning in skin cancer classification from year to year?
RQ2:
Which countries contribute the most publications on deep learning for skin cancer classification, and how is this distributed?
RQ3:
Who are the most productive authors in publishing articles on the use of deep learning for skin cancer classification?
RQ4:
Which journals publish the most articles on deep learning for skin cancer classification?
RQ5:
Which countries have the most cited publications in research on deep learning for skin cancer classification?
RQ6:
Which organizations or institutions are the most productive in publishing research on deep learning for skin cancer classification?
RQ7:
What are the most cited articles in research on deep learning for skin cancer?
RQ8:
What are the most frequently used keywords in research on deep learning in skin cancer classification, and how have their usage patterns or trends evolved over time?

2.2. Data Collection

All papers included in this study were sourced from the Scopus database using the following search string:
“image* AND (melanoma OR (“skin cancer”) OR (“skin lesion”)) AND (deep OR convolution* OR cnn) AND (classification OR detection OR segmentation OR recognition OR diagnosis) AND (“Sydney Melanoma” OR “International Skin Imaging Collaboration” OR isic* OR “ham*” OR ph* OR isbi* OR dermnet OR dermis OR dermofit OR dermquest OR bcn* OR skinl2 OR med-node OR msk OR uda OR derm7pt OR “pad-ufes-20”)”, which was applied across article titles, abstracts, and keywords. This search returned 2175 documents. The search was run on 8 November 2024; because the number of publications from 2024 was still increasing at that time, counts for that year should be regarded as incomplete.
Several phases of filtering criteria were then applied. Only English-language journal and conference proceedings sources were used, and the document types considered were articles, conference papers, and reviews. After applying these criteria, 1697 documents published between 2019 and 2024 were obtained. Additionally, bibliometric data such as author names, titles, publication years, source titles, document types, affiliations, publishers, abstracts, author keywords, and indexed keywords were collected. The data were then exported as a comma-separated values (CSV) file for analysis. Figure 1 shows the flowchart of the exclusion process.

2.3. Data Exclusion

It became necessary to eliminate some publications in order to obtain data appropriate for this research. In our literature search, we applied several screening criteria to refine the retrieved articles. First, we limited the publication years to 2019–2024 to ensure relevance to recent advancements. Next, we filtered by document type, selecting only articles, conference papers, reviews, letters, and notes. Additionally, we restricted the source type to include only journals and conference proceedings to ensure high-quality sources. Lastly, we included only articles written in English to maintain consistency in our analysis.
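The screening steps above can be sketched against a Scopus CSV export. This is a minimal illustration, assuming column headers in the style of Scopus's standard export ("Year", "Document Type", "Language of Original Document"); the in-memory records are hypothetical stand-ins for a real export file, and the source-type screen is noted but not shown:

```python
import pandas as pd

def apply_screening(df: pd.DataFrame) -> pd.DataFrame:
    """Apply the year, document-type, and language screens described above.
    (The source-type screen would be applied analogously on its own column.)"""
    mask = (
        df["Year"].between(2019, 2024)
        & df["Document Type"].isin(
            ["Article", "Conference Paper", "Review", "Letter", "Note"]
        )
        & df["Language of Original Document"].eq("English")
    )
    return df[mask]

# Tiny hypothetical records standing in for a real Scopus CSV export.
records = pd.DataFrame({
    "Year": [2018, 2020, 2023],
    "Document Type": ["Article", "Conference Paper", "Editorial"],
    "Language of Original Document": ["English", "English", "English"],
})
print(apply_screening(records)["Year"].tolist())  # [2020]
```

Only the 2020 conference paper survives: the 2018 article fails the year screen and the editorial fails the document-type screen.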

2.4. Data Analysis

We analyzed the data using VOSviewer (version 1.6.18) and Bibliometrix (version 4.3.2). These tools help to visualize the most recent research trends; some data are also tabulated to provide more detailed information.

3. Results

3.1. RQ1: Publication Trends

Based on the publications retained after the exclusion process, the number of publications has increased over the past five years. There were 99 articles published in 2019, and from 2019 to 2024 the number of articles grew by roughly 40% to 60% each year, reaching 444 articles in 2024. The increasing trend is shown in Figure 2. Most of the articles came from journals (1098 articles) and conference papers (556 articles), as seen in Table 1.
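The year-over-year growth quoted above can be computed directly from yearly counts. In this sketch, only the 2019 and 2024 totals (99 and 444) come from the text; the intermediate years are invented placeholders for illustration:

```python
import pandas as pd

# Yearly publication counts: the 2019 and 2024 totals are taken from the
# text; the intermediate years are hypothetical placeholders only.
counts = pd.Series({2019: 99, 2020: 145, 2021: 218, 2022: 327, 2023: 464, 2024: 444})

# Year-over-year growth in percent, the statistic behind the 40-60% figure.
growth_pct = counts.pct_change().mul(100).round(1)
print(growth_pct.dropna().to_dict())
```

With these placeholders, the 2020–2023 growth rates fall in the 40–60% band, while the partial-year 2024 count shows a small decline.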
The growing trend of publications on deep learning for skin cancer is closely linked to the rapid advancements in deep learning methodologies. Numerous innovations in deep learning techniques have transformed the field of medical image analysis, particularly in the detection of skin cancer. Additionally, the enhanced performance of deep learning over traditional machine learning methods plays a key role in this progress. By autonomously learning intricate patterns from image data, deep learning achieves higher accuracy and comes close to human-level performance in identifying early signs of skin cancer, which, in turn, speeds up the diagnostic process and enhances the likelihood of more accurate treatment.

3.2. RQ2: Countries Distribution

India is the country with the most publications on this topic, with 316 articles, followed by China, the US, Saudi Arabia, and Pakistan. Table 2 displays the countries producing the highest number of articles related to skin cancer detection using deep learning. In general, the numbers of developing countries (India, China, Saudi Arabia, Pakistan, Egypt, Turkey, Bangladesh) and developed countries (US, UK, South Korea, Germany, Canada, Australia, Spain, and Italy) in Table 2 are almost equal. However, the top five is dominated by developing countries, namely India, China, Saudi Arabia, and Pakistan.
Developing countries tend to publish a lot about skin cancer detection using deep learning for several reasons. First, advancements in technology and better access to resources such as artificial intelligence (AI) allow for the implementation of deep learning to improve their healthcare systems. Second, the shortage of trained medical personnel, especially in remote areas, encourages the use of automated technologies for diagnosis. Additionally, with the increasing prevalence of skin cancer and limited healthcare facilities, technologies like deep learning become an essential solution for early detection. International collaborations and government support also play a role in advancing this research. Thus, developing countries focus on leveraging advanced technology to address the health challenges they face, particularly in skin cancer detection.

3.3. RQ3: The Most Productive Authors

Table 3 shows the top ten most productive authors based on the number of documents they have published, providing an interesting overview of contributions from authors across different countries. The most productive author is Khan, M.A. from Pakistan, who has published 20 documents. In second place is Akram, T. from Pakistan as well, with 12 documents. They are followed by Sharif, M. from Pakistan, Kaur, R. from New Zealand, and Brinker, T.J. from Germany, each having published nine documents.
In terms of geographical distribution, the majority of the top authors come from Pakistan, with four authors representing the country. Germany follows with three authors, while New Zealand, Saudi Arabia, Norway, and China each have one representative. Regarding document citations, the table also indicates that authors like Khan, M.A. and Akram, T. have a range of document citations, reflecting their significant influence in the research areas they contribute to. The documents they have produced seem to have a notable impact on the development of knowledge as their articles are frequently referenced in other studies.
Country-specific trends are also interesting to note. The dominance of Pakistan in this list may suggest a strong research potential in the field. Germany also shows significant contributions, likely influenced by advanced research infrastructure and strong international collaborations. Meanwhile, other countries, such as New Zealand, Saudi Arabia, Norway, and China, though having fewer representatives, still show notable contributions.
Overall, this analysis shows a concentration of research productivity in Pakistan and Germany, with other countries also making significant contributions. The variation in the number of published documents and the range of citations reflects the depth and quality of research activities in the represented countries.

3.4. RQ4: The Most Productive Journals

Table 4 shows the top 15 journal sources in which authors most frequently publish. IEEE Access ranks first in the number of journal articles published. These journals are indexed in reputable international databases and published by IEEE, Elsevier (ScienceDirect), Springer, the Multidisciplinary Digital Publishing Institute (MDPI), John Wiley, Tech Science Press, and Frontiers Media. Several journals are popular targets for authors because they are highly relevant to the topic of cancer detection, namely, Computers in Biology and Medicine, Biomedical Signal Processing and Control, Computer Methods and Programs in Biomedicine, Cancers, Medical Image Analysis, IEEE Journal of Biomedical and Health Informatics, and Frontiers in Medicine.
Key findings from Table 4 reveal several important trends, such as the dominance of open-access journals published by MDPI (Diagnostics, Sensors, and Cancers), reflecting increasing free access to research. Most of the listed journals focus primarily on biomedical, computer science, and engineering fields, highlighting the strongly interdisciplinary nature of this research, which involves collaboration across various disciplines. Additionally, the significant role of the engineering community, seen in the presence of several IEEE journals, underscores how technology and innovation in engineering contribute to the development of skin cancer detection technologies. Overall, this analysis reflects global trends in research openness and cross-disciplinary collaboration in the effort to detect skin cancer using deep learning.

3.5. RQ5: The Most Cited Countries

The top-10 most frequently cited countries are depicted in Figure 3. China has the highest number of citations (3995), followed by Pakistan (1480), India (1390), the United States (1133), and Germany (1087). Asian countries, particularly China, Pakistan, and India, dominate the list of countries with the most citations. This indicates that these countries have made significant contributions to the field of study being analyzed. Western countries such as the United States, Germany, and the United Kingdom also appear on the list, but their positions are not as dominant as the Asian countries. This suggests that while Western countries remain important players, the influence of Asian countries is increasing. There is a significant disparity between the citation counts of the countries at the top and those at the bottom. China, in particular, has a much higher citation count compared to other countries, demonstrating its substantial influence in this field.
Countries with high citation counts tend to become hubs of research and development in the field. This means that educational and research institutions in these countries produce highly influential works that are globally recognized. A high number of citations is often associated with the quality of publications. In other words, research from countries with high citation counts is likely to be of better quality and more relevant to the advancement of knowledge. Citation counts can also reflect the level of international collaboration. Countries with high citation counts may have more collaborations with researchers from other nations, which can enrich knowledge and accelerate progress in the field. Factors such as research culture, government policies supporting research, and the availability of resources can also influence citation counts.

3.6. RQ6: The Most Productive Organizations

Based on Table 5, dermatology departments dominate the top five affiliations publishing articles on skin cancer detection using deep learning [24,62,63,64,65,66,67]. Four of the top five organizations are from Germany, indicating that Germany has very strong and active research centers in developing skin cancer detection technologies using deep learning. This could be attributed to significant investment in medical research, strong research infrastructure, or robust collaboration between academic institutions and industry in Germany. Most of the organizations listed have dermatology departments, highlighting that this research is closely related to clinical practice and aims to directly improve the diagnosis and treatment of skin cancer. Both universities and research centers like the German Cancer Research Center (DKFZ) have made significant contributions. This suggests that the research involves collaboration between academics and practitioners, which is crucial for producing innovative and real-world relevant research. While dominated by German institutions, the presence of Chitkara University from India indicates that developing countries are also becoming increasingly active in this research. This is a positive sign that deep learning technology is being applied globally to address complex health issues like skin cancer.

3.7. RQ7: The Most Cited Articles

Table 6 shows the top 10 most frequently cited journal articles, which are dominated by articles indexed in IEEE. Most of these journals focus on medical imaging and health, such as IEEE Transactions on Medical Imaging, European Journal of Cancer, Computer Methods and Programs in Biomedicine, IEEE Journal of Biomedical and Health Informatics, and Medical Image Analysis. The discussion of these ten articles can be grouped into several parts, as follows.

3.7.1. Advancements in Semi-Supervised Learning for Medical Image Segmentation

Li et al. [82] highlight how semi-supervised methods outperform traditional supervised approaches by leveraging unlabeled data, improving performance on segmentation tasks. This is reinforced by their transformation-consistent scheme (TCSM), which enhances regularization and leads to a 4.07% improvement in the Jaccard index (JA) and 3.47% in the Dice coefficient (DI). The success of these methods is evident not only in skin lesion segmentation but also in optic disc (OD) segmentation from retinal fundus images and liver segmentation from CT volumes, demonstrating broad applicability in medical imaging.
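The Jaccard index and Dice coefficient reported above are the two standard overlap measures for binary segmentation masks. A minimal sketch on toy masks (not tied to any specific paper's implementation):

```python
import numpy as np

def jaccard_index(pred: np.ndarray, target: np.ndarray) -> float:
    """JA = |P ∩ T| / |P ∪ T| for binary masks."""
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return float(inter / union) if union else 1.0

def dice_coefficient(pred: np.ndarray, target: np.ndarray) -> float:
    """DI = 2|P ∩ T| / (|P| + |T|); equivalently DI = 2·JA / (1 + JA)."""
    inter = np.logical_and(pred, target).sum()
    total = pred.sum() + target.sum()
    return float(2 * inter / total) if total else 1.0

# Toy 2x2 masks: the prediction covers two pixels, the ground truth one.
pred = np.array([[1, 1], [0, 0]], dtype=bool)
target = np.array([[1, 0], [0, 0]], dtype=bool)
print(jaccard_index(pred, target))               # 0.5
print(round(dice_coefficient(pred, target), 3))  # 0.667
```

The identity DI = 2·JA / (1 + JA) holds here: 2 · 0.5 / 1.5 ≈ 0.667, which is why Dice always reads at least as high as Jaccard on the same masks.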

3.7.2. State-of-the-Art Performance in Segmentation and Classification

The Mutual Bootstrapping Deep Convolutional Neural Network (MB-DCNN) model [85] stands out in skin lesion segmentation, with a Jaccard index of 80.4% to 89.4% and substantial improvements over state-of-the-art models in both segmentation and classification. The model incorporates multi-task learning and a coarse lesion mask, which enhances lesion localization. It outperforms fully supervised models and state-of-the-art classification methods with significant performance gains (up to 7.5% in classification and 2.9% in segmentation), particularly on the ISIC-2017 and PH2 datasets.
Similarly, the FAT-Net [87] model utilizes feature-adaptive transformers and memory-efficient strategies to achieve superior accuracy and inference speed in skin lesion segmentation. By capturing both local and global context through its architecture, FAT-Net excels in segmentation tasks. These innovations showcase how modern architectures leverage attention mechanisms to improve receptive fields and performance in medical image analysis.
Another noteworthy development is the application of deep-learning algorithms in classification, such as a CNN that outperforms 136 dermatologists in melanoma image classification [66], achieving a sensitivity of 87.5% and specificity of 86.5%. Such results suggest that AI can play a pivotal role in assisting or even outperforming human experts in specific diagnostic tasks, reinforcing the value of AI-driven solutions in healthcare.
The trend continues with a classification method proposed by Khalid et al. [83] that excels across datasets like DermIS-DermQuest and MED-NODE, achieving accuracy rates of 96.86% and 97.70%, respectively. The method consistently delivers high classification performance without the need for image enhancement, achieving 95.91% accuracy on the ISIC dataset.
The use of balanced and augmented data significantly improves deep learning classification networks [84]. For instance, an integrated diagnostic framework enhanced the classification performance of Inception-ResNet-v2 by 2.72% and 4.71% in F1-score for benign and malignant cases, respectively, on the ISIC 2016 test dataset. Similarly, balanced and segmented training data have led to a 6.38% increase in F1-score compared to imbalanced data, underscoring the importance of data quality in model performance.

3.7.3. Ensemble and Attention-Based Methods Enhancing Accuracy

The ARL-CNN [80] model achieves superior classification by focusing on semantically meaningful regions of skin lesions, outperforming models like ResNet on the ISIC 2017 dataset. The success of attention learning in these models suggests that focusing on critical regions significantly boosts performance in challenging classification tasks.
Ensemble methods, explored in [88], also prove effective in segmentation, improving key metrics like the Jaccard Similarity Index (JSI), Dice Similarity Coefficient, and Matthews Correlation Coefficient (MCC). These methods demonstrate that by combining models and optimizing loss functions (e.g., using momentum and cross-entropy), segmentation performance can be further elevated.
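The ensemble idea and the MCC metric mentioned above can be sketched with a simple pixel-wise majority vote over toy masks. This is an illustrative sketch only, not the specific ensembling scheme of [88]:

```python
import numpy as np

def mcc(pred: np.ndarray, target: np.ndarray) -> float:
    """Matthews correlation coefficient for binary masks (+1 = perfect)."""
    tp = int(np.sum(pred & target))
    tn = int(np.sum(~pred & ~target))
    fp = int(np.sum(pred & ~target))
    fn = int(np.sum(~pred & target))
    denom = float((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)) ** 0.5
    return (tp * tn - fp * fn) / denom if denom else 0.0

def majority_vote(masks: list) -> np.ndarray:
    """Pixel-wise majority vote over binary masks from several models."""
    stacked = np.stack(masks)
    return stacked.sum(axis=0) > len(masks) / 2

# Toy 1-D "masks" from three hypothetical segmentation models.
m1 = np.array([1, 1, 0, 0], dtype=bool)
m2 = np.array([1, 0, 0, 0], dtype=bool)
m3 = np.array([1, 1, 1, 0], dtype=bool)
target = np.array([1, 1, 0, 0], dtype=bool)

ensemble = majority_vote([m1, m2, m3])
print(ensemble.tolist())      # [True, True, False, False]
print(mcc(ensemble, target))  # 1.0
```

Here the vote corrects both individual errors (m2's false negative and m3's false positive), which is exactly the effect ensembling relies on.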

3.7.4. Impact Beyond Skin Lesions

Beyond skin lesion segmentation, the innovations discussed in these sources extend to radiology, digital pathology, and genomic data integration. Techniques like the CA-Net [81] architecture emphasize the potential for explainability and accuracy, integrating spatial, channel, and scale attention modules, which can be applied to multi-organ segmentation and other imaging tasks.
The field is moving towards integrating imaging with clinical and genomic data, a trend emphasized by the Society for Imaging Informatics in Medicine [86]. This cross-disciplinary approach will lead to improvements in precision medicine, leveraging advancements in segmentation algorithms and the use of semi-supervised learning for reducing annotation burdens.
The combination of semi-supervised methods, attention mechanisms, ensemble techniques, and transformer-based architectures has driven recent state-of-the-art performance in medical image segmentation and classification. Models like MB-DCNN, FAT-Net, and ARL-CNN demonstrate the power of integrating both labeled and unlabeled data, advanced regularization schemes, and context-aware architectures. These innovations have resulted in measurable improvements in accuracy, specificity, and sensitivity, significantly advancing the capabilities of deep learning in medical imaging tasks across various domains.

3.8. RQ8: Keywords and Research Trends

The overlay visualization based on the authors' keywords is displayed in Figure 4. Each node represents a keyword, and its size indicates how frequently that keyword is used. Prominent terms such as deep learning, skin cancer, and medical image segmentation show that the keywords most commonly used by authors in this field are connected to machine learning and medical image processing; larger nodes, such as skin cancer and deep learning, mark important areas of study.
Co-occurrence relationships are represented by the edges (lines joining nodes), which reflect how often two keywords appear together in the same articles. The more strongly two nodes are connected, the more frequently their topics are discussed together. For instance, there is a strong association between deep learning, skin cancer, dermoscopic images, and medical image segmentation.
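The edge weights in such a co-occurrence map are simple pair counts over the corpus. As a hedged illustration of the counting step (the article keyword sets below are invented, not drawn from the analyzed corpus):

```python
from itertools import combinations
from collections import Counter

def cooccurrence_counts(articles):
    """Count, for each unordered keyword pair, how many articles list both."""
    counts = Counter()
    for keywords in articles:
        # Sorting makes each pair a canonical (alphabetical) tuple key.
        for pair in combinations(sorted(set(keywords)), 2):
            counts[pair] += 1
    return counts

articles = [
    {"deep learning", "skin cancer", "dermoscopy"},
    {"deep learning", "skin cancer"},
    {"deep learning", "segmentation"},
]
counts = cooccurrence_counts(articles)
print(counts[("deep learning", "skin cancer")])  # 2
```

Tools like VOSviewer build the full keyword network from exactly this kind of pairwise count, then lay out the nodes so that strongly co-occurring keywords sit close together.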
The color gradient, ranging from dark blue to yellow, shows how these keywords have changed over time. Lighter nodes (yellow/green) indicate more recent terms that have emerged in the field, whereas darker nodes (blue/purple) represent keywords that appeared earlier in the literature. For example, monkeypox and melanoma classification [89] are relatively new entries, as evidenced by their lighter colors.
Figure 5 shows a thematic map, which categorizes research themes based on their development degree (density) and relevance degree (centrality). This type of map is commonly used in bibliometric analysis to visualize the structure and evolution of research topics in a given field.
Keywords appear in only two quadrants: Niche Themes and Basic Themes. The upper left quadrant, Niche Themes, contains topics that are already well developed but less central than the main themes. For example, topics such as dilated convolution [57,90,91,92], semi-supervised learning [93,94], and basal cell carcinoma [89,95] fall in this area, indicating that they are specific, focused themes that are less connected to the broader research agenda. The lower right quadrant, Basic Themes, includes topics that are highly relevant but not yet fully developed. These themes, such as support vector machine [96,97], skin disease, vgg16 [98,99], cnn, medical image segmentation, and transfer learning [100,101,102], form the foundation of research in the field and are still evolving.
In the thematic map, the size of each circle represents the intensity of research activity within each theme, which can be interpreted as the frequency of occurrence or prominence in the analyzed literature. Themes with larger circles, like “Deep learning, CNN, skin cancer, melanoma”, represent well-established and extensively researched areas categorized as Basic Themes. In contrast, smaller circles, such as “Dilated convolution” and “Semi-supervised learning”, highlight more specialized or emerging research topics.
Based on the presented thematic map, several promising research directions in skin cancer detection using deep learning can be identified. First, the utilization of unlabeled data through semi-supervised learning approaches opens new opportunities to train deep learning models with fewer labeled data, which are often expensive and difficult to obtain. Second, the integration of multimodal data, such as combining clinical image data with genetic information, can enhance diagnostic accuracy and provide a more comprehensive understanding of disease progression. Third, although melanoma is currently the primary focus of research, expanding the scope to include other types of skin cancer, such as basal cell carcinoma and squamous cell carcinoma, would be highly beneficial for improving early detection across a wider spectrum. Finally, close collaboration with clinicians is crucial to ensure that research findings can be translated into effective clinical solutions that are adoptable in everyday practice.

4. Conclusions

A comprehensive search of the Scopus database was conducted to identify relevant publications, which were subsequently analyzed through both quantitative methods and content analysis. Over the past five years, there has been a notable and consistent increase in the volume of published articles, reflecting the field’s ongoing expansion and its potential future impact. Contributions to the literature are made not only by scholars in computer science but also by professionals in the medical field. The leading countries in terms of publication output are China, India, and the United States. Prominent research areas that warrant further exploration include dilated convolution, attention mechanisms, and medical image segmentation. This study concludes that deep learning techniques have shown promising results in skin cancer detection, and the field is rapidly growing. However, there are still challenges and limitations that need to be addressed, such as the need for high-quality image datasets and standardized reporting.
A potential direction for future research could involve investigating key areas such as enhancing the generalization capabilities of deep learning models and ensuring the privacy protection of medical data, which are essential to guarantee the safe and effective deployment of AI in healthcare. A deeper exploration of specific methodologies, such as comparing the performance of different deep learning models (e.g., CNNs, GANs, and autoencoders), would be valuable to identify the most effective approaches for skin cancer classification. Additionally, analyzing the quality and clinical relevance of publications could bridge the gap between experimental research and real-world applications. Expanding the scope to include an evaluation of global disparities between developed and developing nations in terms of research output and access to technology could uncover important challenges and opportunities.
Moreover, it is crucial to analyze the explainability aspects of deep learning models, as interpretability remains a significant challenge in deploying these systems in clinical practice. Future studies could focus on integrating explainable AI (XAI) techniques to ensure that the predictions made by deep learning models are transparent and interpretable by medical professionals. Such efforts could foster trust and facilitate adoption in real-world clinical settings. Additionally, research into the real-time implementation of deep learning applications is essential to address latency and computational efficiency challenges, enabling these technologies to be used effectively in dynamic, high-pressure clinical environments.
To further enrich the understanding of this field, conducting a scoping review could help map the broader landscape of research, identifying gaps and underexplored areas that warrant further investigation. Similarly, a conceptual review could deepen the theoretical understanding of the application of deep learning in skin cancer detection, exploring frameworks and models that drive advancements. By addressing these aspects, the research can contribute to the development of more innovative, interpretable, and real-time applicable solutions for skin cancer detection, ultimately improving patient outcomes and advancing the field.

Author Contributions

Conceptualization, C.S.; Data curation, A.S. and J.Z.; Formal analysis, A.W.; Investigation, C.S., D.W.U. and A.W.; Methodology, C.S.; Resources, A.S.; Software, A.S., C.P. and J.Z.; Supervision, A.W. and N.Z.M.S.; Validation, C.S., D.W.U., I.N.D., C.P. and A.W.; Writing—original draft, C.S.; Writing—review and editing, C.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by DRTPM DIKTI grant number 108/E5/PG.02.00.PL/2024.

Institutional Review Board Statement

Not applicable.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Zhang, X.; Zhou, S.; Li, B.; Wang, Y.; Lu, K.; Liu, W.; Wang, Z. Automatic segmentation of pericardial adipose tissue from cardiac MR images via semi-supervised method with difference-guided consistency. Med. Phys. 2025, 52, 1679–1692. [Google Scholar] [CrossRef] [PubMed]
  2. Sun, Y.; Wang, L.; Li, G.; Lin, W.; Wang, L. A foundation model for enhancing magnetic resonance images and downstream segmentation, registration and diagnostic tasks. Nat. Biomed. Eng. 2024. [Google Scholar] [CrossRef] [PubMed]
  3. Wang, L.; Sun, Y.; Seidlitz, J.; Bethlehem, R.A.I.; Alexander-Bloch, A.; Dorfschmidt, L.; Li, G.; Elison, J.T.; Lin, W.; Wang, L. A lifespan-generalizable skull-stripping model for magnetic resonance images that leverages prior knowledge from brain atlases. Nat. Biomed. Eng. 2025. [Google Scholar] [CrossRef] [PubMed]
  4. Mahmutoglu, M.A.; Preetha, C.J.; Meredig, H.; Tonn, J.C.; Weller, M.; Wick, W.; Bendszus, M.; Brugnara, G.; Vollmuth, P. Deep Learning–based Identification of Brain MRI Sequences Using a Model Trained on Large Multicentric Study Cohorts. Radiol. Artif. Intell. 2024, 6, e230095. [Google Scholar] [CrossRef]
  5. Sharma, A.K.; Tiwari, S.; Aggarwal, G.; Goenka, N.; Kumar, A.; Chakrabarti, P.; Chakrabarti, T.; Gono, R.; Leonowicz, Z.; Jasiński, M. Dermatologist-Level Classification of Skin Cancer Using Cascaded Ensembling of Convolutional Neural Network and Handcrafted Features Based Deep Neural Network. IEEE Access 2022, 10, 17920–17932. [Google Scholar] [CrossRef]
  6. Brinker, T.J.; Hekler, A.; Utikal, J.S.; Grabe, N.; Schadendorf, D.; Klode, J.; Berking, C.; Steeb, T.; Enk, A.H.; Von Kalle, C. Skin Cancer Classification Using Convolutional Neural Networks: Systematic Review. J. Med. Internet Res. 2018, 20, e11936. [Google Scholar] [CrossRef]
  7. Jinnai, S.; Yamazaki, N.; Hirano, Y.; Sugawara, Y.; Ohe, Y.; Hamamoto, R. The Development of a Skin Cancer Classification System for Pigmented Skin Lesions Using Deep Learning. Biomolecules 2020, 10, 1123. [Google Scholar] [CrossRef]
  8. Ravi, V. Attention Cost-Sensitive Deep Learning-Based Approach for Skin Cancer Detection and Classification. Cancers 2022, 14, 5872. [Google Scholar] [CrossRef]
  9. Obayya, M.; Arasi, M.A.; Almalki, N.S.; Alotaibi, S.S.; Al Sadig, M.; Sayed, A. Internet of Things-Assisted Smart Skin Cancer Detection Using Metaheuristics with Deep Learning Model. Cancers 2023, 15, 5016. [Google Scholar] [CrossRef]
  10. Gouda, W.; Sama, N.U.; Al-Waakid, G.; Humayun, M.; Jhanjhi, N.Z. Detection of Skin Cancer Based on Skin Lesion Images Using Deep Learning. Healthcare 2022, 10, 1183. [Google Scholar] [CrossRef]
  11. Alam, T.M.; Shaukat, K.; Khan, W.A.; Hameed, I.A.; Almuqren, L.A.; Raza, M.A.; Aslam, M.; Luo, S. An Efficient Deep Learning-Based Skin Cancer Classifier for an Imbalanced Dataset. Diagnostics 2022, 12, 2115. [Google Scholar] [CrossRef]
  12. Chaturvedi, S.S.; Tembhurne, J.V.; Diwan, T. A multi-class skin Cancer classification using deep convolutional neural networks. Multimed. Tools Appl. 2020, 79, 28477–28498. [Google Scholar] [CrossRef]
  13. Gilani, S.Q.; Umair, M.; Naqvi, M.; Marques, O.; Kim, H.C. Adversarial Training Based Domain Adaptation of Skin Cancer Images. Life 2024, 14, 1009. [Google Scholar] [CrossRef] [PubMed]
  14. Tembhurne, J.V.; Hebbar, N.; Patil, H.Y.; Diwan, T. Skin cancer detection using ensemble of machine learning and deep learning techniques. Multimed. Tools Appl. 2023, 82, 27501–27524. [Google Scholar] [CrossRef]
  15. Mazhar, T.; Haq, I.; Ditta, A.; Mohsan, S.A.H.; Rehman, F.; Zafar, I.; Gansau, J.A.; Goh, L.P.W. The Role of Machine Learning and Deep Learning Approaches for the Detection of Skin Cancer. Healthcare 2023, 11, 415. [Google Scholar] [CrossRef]
  16. Furriel, B.C.R.S.; Oliveira, B.D.; Prôa, R.; Paiva, J.Q.; Loureiro, R.M.; Calixto, W.P.; Reis, M.R.C.; Giavina-Bianchi, M. Artificial intelligence for skin cancer detection and classification for clinical environment: A systematic review. Front. Med. 2024, 10, 1305954. [Google Scholar] [CrossRef]
  17. Brancaccio, G.; Balato, A.; Malvehy, J.; Puig, S.; Argenziano, G.; Kittler, H. Artificial Intelligence in Skin Cancer Diagnosis: A Reality Check. J. Investig. Dermatol. 2024, 144, 492–499. [Google Scholar] [CrossRef]
  18. Wei, M.L.; Tada, M.; So, A.; Torres, R. Artificial intelligence and skin cancer. Front. Med. 2024, 11, 1331895. [Google Scholar] [CrossRef]
  19. Celebi, M.E.; Codella, N.; Halpern, A. Dermoscopy Image Analysis: Overview and Future Directions. IEEE J. Biomed. Health Inform. 2019, 23, 474–478. [Google Scholar] [CrossRef]
  20. Stafford, H.; Buell, J.; Chiang, E.; Ramesh, U.; Migden, M.; Nagarajan, P.; Amit, M.; Yaniv, D. Non-Melanoma Skin Cancer Detection in the Age of Advanced Technology: A Review. Cancers 2023, 15, 3094. [Google Scholar] [CrossRef]
  21. Debelee, T.G. Skin Lesion Classification and Detection Using Machine Learning Techniques: A Systematic Review. Diagnostics 2023, 13, 3147. [Google Scholar] [CrossRef] [PubMed]
  22. Chu, Y.S.; An, H.G.; Oh, B.H.; Yang, S. Artificial Intelligence in Cutaneous Oncology. Front. Med. 2020, 7, 318. [Google Scholar] [CrossRef] [PubMed]
  23. Choy, S.P.; Kim, B.J.; Paolino, A.; Tan, W.R.; Lim, S.M.L.; Seo, J.; Tan, S.P.; Francis, L.; Tsakok, T.; Simpson, M.; et al. Systematic review of deep learning image analyses for the diagnosis and monitoring of skin disease. NPJ Digit. Med. 2023, 6, 180. [Google Scholar] [CrossRef] [PubMed]
  24. Hauser, K.; Kurz, A.; Haggenmüller, S.; Maron, R.C.; Von Kalle, C.; Utikal, J.S.; Meier, F.; Hobelsberger, S.; Gellrich, F.F.; Sergon, M.; et al. Explainable artificial intelligence in skin cancer recognition: A systematic review. Eur. J. Cancer 2022, 167, 54–69. [Google Scholar] [CrossRef]
  25. Takiddin, A.; Schneider, J.; Yang, Y.; Abd-Alrazaq, A.; Househ, M. Artificial Intelligence for Skin Cancer Detection: Scoping Review. J. Med. Internet Res. 2021, 23, e22934. [Google Scholar] [CrossRef]
  26. Khattar, S.; Kaur, R. Computer assisted diagnosis of skin cancer: A survey and future recommendations. Comput. Electr. Eng. 2022, 104, 108431. [Google Scholar] [CrossRef]
  27. Painuli, D.; Bhardwaj, S.; köse, U. Recent advancement in cancer diagnosis using machine learning and deep learning techniques: A comprehensive review. Comput. Biol. Med. 2022, 146, 105580. [Google Scholar] [CrossRef]
  28. Attique Khan, M.; Sharif, M.; Akram, T.; Kadry, S.; Hsu, C.H. A two-stream deep neural network-based intelligent system for complex skin cancer types classification. Int. J. Intell. Syst. 2022, 37, 10621–10649. [Google Scholar] [CrossRef]
  29. Zahoor, S.; Lali, I.U.; Khan, M.A.; Javed, K.; Mehmood, W. Breast cancer detection and classification using traditional computer vision techniques: A comprehensive review. Curr. Med. Imaging 2020, 16, 1187–1200. [Google Scholar] [CrossRef]
  30. Malik, S.; Akram, T.; Awais, M.; Khan, M.A.; Hadjouni, M.; Elmannai, H.; Alasiry, A.; Marzougui, M.; Tariq, U. An Improved Skin Lesion Boundary Estimation for Enhanced-Intensity Images Using Hybrid Metaheuristics. Diagnostics 2023, 13, 1285. [Google Scholar] [CrossRef]
  31. Saba, T.; Khan, M.A.; Rehman, A.; Marie-Sainte, S.L. Region Extraction and Classification of Skin Cancer: A Heterogeneous framework of Deep CNN Features Fusion and Reduction. J. Med. Syst. 2019, 43, 289. [Google Scholar] [CrossRef] [PubMed]
  32. Bibi, S.; Khan, M.A.; Shah, J.H.; Damaševičius, R.; Alasiry, A.; Marzougui, M.; Alhaisoni, M.; Masood, A. MSRNet: Multiclass Skin Lesion Recognition Using Additional Residual Block Based Fine-Tuned Deep Models Information Fusion and Best Feature Selection. Diagnostics 2023, 13, 63. [Google Scholar] [CrossRef] [PubMed]
  33. Arshad, M.; Khan, M.A.; Tariq, U.; Armghan, A.; Alenezi, F.; Younus Javed, M.; Aslam, S.M.; Kadry, S. A Computer-Aided Diagnosis System Using Deep Learning for Multiclass Skin Lesion Classification. Comput. Intell. Neurosci. 2021, 2021, 9619079. [Google Scholar] [CrossRef] [PubMed]
  34. Nawaz, M.; Nazir, T.; Khan, M.A.; Alhaisoni, M.; Kim, J.Y.; Nam, Y. MSeg-Net: A Melanoma Mole Segmentation Network Using CornerNet and Fuzzy K -Means Clustering. Comput. Math. Methods Med. 2022, 2022, 7502504. [Google Scholar] [CrossRef]
  35. Nawaz, M.; Nazir, T.; Masood, M.; Ali, F.; Khan, M.A.; Tariq, U.; Sahar, N.; Damaševičius, R. Melanoma segmentation: A framework of improved DenseNet77 and UNET convolutional neural network. Int. J. Imaging Syst. Technol. 2022, 32, 2137–2153. [Google Scholar] [CrossRef]
  36. Hussain, M.; Khan, M.A.; Damaševičius, R.; Alasiry, A.; Marzougui, M.; Alhaisoni, M.; Masood, A. SkinNet-INIO: Multiclass Skin Lesion Localization and Classification Using Fusion-Assisted Deep Neural Networks and Improved Nature-Inspired Optimization Algorithm. Diagnostics 2023, 13, 2869. [Google Scholar] [CrossRef]
  37. Ahmad, N.; Shah, J.H.; Khan, M.A.; Baili, J.; Ansari, G.J.; Tariq, U.; Kim, Y.J.; Cha, J.H. A novel framework of multiclass skin lesion recognition from dermoscopic images using deep learning and explainable AI. Front. Oncol. 2023, 13, 1151257. [Google Scholar] [CrossRef]
  38. Iqbal, A.; Sharif, M.; Khan, M.A.; Nisar, W.; Alhaisoni, M. FF-UNet: A U-Shaped Deep Convolutional Neural Network for Multimodal Biomedical Image Segmentation. Cogn. Comput. 2022, 14, 1287–1302. [Google Scholar] [CrossRef]
  39. Khan, M.A.; Muhammad, K.; Sharif, M.; Akram, T.; Albuquerque, V.H.C.D. Multi-Class Skin Lesion Detection and Classification via Teledermatology. IEEE J. Biomed. Health Inform. 2021, 25, 4267–4275. [Google Scholar] [CrossRef]
  40. Khan, M.A.; Sharif, M.; Akram, T.; Damaševičius, R.; Maskeliūnas, R. Skin lesion segmentation and multiclass classification using deep learning features and improved moth flame optimization. Diagnostics 2021, 11, 811. [Google Scholar] [CrossRef]
  41. Khan, M.A.; Akram, T.; Zhang, Y.D.; Sharif, M. Attributes based skin lesion detection and recognition: A mask RCNN and transfer learning-based deep learning framework. Pattern Recognit. Lett. 2021, 143, 58–66. [Google Scholar] [CrossRef]
  42. Khan, M.A.; Akram, T.; Sharif, M.; Javed, K.; Rashid, M.; Bukhari, S.A.C. An integrated framework of skin lesion detection and recognition through saliency method and optimal deep neural network features selection. Neural Comput. Appl. 2020, 32, 15929–15948. [Google Scholar] [CrossRef]
  43. Khan, M.A.; Sharif, M.; Akram, T.; Bukhari, S.A.C.; Nayak, R.S. Developed Newton-Raphson based deep features selection framework for skin lesion recognition. Pattern Recognit. Lett. 2020, 129, 293–303. [Google Scholar] [CrossRef]
  44. Afza, F.; Sharif, M.; Khan, M.A.; Tariq, U.; Yong, H.S.; Cha, J. Multiclass Skin Lesion Classification Using Hybrid Deep Features Selection and Extreme Learning Machine. Sensors 2022, 22, 799. [Google Scholar] [CrossRef]
  45. Afza, F.; Sharif, M.; Mittal, M.; Khan, M.A.; Jude Hemanth, D. A hierarchical three-step superpixels and deep learning framework for skin lesion classification. Methods 2022, 202, 88–102. [Google Scholar] [CrossRef]
  46. Khan, M.A.; Akram, T.; Zhang, Y.D.; Alhaisoni, M.; Al Hejaili, A.; Shaban, K.A.; Tariq, U.; Zayyan, M.H. SkinNet-ENDO: Multiclass skin lesion recognition using deep neural network and Entropy-Normal distribution optimization algorithm with ELM. Int. J. Imaging Syst. Technol. 2023, 33, 1275–1292. [Google Scholar] [CrossRef]
  47. Khan, M.A.; Akram, T.; Sharif, M.; Kadry, S.; Nam, Y. Computer Decision Support System for Skin Cancer Localization and Classification. Comput. Mater. Contin. 2021, 68, 1041–1064. [Google Scholar] [CrossRef]
  48. Malik, S.; Akram, T.; Ashraf, I.; Rafiullah, M.; Ullah, M.; Tanveer, J. A Hybrid Preprocessor DE-ABC for Efficient Skin-Lesion Segmentation with Improved Contrast. Diagnostics 2022, 12, 2625. [Google Scholar] [CrossRef]
  49. Malik, S.; Islam, S.M.R.; Akram, T.; Naqvi, S.R.; Alghamdi, N.S.; Baryannis, G. A novel hybrid meta-heuristic contrast stretching technique for improved skin lesion segmentation. Comput. Biol. Med. 2022, 151, 106222. [Google Scholar] [CrossRef]
  50. Akram, T.; Lodhi, H.M.J.; Naqvi, S.R.; Naeem, S.; Alhaisoni, M.; Ali, M.; Haider, S.A.; Qadri, N.N. A multilevel features selection framework for skin lesion classification. Hum.-Centric Comput. Inf. Sci. 2020, 10, 12. [Google Scholar] [CrossRef]
  51. Anjum, M.A.; Amin, J.; Sharif, M.; Khan, H.U.; Malik, M.S.A.; Kadry, S. Deep Semantic Segmentation and Multi-Class Skin Lesion Classification Based on Convolutional Neural Network. IEEE Access 2020, 8, 129668–129678. [Google Scholar] [CrossRef]
  52. Kaur, R.; Gholamhosseini, H.; Sinha, R. Synthetic Images Generation Using Conditional Generative Adversarial Network for Skin Cancer Classification. In Proceedings of the TENCON 2021—2021 IEEE Region 10 Conference (TENCON), Auckland, New Zealand, 7–10 December 2021; pp. 381–386. [Google Scholar] [CrossRef]
  53. Ali, A.A.; Taha, R.E.; Kaur, R.; Afifi, S.M. Multi-Class Classification of Melanoma on an Edge Device. In Proceedings of the 2023 International Conference on Microelectronics (ICM), Abu Dhabi, United Arab Emirates, 17–20 December 2023; pp. 46–51. [Google Scholar] [CrossRef]
  54. Dawod, M.I.; Taha, R.; Kaur, R.; Afifi, S.M. Real-time Classification of Skin Cancer on an Edge Device. In Proceedings of the 2023 2nd International Conference on Smart Cities 4.0, Cairo, Egypt, 22–24 October 2023; pp. 184–191. [Google Scholar] [CrossRef]
  55. Kaur, R.; Gholamhosseini, H.; Sinha, R.; Lindén, M. Melanoma Classification Using a Novel Deep Convolutional Neural Network with Dermoscopic Images. Sensors 2022, 22, 1134. [Google Scholar] [CrossRef] [PubMed]
  56. Kaur, R.; GholamHosseini, H.; Sinha, R. Hairlines removal and low contrast enhancement of melanoma skin images using convolutional neural network with aggregation of contextual information. Biomed. Signal Process. Control 2022, 76, 103653. [Google Scholar] [CrossRef]
  57. Kaur, R.; GholamHosseini, H.; Sinha, R.; Lindén, M. Automatic lesion segmentation using atrous convolutional deep neural networks in dermoscopic skin cancer images. BMC Med. Imaging 2022, 22, 103. [Google Scholar] [CrossRef]
  58. Kaur, R.; Hosseini, H.G.; Sinha, R. Lesion Border Detection of Skin Cancer Images Using Deep Fully Convolutional Neural Network with Customized Weights. In Proceedings of the 2021 43rd Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC), Virtual, 1–5 November 2021; pp. 3035–3038. [Google Scholar] [CrossRef]
  59. Kaur, R.; GholamHosseini, H.; Sinha, R. Skin lesion segmentation using an improved framework of encoder-decoder based convolutional neural network. Int. J. Imaging Syst. Technol. 2022, 32, 1143–1158. [Google Scholar] [CrossRef]
  60. Kaur, R.; Gholamhosseini, H. Analyzing the Impact of Image Denoising and Segmentation on Melanoma Classification Using Convolutional Neural Networks. In Proceedings of the 2023 45th Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC), Sydney, Australia, 24–27 July 2023. [Google Scholar] [CrossRef]
  61. Fogelberg, K.; Chamarthi, S.; Maron, R.C.; Niebling, J.; Brinker, T.J. Domain shifts in dermoscopic skin cancer datasets: Evaluation of essential limitations for clinical translation. New Biotechnol. 2023, 76, 106–117. [Google Scholar] [CrossRef]
  62. Schneider, L.; Wies, C.; Krieghoff-Henning, E.I.; Bucher, T.C.; Utikal, J.S.; Schadendorf, D.; Brinker, T.J. Multimodal integration of image, epigenetic and clinical data to predict BRAF mutation status in melanoma. Eur. J. Cancer 2023, 183, 131–138. [Google Scholar] [CrossRef]
  63. Brinker, T.J.; Hekler, A.; Enk, A.H.; Von Kalle, C. Enhanced classifier training to improve precision of a convolutional neural network to identify images of skin lesions. PLoS ONE 2019, 14, e0218713. [Google Scholar] [CrossRef]
  64. Maron, R.C.; Hekler, A.; Krieghoff-Henning, E.; Schmitt, M.; Schlager, J.G.; Utikal, J.S.; Brinker, T.J. Reducing the Impact of Confounding Factors on Skin Cancer Classification via Image Segmentation: Technical Model Study. J. Med. Internet Res. 2021, 23, e21695. [Google Scholar] [CrossRef]
  65. Brinker, T.J.; Hekler, A.; Hauschild, A.; Berking, C.; Schilling, B.; Enk, A.H.; Haferkamp, S.; Karoglan, A.; Von Kalle, C.; Weichenthal, M.; et al. Comparing artificial intelligence algorithms to 157 German dermatologists: The melanoma classification benchmark. Eur. J. Cancer 2019, 111, 30–37. [Google Scholar] [CrossRef]
  66. Brinker, T.J.; Hekler, A.; Enk, A.H.; Klode, J.; Hauschild, A.; Berking, C.; Schilling, B.; Haferkamp, S.; Schadendorf, D.; Fröhling, S.; et al. A convolutional neural network trained with dermoscopic images performed on par with 145 dermatologists in a clinical melanoma image classification task. Eur. J. Cancer 2019, 111, 148–154. [Google Scholar] [CrossRef]
  67. Brinker, T.J.; Hekler, A.; Enk, A.H.; Klode, J.; Hauschild, A.; Berking, C.; Schilling, B.; Haferkamp, S.; Schadendorf, D.; Holland-Letz, T.; et al. Deep learning outperformed 136 of 157 dermatologists in a head-to-head dermoscopic melanoma image classification task. Eur. J. Cancer 2019, 113, 47–54. [Google Scholar] [CrossRef]
  68. Bibi, A.; Khan, M.A.; Javed, M.Y.; Tariq, U.; Kang, B.G.; Nam, Y.; Mostafa, R.R.; Sakr, R.H. Skin lesion segmentation and classification using conventional and deep learning based framework. Comput. Mater. Contin. 2022, 71, 2477–2495. [Google Scholar] [CrossRef]
  69. Kalyani, K.; Althubiti, S.A.; Ahmed, M.A.; Lydia, E.L.; Kadry, S.; Han, N.; Nam, Y. Arithmetic Optimization with Ensemble Deep Transfer Learning Based Melanoma Classification. Comput. Mater. Contin. 2023, 75, 149–164. [Google Scholar] [CrossRef]
  70. Kadry, S.; Taniar, D.; Damasevicius, R.; Rajinikanth, V.; Lawal, I.A. Extraction of Abnormal Skin Lesion from Dermoscopy Image using VGG-SegNet. In Proceedings of the 2021 Seventh International Conference on Bio Signals, Images, and Instrumentation (ICBSII), Chennai, India, 25–27 March 2021. [Google Scholar] [CrossRef]
  71. Cheng, X.; Kadry, S.; Meqdad, M.N.; Crespo, R.G. CNN supported framework for automatic extraction and evaluation of dermoscopy images. J. Supercomput. 2022, 78, 17114–17131. [Google Scholar] [CrossRef]
  72. Jiang, Y.; Dong, J.; Cheng, T.; Zhang, Y.; Lin, X.; Liang, J. iU-Net: A hybrid structured network with a novel feature fusion approach for medical image segmentation. Biodata Min. 2023, 16, 5. [Google Scholar] [CrossRef]
  73. Zhang, Z.; Jiang, Y.; Qiao, H.; Wang, M.; Yan, W.; Chen, J. SIL-Net: A Semi-Isotropic L-shaped network for dermoscopic image segmentation. Comput. Biol. Med. 2022, 150, 106146. [Google Scholar] [CrossRef]
  74. Jiang, Y.; Qiao, H.; Zhang, Z.; Wang, M.; Yan, W.; Chen, J. MDSC-Net: A multi-scale depthwise separable convolutional neural network for skin lesion segmentation. IET Image Process. 2023, 17, 3713–3727. [Google Scholar] [CrossRef]
  75. Jiang, Y.; Cao, S.; Tao, S.; Zhang, H. Skin Lesion Segmentation Based on Multi-Scale Attention Convolutional Neural Network. IEEE Access 2020, 8, 122811–122825. [Google Scholar] [CrossRef]
  76. Jiang, Y.; Dong, J.; Zhang, Y.; Cheng, T.; Lin, X.; Liang, J. PCF-Net: Position and context information fusion attention convolutional neural network for skin lesion segmentation. Heliyon 2023, 9, 13942. [Google Scholar] [CrossRef]
  77. Jiang, Y.; Cheng, T.; Dong, J.; Liang, J.; Zhang, Y.; Lin, X.; Yao, H. Dermoscopic image segmentation based on Pyramid Residual Attention Module. PLoS ONE 2022, 17, e0267380. [Google Scholar] [CrossRef]
  78. Jiang, Y.; Wang, M.; Zhang, Z.; Qiao, H.; Yan, W.; Chen, J. CTDS-Net:CNN-Transformer Fusion Network for Dermoscopic Image Segmentation. In Proceedings of the 2023 5th International Conference on Robotics and Computer Vision (ICRCV), Nanjing, China, 15–17 September 2023; pp. 141–150. [Google Scholar] [CrossRef]
  79. Maron, R.C.; Haggenmüller, S.; von Kalle, C.; Utikal, J.S.; Meier, F.; Gellrich, F.F.; Hauschild, A.; French, L.E.; Schlaak, M.; Ghoreschi, K.; et al. Robustness of convolutional neural networks in recognition of pigmented skin lesions. Eur. J. Cancer 2021, 145, 81–91. [Google Scholar] [CrossRef]
  80. Zhang, J.; Xie, Y.; Xia, Y.; Shen, C. Attention Residual Learning for Skin Lesion Classification. IEEE Trans. Med. Imaging 2019, 38, 2092–2103. [Google Scholar] [CrossRef]
  81. Gu, R.; Wang, G.; Song, T.; Huang, R.; Aertsen, M.; Deprest, J.; Ourselin, S.; Vercauteren, T.; Zhang, S. CA-Net: Comprehensive Attention Convolutional Neural Networks for Explainable Medical Image Segmentation. IEEE Trans. Med. Imaging 2021, 40, 699–711. [Google Scholar] [CrossRef]
  82. Li, X.; Yu, L.; Chen, H.; Fu, C.W.; Xing, L.; Heng, P.A. Transformation-Consistent Self-Ensembling Model for Semisupervised Medical Image Segmentation. IEEE Trans. Neural Netw. Learn. Syst. 2021, 32, 523–534. [Google Scholar] [CrossRef]
  83. Hosny, K.M.; Kassem, M.A.; Foaud, M.M. Classification of skin lesions using transfer learning and augmentation with Alex-net. PLoS ONE 2019, 14, e0217293. [Google Scholar] [CrossRef]
  84. Al-masni, M.A.; Kim, D.H.; Kim, T.S. Multiple skin lesions diagnostics via integrated deep convolutional networks for segmentation and classification. Comput. Methods Programs Biomed. 2020, 190, 105351. [Google Scholar] [CrossRef]
  85. Xie, Y.; Zhang, J.; Xia, Y.; Shen, C. A Mutual Bootstrapping Model for Automated Skin Lesion Segmentation and Classification. IEEE Trans. Med. Imaging 2020, 39, 2482–2493. [Google Scholar] [CrossRef]
  86. Panayides, A.S.; Amini, A.; Filipovic, N.D.; Sharma, A.; Tsaftaris, S.A.; Young, A.; Foran, D.; Do, N.; Golemati, S.; Kurc, T.; et al. AI in Medical Imaging Informatics: Current Challenges and Future Directions. IEEE J. Biomed. Health Inform. 2020, 24, 1837–1857. [Google Scholar] [CrossRef]
  87. Wu, H.; Chen, S.; Chen, G.; Wang, W.; Lei, B.; Wen, Z. FAT-Net: Feature adaptive transformers for automated skin lesion segmentation. Med. Image Anal. 2022, 76, 102327. [Google Scholar] [CrossRef]
  88. Goyal, M.; Oakley, A.; Bansal, P.; Dancey, D.; Yap, M.H. Skin Lesion Segmentation in Dermoscopic Images with Ensemble Deep Learning Methods. IEEE Access 2020, 8, 4171–4181. [Google Scholar] [CrossRef]
  89. Moturi, D.; Surapaneni, R.K.; Avanigadda, V.S.G. Developing an efficient method for melanoma detection using CNN techniques. J. Egypt. Natl. Cancer Inst. 2024, 36, 6. [Google Scholar] [CrossRef]
  90. Kaur, R.; GholamHosseini, H. Deep Learning Model with Atrous Convolutions for Improving Skin Cancer Classification. In Image and Video Technology; Series Title: Lecture Notes in Computer Science; Yan, W.Q., Nguyen, M., Nand, P., Li, X., Eds.; Springer Nature: Singapore, 2024; Volume 14403, pp. 422–435. [Google Scholar] [CrossRef]
  91. Ju, X.; Lin, C.; Lee, S.; Wei, S. Melanoma classification using generative adversarial network and proximal policy optimization. Photochem. Photobiol. 2024. [Google Scholar] [CrossRef]
  92. Nagadevi, D.; Suman, K.; Lakshmi, P.S. An enhanced skin lesion detection and classification model using hybrid convolution-based ensemble learning model. Res. Biomed. Eng. 2024, 40, 347–372. [Google Scholar] [CrossRef]
  93. Yuan, W.; Du, Z.; Han, S. Semi-supervised skin cancer diagnosis based on self-feedback threshold focal learning. Discov. Oncol. 2024, 15, 180. [Google Scholar] [CrossRef]
  94. Wang, W.; Cao, C.; Wu, S.; Liu, X.; Su, H.; Tian, D. SSCD-Net: Semi-supervised Skin Cancer Diagnostical Network Combined with Curriculum Learning, Disease Relation and Clinical Information. In Proceedings of the 2024 International Joint Conference on Neural Networks (IJCNN), Yokohama, Japan, 30 June–5 July 2024; pp. 1–8. [Google Scholar] [CrossRef]
  95. Munjal, G.; Bhardwaj, P.; Bhargava, V.; Singh, S.; Nagpal, N. SkinSage XAI: An explainable deep learning solution for skin lesion diagnosis. Health Care Sci. 2024, 3, 438–455. [Google Scholar] [CrossRef]
  96. Vishal; Mehta, S.; Singh, A. Multi-Modal Skin Cancer Diagnosis Using CNN and SVM on Dermoscopic and Clinical Images. In Proceedings of the 2024 3rd International Conference for Advancement in Technology (ICONAT), Goa, India, 6–8 September 2024; pp. 1–5. [Google Scholar] [CrossRef]
  97. Mehta, S.; Kaur, S. Automated Detection of Skin Lesions Using CNN-SVM: A Comparative Study. In Proceedings of the 2024 15th International Conference on Computing Communication and Networking Technologies (ICCCNT), Kamand, India, 24–28 June 2024; pp. 1–7. [Google Scholar] [CrossRef]
  98. Faghihi, A.; Fathollahi, M.; Rajabi, R. Diagnosis of skin cancer using VGG16 and VGG19 based transfer learning models. Multimed. Tools Appl. 2023, 83, 57495–57510. [Google Scholar] [CrossRef]
  99. Ingle, Y.S. Deep Learning for Skin Cancer Classification: A Comparative Study of CNN and Vgg16 on HAM10000 Dataset. Commun. Appl. Nonlinear Anal. 2024, 31, 490–499. [Google Scholar] [CrossRef]
  100. Vidhyalakshmi, A.M.; Kanchana, M. Skin cancer classification using improved transfer learning model-based random forest classifier and golden search optimization. Int. J. Imaging Syst. Technol. 2024, 34, e22971. [Google Scholar] [CrossRef]
  101. Bello, A.; Ng, S.C.; Leung, M.F. Skin Cancer Classification Using Fine-Tuned Transfer Learning of DENSENET-121. Appl. Sci. 2024, 14, 7707. [Google Scholar] [CrossRef]
  102. Mishra, A.K.; Diwan, T.D.; Gupta, I.K.; Agrawal, S. Crow search algorithm with deep transfer learning driven skin lesion detection on dermoscopic images. Intell. Decis. Technol. 2024, 18, 417–426. [Google Scholar] [CrossRef]
Figure 1. The flow of exclusion criteria.
Figure 2. The number of publications over the period 2019–2024.
Figure 3. Most cited countries.
Figure 4. Overlay visualisation based on the authors' keywords. The minimum number of occurrences is five; 99 of the 2021 keywords meet the threshold.
Figure 5. Thematic map generated by Bibliometrix.
Table 1. The number of documents based on document type.

| No | Document Type | #Docs |
|----|------------------|-------|
| 1 | Article | 1098 |
| 2 | Conference paper | 556 |
| 3 | Review | 41 |
| 4 | Letter | 1 |
| 5 | Note | 1 |
Table 2. Fifteen most productive countries.

| No | Country | #Docs |
|----|----------------|-------|
| 1 | India | 316 |
| 2 | China | 275 |
| 3 | United States | 155 |
| 4 | Saudi Arabia | 87 |
| 5 | Pakistan | 84 |
| 6 | United Kingdom | 66 |
| 7 | South Korea | 49 |
| 8 | Egypt | 45 |
| 9 | Germany | 45 |
| 10 | Canada | 41 |
| 11 | Turkey | 40 |
| 12 | Australia | 39 |
| 13 | Bangladesh | 38 |
| 14 | Spain | 34 |
| 15 | Italy | 32 |
Table 3. The top 10 most productive authors.

| No | Author | Num. of Docs. | Documents | Country |
|----|--------------|----|------------------------------------------------------|-------------|
| 1 | Khan, M.A. | 20 | [28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45] | Pakistan |
| 2 | Akram, T. | 12 | [28,30,39,40,41,42,43,46,47,48,49,50] | Pakistan |
| 3 | Sharif, M. | 9 | [38,39,40,41,42,43,44,47,51] | Pakistan |
| 4 | Kaur, R. | 9 | [52,53,54,55,56,57,58,59,60] | New Zealand |
| 5 | Brinker, T.J. | 9 | [24,61,62,63,64,65,66,67] | Germany |
| 6 | Utikal, J.S. | 7 | [24,62,64,65,66,67] | Germany |
| 7 | Tariq, U. | 7 | [30,33,35,37,44,46,68] | Saudi Arabia |
| 8 | Kadry, S. | 7 | [28,33,47,51,69,70,71] | Norway |
| 9 | Jiang, Y. | 7 | [72,73,74,75,76,77,78] | China |
| 10 | Hekler, A. | 7 | [24,63,64,65,66,67,79] | Germany |
Table 4. The top 15 most productive journals related to skin cancer detection using deep learning.

| No | Source | #Docs |
|----|-----------------------------------------------------------------|-------|
| 1 | IEEE Access (IEEE) | 40 |
| 2 | Computers in Biology and Medicine (Elsevier) | 35 |
| 3 | Diagnostics (MDPI) | 34 |
| 4 | Multimedia Tools and Applications (Springer) | 29 |
| 5 | Biomedical Signal Processing and Control (Elsevier) | 25 |
| 6 | Computer Methods and Programs in Biomedicine (Elsevier) | 23 |
| 7 | Sensors (MDPI) | 16 |
| 8 | Cancers (MDPI) | 14 |
| 9 | International Journal of Imaging Systems and Technology (John Wiley and Sons Inc.) | 14 |
| 10 | Applied Sciences (MDPI) | 13 |
| 11 | Medical Image Analysis (Elsevier) | 13 |
| 12 | Expert Systems with Applications (Elsevier) | 12 |
| 13 | Computers, Materials and Continua (Tech Science Press) | 12 |
| 14 | IEEE Journal of Biomedical and Health Informatics (IEEE) | 12 |
| 15 | Frontiers in Medicine (Frontiers Media) | 9 |
Table 5. The top five most productive organizations related to skin cancer detection using deep learning.

| No | Organization | #Docs |
|----|-------------------------------------------------------------------------------|-------|
| 1 | Skin Cancer Unit, German Cancer Research Center (DKFZ), Heidelberg, Germany | 7 |
| 2 | Dept. of Dermatology, Heidelberg University, Mannheim, Germany | 6 |
| 3 | Dept. of Dermatology, University Hospital Essen, Essen, Germany | 6 |
| 4 | Chitkara University Institute of Engineering and Technology, Chitkara University, Punjab, India | 5 |
| 5 | Dept. of Dermatology, University Hospital Regensburg, Regensburg, Germany | 5 |
Table 6. The top 10 most cited publications.

| No | Title | Year | Source | #Cit |
|----|------------------------------------------------------------------------------|------|----------------------------------------|------|
| 1 | Attention Residual Learning for Skin Lesion Classification [80] | 2019 | IEEE Transactions on Medical Imaging | 370 |
| 2 | CA-Net: Comprehensive Attention Convolutional Neural Networks for Explainable Medical Image Segmentation [81] | 2021 | IEEE Transactions on Medical Imaging | 343 |
| 3 | Deep Learning Outperformed 136 of 157 Dermatologists in a Head-to-Head Dermoscopic Melanoma Image Classification Task [67] | 2019 | European Journal of Cancer | 287 |
| 4 | Transformation-Consistent Self-Ensembling Model for Semisupervised Medical Image Segmentation [82] | 2021 | IEEE Transactions on Neural Networks and Learning Systems | 247 |
| 5 | Classification of Skin Lesions Using Transfer Learning and Augmentation with Alex-net [83] | 2019 | PLoS ONE | 223 |
| 6 | Multiple Skin Lesions Diagnostics via Integrated Deep Convolutional Networks for Segmentation and Classification [84] | 2020 | Computer Methods and Programs in Biomedicine | 221 |
| 7 | A Mutual Bootstrapping Model for Automated Skin Lesion Segmentation and Classification [85] | 2020 | IEEE Transactions on Medical Imaging | 218 |
| 8 | AI in Medical Imaging Informatics: Current Challenges and Future Directions [86] | 2020 | IEEE Journal of Biomedical and Health Informatics | 205 |
| 9 | FAT-Net: Feature Adaptive Transformers for Automated Skin Lesion Segmentation [87] | 2022 | Medical Image Analysis | 198 |
| 10 | Skin Lesion Segmentation in Dermoscopic Images with Ensemble Deep Learning Methods [88] | 2020 | IEEE Access | 189 |