Search Results (531)

Search Parameters:
Keywords = computer aided detection and diagnosis

33 pages, 6967 KB  
Article
LCxNet: An Explainable CNN Framework for Lung Cancer Detection in CT Images Using Multi-Optimizer and Visual Interpretability
by Noor S. Jozi and Ghaida A. Al-Suhail
Appl. Syst. Innov. 2025, 8(5), 153; https://doi.org/10.3390/asi8050153 - 15 Oct 2025
Abstract
Lung cancer, the leading cause of cancer-related mortality worldwide, necessitates better methods for earlier and more accurate detection. To this end, this study introduces LCxNet, a novel, custom-designed convolutional neural network (CNN) framework for computer-aided diagnosis (CAD) of lung cancer. The IQ-OTH/NCCD lung CT dataset, which includes three different classes—benign, malignant, and normal—is used to train and assess the model. The framework is implemented using five optimizers, SGD, RMSProp, Adam, AdamW, and NAdam, to compare the learning behavior and performance stability. To bridge the gap between model complexity and clinical utility, we integrated Explainable AI (XAI) methods, specifically Grad-CAM for decision visualization and t-SNE for feature space analysis. With accuracy, specificity, and AUC values of 99.39%, 99.45%, and 100%, respectively, the results demonstrate that the LCxNet model outperformed the state-of-the-art models in terms of diagnostic performance. In conclusion, this study emphasizes how crucial XAI is to creating trustworthy and efficient clinical tools for the early detection of lung cancer. Full article
(This article belongs to the Special Issue Advancements in Deep Learning and Its Applications)
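The study above trains one architecture under five optimizers and compares their learning behavior and stability. A minimal sketch of such a comparison loop is given below, assuming a placeholder CNN and data loaders rather than the authors' LCxNet implementation; learning rates are illustrative.

```python
# Hedged sketch: comparing the five optimizers named in the abstract on one CNN.
# SimpleCNN and the data loaders are illustrative placeholders, not LCxNet.
import torch
import torch.nn as nn

class SimpleCNN(nn.Module):
    def __init__(self, num_classes=3):  # benign / malignant / normal
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 56 * 56, num_classes)  # assumes 224x224 inputs

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

def evaluate(model, loader, device):
    model.eval()
    correct = total = 0
    with torch.no_grad():
        for x, y in loader:
            pred = model(x.to(device)).argmax(dim=1)
            correct += (pred == y.to(device)).sum().item()
            total += y.numel()
    return correct / total

def compare_optimizers(train_loader, val_loader, device="cpu", epochs=1):
    optim_factories = {
        "SGD": lambda p: torch.optim.SGD(p, lr=1e-2, momentum=0.9),
        "RMSProp": lambda p: torch.optim.RMSprop(p, lr=1e-3),
        "Adam": lambda p: torch.optim.Adam(p, lr=1e-3),
        "AdamW": lambda p: torch.optim.AdamW(p, lr=1e-3, weight_decay=1e-4),
        "NAdam": lambda p: torch.optim.NAdam(p, lr=1e-3),
    }
    loss_fn = nn.CrossEntropyLoss()
    results = {}
    for name, make_opt in optim_factories.items():
        model = SimpleCNN().to(device)        # fresh weights for each optimizer
        opt = make_opt(model.parameters())
        for _ in range(epochs):
            model.train()
            for x, y in train_loader:
                opt.zero_grad()
                loss = loss_fn(model(x.to(device)), y.to(device))
                loss.backward()
                opt.step()
        results[name] = evaluate(model, val_loader, device)
    return results
```

Re-initializing the model inside the loop ensures each optimizer is evaluated from fresh weights rather than continuing a previous run.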

15 pages, 2364 KB  
Article
Optimized Lung Nodule Classification Using CLAHE-Enhanced CT Imaging and Swin Transformer-Based Deep Feature Extraction
by Dorsaf Hrizi, Khaoula Tbarki and Sadok Elasmi
J. Imaging 2025, 11(10), 346; https://doi.org/10.3390/jimaging11100346 - 4 Oct 2025
Viewed by 199
Abstract
Lung cancer remains one of the most lethal cancers globally. Its early detection is vital to improving survival rates. In this work, we propose a hybrid computer-aided diagnosis (CAD) pipeline for lung cancer classification using Computed Tomography (CT) scan images. The proposed CAD pipeline integrates ten image preprocessing techniques, ten pretrained deep learning models for feature extraction (including convolutional neural networks and transformer-based architectures), and four classical machine learning classifiers. Unlike traditional end-to-end deep learning systems, our approach decouples feature extraction from classification, enhancing interpretability and reducing the risk of overfitting. A total of 400 model configurations were evaluated to identify the optimal combination. The proposed approach was evaluated on the publicly available Lung Image Database Consortium and Image Database Resource Initiative dataset, which comprises 1018 thoracic CT scans annotated by four thoracic radiologists. For the classification task, the dataset included a total of 6568 images labeled as malignant and 4849 images labeled as benign. Experimental results show that the best-performing pipeline, combining Contrast Limited Adaptive Histogram Equalization, Swin Transformer feature extraction, and eXtreme Gradient Boosting, achieved an accuracy of 95.8%. Full article
(This article belongs to the Special Issue Advancements in Imaging Techniques for Detection of Cancer)
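The decoupled pipeline summarized above (CLAHE preprocessing, frozen Swin Transformer features, XGBoost classification) can be sketched roughly as follows; the timm model name, resizing, omitted ImageNet normalization, and hyperparameters are simplifying assumptions, not the authors' exact configuration.

```python
# Hedged sketch of the decoupled pipeline: CLAHE preprocessing, frozen Swin
# Transformer features, and an XGBoost classifier. Settings are illustrative.
import cv2
import numpy as np
import timm
import torch
from xgboost import XGBClassifier

clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
swin = timm.create_model("swin_tiny_patch4_window7_224", pretrained=True, num_classes=0)
swin.eval()

def preprocess(gray_uint8):
    """CLAHE on a grayscale CT slice, then resize and replicate to 3 channels."""
    eq = clahe.apply(gray_uint8)
    eq = cv2.resize(eq, (224, 224)).astype(np.float32) / 255.0
    return np.stack([eq, eq, eq], axis=0)  # (3, 224, 224); ImageNet norm omitted for brevity

def extract_features(images):
    """images: list of 2D uint8 arrays -> (N, feature_dim) array of Swin features."""
    batch = torch.from_numpy(np.stack([preprocess(im) for im in images]))
    with torch.no_grad():
        feats = swin(batch)                # num_classes=0 -> pooled features
    return feats.numpy()

def train_classifier(train_images, train_labels):
    X = extract_features(train_images)
    clf = XGBClassifier(n_estimators=300, max_depth=4, learning_rate=0.1,
                        eval_metric="logloss")
    clf.fit(X, train_labels)               # labels: 0 = benign, 1 = malignant
    return clf
```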

31 pages, 2508 KB  
Systematic Review
From Mammogram Analysis to Clinical Integration with Deep Learning in Breast Cancer Diagnosis
by Beibit Abdikenov, Tomiris Zhaksylyk, Aruzhan Imasheva and Dimash Rakishev
Informatics 2025, 12(4), 106; https://doi.org/10.3390/informatics12040106 - 2 Oct 2025
Viewed by 302
Abstract
Breast cancer is one of the main causes of cancer-related death for women worldwide, and enhancing patient outcomes still depends on early detection. The most common imaging technique for diagnosing and screening for breast cancer is mammography, which has a high potential for early lesion detection. With an emphasis on the incorporation of deep learning (DL) techniques, this review examines the changing role of mammography in early breast cancer detection. We examine recent advancements in DL-based approaches for mammogram analysis, including tasks such as classification, segmentation, and lesion detection. Additionally, we assess the limitations of traditional mammographic methods and highlight how DL can assist radiologists by enhancing diagnostic accuracy, reducing false positives and negatives, and supporting clinical decision-making. We also discuss issues like interpretability, generalization across populations, and data scarcity. This review summarizes the available data to highlight the revolutionary potential of DL-enhanced mammography in breast cancer screening and to suggest future research avenues for more reliable, transparent, and clinically useful AI-driven solutions. Full article
(This article belongs to the Section Medical and Clinical Informatics)

21 pages, 2189 KB  
Article
Hybrid CNN-Swin Transformer Model to Advance the Diagnosis of Maxillary Sinus Abnormalities on CT Images Using Explainable AI
by Mohammad Alhumaid and Ayman G. Fayoumi
Computers 2025, 14(10), 419; https://doi.org/10.3390/computers14100419 - 2 Oct 2025
Viewed by 245
Abstract
Accurate diagnosis of sinusitis is essential due to its widespread prevalence and its considerable impact on patient quality of life. While multiple imaging techniques are available for detecting maxillary sinus abnormalities, computed tomography (CT) remains the preferred modality because of its high sensitivity and spatial resolution. Although recent advances in deep learning have led to the development of automated methods for sinusitis classification, many existing models perform poorly in the presence of complex pathological features and offer limited interpretability, which hinders their integration into clinical workflows. In this study, we propose a hybrid deep learning framework that combines EfficientNetB0, a convolutional neural network, with the Swin Transformer, a vision transformer, to improve feature representation. An attention-based fusion module is used to integrate both local and global information, thereby enhancing diagnostic accuracy. To improve transparency and support clinical adoption, the model incorporates explainable artificial intelligence (XAI) techniques using Gradient-weighted Class Activation Mapping (Grad-CAM). This allows for visualization of the regions influencing the model’s predictions, helping radiologists assess the clinical relevance of the results. We evaluate the proposed method on a curated maxillary sinus CT dataset covering four diagnostic categories: Normal, Opacified, Polyposis, and Retention Cysts. The model achieves a classification accuracy of 95.83%, with precision, recall, and F1 score all at 95%. Grad-CAM visualizations indicate that the model consistently focuses on clinically significant regions of the sinus anatomy, supporting its potential utility as a reliable diagnostic aid in medical practice. Full article
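An attention-weighted fusion of local convolutional features and global transformer features, in the spirit of the hybrid model described above, might look like the sketch below; the sigmoid gating module and the timm backbones are illustrative assumptions rather than the paper's exact architecture.

```python
# Hedged sketch of fusing CNN and Swin Transformer features with a learned
# attention gate; not a reproduction of the paper's fusion module.
import timm
import torch
import torch.nn as nn

class HybridSinusClassifier(nn.Module):
    def __init__(self, num_classes=4):  # Normal, Opacified, Polyposis, Retention Cysts
        super().__init__()
        self.cnn = timm.create_model("efficientnet_b0", pretrained=True, num_classes=0)
        self.vit = timm.create_model("swin_tiny_patch4_window7_224",
                                     pretrained=True, num_classes=0)
        dim = self.cnn.num_features + self.vit.num_features
        # Attention gate: learns per-channel weights for the concatenated features.
        self.attention = nn.Sequential(nn.Linear(dim, dim), nn.Sigmoid())
        self.head = nn.Linear(dim, num_classes)

    def forward(self, x):               # x: (B, 3, 224, 224) CT slices
        local_feats = self.cnn(x)       # (B, 1280) pooled convolutional features
        global_feats = self.vit(x)      # (B, 768) pooled transformer features
        fused = torch.cat([local_feats, global_feats], dim=1)
        return self.head(fused * self.attention(fused))
```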

17 pages, 1581 KB  
Article
Curriculum Learning-Driven YOLO for Tumor Detection in Ultrasound Using Hierarchically Zoomed-In Images
by Yu Hyun Park, Hongseok Choi, Ki-Baek Lee and Hyungsuk Kim
Appl. Sci. 2025, 15(19), 10337; https://doi.org/10.3390/app151910337 - 23 Sep 2025
Viewed by 382
Abstract
Ultrasound imaging is widely employed for breast cancer detection; however, its diagnostic reliability is often constrained by operator dependence and subjective interpretation. Deep learning-based computer-aided diagnosis (CADx) systems offer potential to improve diagnostic consistency, yet their effectiveness is frequently limited by the scarcity of annotated medical images. This work introduces a training framework to enhance the performance and training stability of a YOLO-based object detection model for breast tumor localization, particularly in data-constrained scenarios. The proposed method integrates a detail-to-context curriculum learning scheme using hierarchically zoomed-in B-mode images, with progression difficulty determined by the tumor-to-background area ratio. A preprocessing step resizes all images to 640 × 640 pixels while preserving aspect ratio to improve intra-dataset consistency. Our evaluation indicates that aspect ratio-preserving resizing is associated with a 2.3% increase in recall and a reduction in the standard deviation of stability metrics by more than 20%. Moreover, the curriculum learning approach reached 97.2% of the final model performance using only 35% of the training data required by conventional methods, while achieving a more balanced precision–recall profile. These findings suggest that the proposed framework holds potential as an effective strategy for developing more robust and efficient tumor detection models, particularly for deployment in resource-limited clinical environments. Full article
(This article belongs to the Special Issue Current Updates on Ultrasound for Biomedical Applications)
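A compact sketch of the two preprocessing ideas mentioned above, aspect ratio-preserving resizing to 640 × 640 (letterboxing) and ordering samples for the detail-to-context curriculum, is given below; the padding value, interpolation, and field names for the tumor-to-background ratio are assumptions.

```python
# Hedged sketch: letterbox resize to 640x640 and curriculum ordering by
# tumor-to-background area ratio. Padding/interpolation choices are assumptions.
import cv2
import numpy as np

def letterbox_resize(image, target=640, pad_value=0):
    h, w = image.shape[:2]
    scale = target / max(h, w)                      # preserve aspect ratio
    new_h, new_w = int(round(h * scale)), int(round(w * scale))
    resized = cv2.resize(image, (new_w, new_h), interpolation=cv2.INTER_LINEAR)
    canvas = np.full((target, target) + image.shape[2:], pad_value, dtype=image.dtype)
    top, left = (target - new_h) // 2, (target - new_w) // 2
    canvas[top:top + new_h, left:left + new_w] = resized
    return canvas, scale, (left, top)

def curriculum_order(samples):
    """Sort samples from zoomed-in views (large tumor-to-background ratio) to
    full-view images, following the detail-to-context schedule. `samples` is a
    list of dicts with hypothetical 'tumor_area' and 'image_area' keys."""
    return sorted(samples, key=lambda s: s["tumor_area"] / s["image_area"], reverse=True)
```

The returned scale and offsets are needed to map predicted boxes back to the original image coordinates.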

20 pages, 4700 KB  
Article
Computer-Aided Diagnosis of Equine Pharyngeal Lymphoid Hyperplasia Using the Object Detection-Based Processing Technique of Digital Endoscopic Images
by Natalia Kozłowska, Marta Borowska, Tomasz Jasiński, Małgorzata Wierzbicka and Małgorzata Domino
Animals 2025, 15(18), 2758; https://doi.org/10.3390/ani15182758 - 22 Sep 2025
Viewed by 395
Abstract
In human medicine, computer-aided diagnosis (CAD) is increasingly employed for screening, identifying, and monitoring early endoscopic signs of various diseases. However, its potential—despite proven benefits in human healthcare—remains largely underexplored in equine veterinary medicine. This study aimed to quantify endoscopic signs of pharyngeal lymphoid hyperplasia (PLH) as digital data and to assess their effectiveness in CAD of PLH in comparison and in combination with clinical data reflecting respiratory tract disease. Endoscopic images of the pharynx were collected from 70 horses clinically assessed as either healthy or affected by PLH. Digital data were extracted using an object detection-based processing technique and first-order statistics (FOS). The data were transformed using linear discriminant analysis (LDA) and classified with the random forest (RF) algorithm. Classification metrics were then calculated. When considering digital and clinical data, high classification performance was achieved (0.76 accuracy, 0.83 precision, 0.78 recall, and 0.76 F1 score), with the highest importance assigned to selected FOS features: Number of Objects and Neighbors, and Tracheal Auscultation. The proposed protocol of digitizing standard respiratory tract diagnostic methods provides effective discrimination of PLH grades, supporting the clinical value of CAD in veterinary medicine and paving the way for further research in digital medical diagnostics. Full article
(This article belongs to the Special Issue Animal–Computer Interaction: New Horizons in Animal Welfare)
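The classification stage outlined above (first-order statistics, an LDA transform, and a random forest) maps naturally onto a scikit-learn pipeline; the particular FOS features and hyperparameters below are illustrative assumptions.

```python
# Hedged sketch of the LDA + random forest classification stage on digital
# (and optionally clinical) features. Feature choices are illustrative.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

def first_order_stats(intensities, n_objects, n_neighbors):
    """A small subset of per-image digital descriptors (FOS-style features)."""
    return np.array([
        intensities.mean(), intensities.std(),
        np.percentile(intensities, 10), np.percentile(intensities, 90),
        n_objects, n_neighbors,
    ])

def build_classifier():
    # LDA transform followed by a random forest, mirroring the pairing in the abstract.
    return make_pipeline(LinearDiscriminantAnalysis(),
                         RandomForestClassifier(n_estimators=200, random_state=0))

def evaluate(X, y):
    """X: (n_images, n_features) digital (+ clinical) data, y: PLH grade labels."""
    return cross_val_score(build_classifier(), X, y, cv=5, scoring="f1_macro")
```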

13 pages, 3028 KB  
Article
Structural Brain Abnormalities, Diagnostic Approaches, and Treatment Strategies in Vertigo: A Case-Control Study
by Klaudia Széphelyi, Szilvia Kóra, Gergely Orsi and József Tollár
Neurol. Int. 2025, 17(9), 146; https://doi.org/10.3390/neurolint17090146 - 10 Sep 2025
Viewed by 653
Abstract
Background/Objectives: Dizziness is a frequent medical complaint with neurological, otolaryngological, and psychological origins. Imaging studies such as CT (Computer Tomography), cervical X-rays, and ultrasound aid diagnosis, while MRI (Magnetic Resonance Imaging) is crucial for detecting brain abnormalities. Our purpose is to identify structural brain changes associated with vertigo, assess pre-MRI diagnostic approaches, and evaluate treatment strategies. Methods: A case-control study of 232 vertigo patients and 232 controls analyzed MRI findings, pre-MRI examinations, symptoms, and treatments. Statistical comparisons were performed using chi-square and t-tests (p < 0.05). Results: White matter lesions, lacunar infarcts, Circle of Willis variations, and sinusitis were significantly more frequent in vertigo patients (p < 0.05). Pre-MRI diagnostics frequently identified atherosclerosis (ultrasound) and spondylosis (X-ray). Common symptoms included headache, imbalance, and visual disturbances. The most frequent post-MRI diagnosis was Benign Paroxysmal Positional Vertigo (BPPV). Treatments included lifestyle modifications, physical therapy (e.g., Epley maneuver), and pharmacological therapies such as betahistine. Conclusions: MRI revealed structural brain changes linked to vertigo. Pre-MRI assessments are essential for ruling out vascular and musculoskeletal causes. A multidisciplinary treatment approach is recommended. Trial Registration: This study was registered in ClinicalTrials.gov with the trial registration number NCT06848712 on 22 February 2025. Full article
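The group comparisons described in the Methods (chi-square tests for categorical MRI findings and t-tests for continuous measures, with p < 0.05) correspond to standard SciPy calls, sketched below with placeholder data rather than the study's actual counts.

```python
# Hedged sketch of the statistical comparisons; all numbers are placeholders.
import numpy as np
from scipy import stats

# 2x2 contingency table: rows = vertigo / control, columns = finding present / absent.
table = np.array([[80, 152],
                  [45, 187]])
chi2, p_chi, dof, expected = stats.chi2_contingency(table)

# Independent-samples t-test on a continuous variable (e.g., age), placeholder arrays.
vertigo_ages = np.random.default_rng(0).normal(58, 12, 232)
control_ages = np.random.default_rng(1).normal(56, 12, 232)
t_stat, p_t = stats.ttest_ind(vertigo_ages, control_ages)

print(f"chi-square p={p_chi:.4f}, t-test p={p_t:.4f} (significant if < 0.05)")
```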

14 pages, 954 KB  
Article
A YOLO Ensemble Framework for Detection of Barrett’s Esophagus Lesions in Endoscopic Images
by Wan-Chih Lin, Chi-Chih Wang, Ming-Chang Tsai, Chao-Yen Huang, Chun-Che Lin and Ming-Hseng Tseng
Diagnostics 2025, 15(18), 2290; https://doi.org/10.3390/diagnostics15182290 - 10 Sep 2025
Viewed by 478
Abstract
Background and Objectives: Barrett’s Esophagus (BE) is a precursor to esophageal adenocarcinoma, and early detection is essential to reduce cancer risk. This study aims to develop a YOLO-based ensemble framework to improve the automated detection of BE-associated mucosal lesions on endoscopic images. Methods: A dataset of 3620 annotated endoscopic images was collected from 132 patients. Five YOLO variants (YOLOv5, YOLOv9, YOLOv10, YOLOv11, and YOLOv12) were selected based on their architectural diversity and detection capabilities. Each model was trained individually, and their outputs were integrated using a Non-Maximum Suppression (NMS)-based ensemble strategy. Multiple ensemble configurations were evaluated to assess the impact of fusion depth on detection performance. Results: The ensemble models consistently outperformed individual YOLO variants in recall, the primary evaluation metric. The full five-model ensemble achieved the highest recall (0.974), significantly reducing missed lesion detections. Statistical analysis using McNemar’s test and bootstrap confidence intervals confirmed the ensemble’s superiority in most comparisons. Conclusions: The proposed YOLO ensemble framework demonstrates enhanced sensitivity and robustness in detecting BE lesions. Its integration into clinical workflows can improve early diagnosis and reduce diagnostic workload, offering a promising tool for computer-aided screening in gastroenterology. Full article
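The NMS-based ensemble strategy described above amounts to pooling the boxes predicted by all YOLO variants and running a single IoU-based suppression pass; a minimal sketch follows, with box format and thresholds as assumptions.

```python
# Hedged sketch of NMS-based fusion of detections from several YOLO variants.
import torch
from torchvision.ops import nms

def ensemble_detections(per_model_outputs, iou_threshold=0.5, score_threshold=0.25):
    """per_model_outputs: list of (boxes, scores) per model,
    boxes as (N, 4) tensors in xyxy format, scores as (N,) tensors."""
    boxes = torch.cat([b for b, _ in per_model_outputs], dim=0)
    scores = torch.cat([s for _, s in per_model_outputs], dim=0)
    keep = scores >= score_threshold          # drop low-confidence boxes first
    boxes, scores = boxes[keep], scores[keep]
    kept = nms(boxes, scores, iou_threshold)  # suppress overlapping duplicates
    return boxes[kept], scores[kept]
```

Keeping only the highest-scoring box among overlapping detections is what lets the ensemble raise recall without flooding the output with duplicates.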

30 pages, 3045 KB  
Article
A Retrospective Study of CBCT-Based Detection of Endodontic Failures and Periapical Lesions in a Romanian Cohort
by Oana Andreea Diaconu, Lelia Mihaela Gheorghiță, Anca Gabriela Gheorghe, Mihaela Jana Țuculină, Maria Cristina Munteanu, Cătălina Alexandra Iacov, Virginia Maria Rădulescu, Mihaela Ionescu, Adina Andreea Mirea and Carina Alexandra Bănică
J. Clin. Med. 2025, 14(18), 6364; https://doi.org/10.3390/jcm14186364 - 9 Sep 2025
Viewed by 741
Abstract
Background and Objectives: Cone Beam Computed Tomography (CBCT) offers high-resolution, three-dimensional imaging for detecting apical periodontitis (AP) and evaluating the technical quality of endodontic treatments. This study aimed to investigate the diagnostic value of CBCT in identifying endodontic failures and periapical lesions and to explore the clinical patterns associated with these findings in a Romanian patient cohort. Materials and Methods: A retrospective study was conducted on 258 patients (with 876 root canal-treated teeth), all of whom underwent CBCT imaging between October 2024 and April 2025 at a private radiology center in Craiova, Romania. Of the 876 treated teeth, 409 were diagnosed with apical periodontitis. Patients presented for endodontic treatment at the Endodontics Clinic of the Faculty of Dentistry, University of Medicine and Pharmacy of Craiova. With the patients’ consent, 3D radiological examinations were recommended for better case planning and accurate diagnosis. The periapical status and technical parameters of root canal fillings were assessed using the CBCT-PAI index and evaluated by three calibrated observers. Associations with demographic, clinical, and behavioral factors were statistically analyzed. Results: Apical periodontitis was detected in 46.69% of the teeth examined during the study period, with CBCT-PAI score 3 being the most prevalent. Poor root canal obturation quality (underfilling, overfilling, and voids) was significantly associated with periapical pathology. Chronic lesions were more common than acute ones, especially in older patients. The number of teeth with endodontic treatments and no AP, as well as the number of teeth with AP, was significantly lower for patients with acute AP, indicating the more severe impact of chronic AP on the patients’ oral health status. CBCT allowed the precise localization of missed canals and assessment of lesion severity. Conclusions: Within the limits of a retrospective, referral-based cohort, CBCT aided the detection of periapical pathology in root canal-treated teeth (46.69%). These findings do not represent population-based rates but support the selective use of CBCT, in line with current ESE guidance, for complex cases or when conventional imaging is inconclusive. Full article
(This article belongs to the Special Issue Oral Health in Children: Clinical Management)

16 pages, 715 KB  
Systematic Review
Artificial Intelligence in Computed Tomography Radiology: A Systematic Review on Risk Reduction Potential
by Sandra Coelho, Aléxia Fernandes, Marco Freitas and Ricardo J. Fernandes
Appl. Sci. 2025, 15(17), 9659; https://doi.org/10.3390/app15179659 - 2 Sep 2025
Viewed by 1074
Abstract
Artificial intelligence (AI) has emerged as a transformative technology in radiology, offering enhanced diagnostic accuracy, improved workflow efficiency and potential risk mitigation. However, its effectiveness in reducing clinical and occupational risks in radiology departments remains underexplored. This systematic review aimed to evaluate the current literature on AI applications in computed tomography (CT) radiology and their contributions to risk reduction. Following the PRISMA 2020 guidelines, a systematic search was conducted in PubMed, Scopus and Web of Science for studies published between 2021 and 2025 (the databases were last accessed on 15 April 2025). Thirty-four studies were included based on their relevance to AI in radiology and reported outcomes. Extracted data included study type, geographic region, AI application and type, role in clinical workflow, use cases, sensitivity and specificity. The majority of studies addressed triage (61.8%) and computer-aided detection (32.4%). AI was most frequently applied in chest imaging (47.1%) and brain haemorrhage detection (29.4%). The mean reported sensitivity was 89.0% and specificity was 93.3%. AI tools demonstrated advantages in image interpretation, automated patient positioning, prioritisation and measurement standardisation. Reported benefits included reduced cognitive workload, improved triage efficiency, decreased manual annotation and shorter exposure times. AI systems in CT radiology show strong potential to enhance diagnostic consistency and reduce occupational risks. The evidence supports the integration of AI-based tools to assist diagnosis, lower human workload and improve overall safety in radiology departments. Full article

14 pages, 1542 KB  
Article
Comparative Analysis of Diagnostic Performance Between Elastography and AI-Based S-Detect for Thyroid Nodule Detection
by Jee-Yeun Park and Sung-Hee Yang
Diagnostics 2025, 15(17), 2191; https://doi.org/10.3390/diagnostics15172191 - 29 Aug 2025
Viewed by 746
Abstract
Background/Objectives: Elastography is a non-invasive imaging technique that assesses tissue stiffness and elasticity. This study aimed to evaluate the diagnostic performance and clinical utility of elastography and S-detect in distinguishing benign from malignant thyroid nodules. S-detect (RS85) is a deep learning-based computer-aided diagnosis (DL-CAD) software that analyzes grayscale ultrasound 2D images to evaluate the morphological characteristics of thyroid nodules, providing a visual guide to the likelihood of malignancy. Methods: This retrospective study included 159 patients (61 male and 98 female) aged 30–83 years (56.14 ± 11.35) who underwent thyroid ultrasonography between January 2023 and June 2024. All the patients underwent elastography, S-detect analysis, and fine needle aspiration cytology (FNAC). Malignancy status was determined based on the FNAC findings, and the diagnostic performance of the elasticity contrast index (ECI), S-detect, and evaluations by a radiologist was assessed. Based on the FNAC results, 101 patients (63.5%) had benign nodules and 58 patients (36.5%) had malignant nodules. Results: Radiologist interpretation demonstrated the highest diagnostic accuracy (area under the curve 89%), with a sensitivity of 98.28%, specificity of 79.21%, positive predictive value (PPV) of 73.1%, and negative predictive value (NPV) of 98.8%. The elasticity contrast index showed an accuracy of 85%, sensitivity of 87.93%, specificity of 81.19%, PPV of 72.9%, and NPV of 92.1%. S-detect yielded the lowest accuracy at 78%, with a sensitivity of 87.93%, specificity of 68.32%, PPV of 61.4%, and NPV of 90.8%. Conclusions: These findings offer valuable insights into the comparative diagnostic utility of elastography and AI-based S-detect for thyroid nodules in clinical practice. Although the single-center design and sample size potentially limit the generalizability of the results, the controlled environment ensured consistency and minimized confounding variables. Full article
(This article belongs to the Special Issue The Role of AI in Ultrasound)
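The reported sensitivity, specificity, PPV, and NPV follow directly from a confusion matrix computed against the FNAC reference standard. The sketch below uses hypothetical counts chosen so that the S-detect sensitivity and specificity quoted above are approximately reproduced for the 159 nodules.

```python
# Hedged sketch: diagnostic metrics from a confusion matrix. Counts are
# hypothetical (58 malignant, 101 benign), not taken from the study's raw data.
def diagnostic_metrics(tp, fn, fp, tn):
    sensitivity = tp / (tp + fn)      # recall for malignant nodules
    specificity = tn / (tn + fp)
    ppv = tp / (tp + fp)              # positive predictive value
    npv = tn / (tn + fn)              # negative predictive value
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    return dict(sensitivity=sensitivity, specificity=specificity,
                ppv=ppv, npv=npv, accuracy=accuracy)

# Example: 58 malignant (51 detected, 7 missed) and 101 benign (69 correctly cleared).
print(diagnostic_metrics(tp=51, fn=7, fp=32, tn=69))
```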

19 pages, 5315 KB  
Article
Style-Aware and Uncertainty-Guided Approach to Semi-Supervised Domain Generalization in Medical Imaging
by Zineb Tissir, Yunyoung Chang and Sang-Woong Lee
Mathematics 2025, 13(17), 2763; https://doi.org/10.3390/math13172763 - 28 Aug 2025
Viewed by 630
Abstract
Deep learning has significantly advanced medical image analysis by enabling accurate, automated diagnosis across diverse clinical tasks such as lesion classification and disease detection. However, the practical deployment of these systems is still hindered by two major challenges: the limited availability of expert-annotated data and substantial domain shifts caused by variations in imaging devices, acquisition protocols, and patient populations. Although recent semi-supervised domain generalization (SSDG) approaches attempt to address these challenges, they often suffer from two key limitations: (i) reliance on computationally expensive uncertainty modeling techniques such as Monte Carlo dropout, and (ii) inflexible shared-head classifiers that fail to capture domain-specific variability across heterogeneous imaging styles. To overcome these limitations, we propose MultiStyle-SSDG, a unified semi-supervised domain generalization framework designed to improve model generalization in low-label scenarios. Our method introduces a multi-style ensemble pseudo-labeling strategy guided by entropy-based filtering, incorporates prototype-based conformity and semantic alignment to regularize the feature space, and employs a domain-specific multi-head classifier fused through attention-weighted prediction. Additionally, we introduce a dual-level neural-style transfer pipeline that simulates realistic domain shifts while preserving diagnostic semantics. We validated our framework on the ISIC2019 skin lesion classification benchmark using 5% and 10% labeled data. MultiStyle-SSDG consistently outperformed recent state-of-the-art methods such as FixMatch, StyleMatch, and UPLM, achieving statistically significant improvements in classification accuracy under simulated domain shifts including style, background, and corruption. Specifically, our method achieved 78.6% accuracy with 5% labeled data and 80.3% with 10% labeled data on ISIC2019, surpassing FixMatch by 4.9–5.3 percentage points and UPLM by 2.1–2.4 points. Ablation studies further confirmed the individual contributions of each component, and t-SNE visualizations illustrate enhanced intra-class compactness and cross-domain feature consistency. These results demonstrate that our style-aware, modular framework offers a robust and scalable solution for generalizable computer-aided diagnosis in real-world medical imaging settings. Full article
(This article belongs to the Section E1: Mathematics and Computer Science)
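One ingredient of the framework described above, entropy-based filtering of ensemble pseudo-labels, can be sketched as follows; averaging softmax outputs over style-augmented views and using a fixed entropy threshold are simplifying assumptions rather than the paper's exact formulation.

```python
# Hedged sketch of entropy-based pseudo-label filtering over style-augmented views.
import torch
import torch.nn.functional as F

def filter_pseudo_labels(logits_per_view, entropy_threshold=0.5):
    """logits_per_view: (V, B, C) logits from V style-augmented views of a batch.
    Returns pseudo-labels and a boolean mask of samples kept for training."""
    probs = F.softmax(logits_per_view, dim=-1).mean(dim=0)        # ensemble over views
    entropy = -(probs * probs.clamp_min(1e-8).log()).sum(dim=-1)  # per-sample entropy
    pseudo_labels = probs.argmax(dim=-1)
    keep = entropy <= entropy_threshold       # keep only confident (low-entropy) samples
    return pseudo_labels, keep
```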

20 pages, 2807 KB  
Article
Deep Learning-Based Hybrid Scenario for Classification of Periapical Lesions in Cone Beam Computed Tomography
by Fatma Akalin and Yasin Özkan
Symmetry 2025, 17(9), 1392; https://doi.org/10.3390/sym17091392 - 26 Aug 2025
Viewed by 850
Abstract
Artificial intelligence has made revolutionary advances in medical imaging in recent years. Various algorithms and techniques are used in this scientific field to significantly improve the accuracy and speed of medical diagnosis and classification processes. In this direction, approaches have continually been improved to extract meaningful features from dental images and classify them accurately. In particular, high asymmetry in morphological balance plays a critical role in distinguishing pathological patterns from normal anatomy. Because the experience and time required for manual interpretation of lesions confirm the need for a computer-aided system, we propose a scenario for the classification of periapical lesions, supported by a combination of improved image processing techniques and regularization strategies integrated into the VGG16 transfer learning architecture. In this study, conducted on the UFPE public dataset, the performance of the VGG16 transfer learning architecture was improved using 18 different proposed regularization methods. The results indicate training optimized to avoid overfitting while maintaining stability, generalizability, and high accuracy. This optimized model has the potential to serve as a decision support system for diagnosis and treatment processes in various subfields of medicine. Full article
(This article belongs to the Special Issue Symmetry in Computational Intelligence and Applications)
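A VGG16 transfer-learning setup with common regularization strategies (frozen convolutional base, dropout, weight decay) is sketched below in PyTorch; the paper evaluates 18 regularization variants that are not enumerated here, and the head dimensions and class count are assumptions.

```python
# Hedged sketch: VGG16 transfer learning with dropout and weight decay.
import torch
import torch.nn as nn
from torchvision import models

def build_vgg16_classifier(num_classes=2, dropout=0.5, freeze_backbone=True):
    # num_classes=2 (lesion vs. no lesion) is an assumption for illustration.
    model = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
    if freeze_backbone:
        for p in model.features.parameters():
            p.requires_grad = False          # keep the pretrained convolutional filters
    model.classifier = nn.Sequential(        # replace the head for periapical lesions
        nn.Linear(512 * 7 * 7, 256), nn.ReLU(),
        nn.Dropout(dropout),
        nn.Linear(256, num_classes),
    )
    return model

model = build_vgg16_classifier()
# Weight decay (L2 regularization) applied through the optimizer.
optimizer = torch.optim.Adam((p for p in model.parameters() if p.requires_grad),
                             lr=1e-4, weight_decay=1e-4)
```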

23 pages, 5200 KB  
Article
Genomic Insights into Tumorigenesis in Newly Diagnosed Multiple Myeloma
by Marina Kyriakou and Costas Papaloukas
Diagnostics 2025, 15(17), 2130; https://doi.org/10.3390/diagnostics15172130 - 23 Aug 2025
Viewed by 728
Abstract
Background: Multiple Myeloma (MM) is a malignant plasma cell dyscrasia that progresses through the consecutive asymptomatic, often undiagnosed, precancerous stages of Monoclonal Gammopathy of Undetermined Significance (MGUS) and Asymptomatic Multiple Myeloma (SMM). MM is characterized by low survival rates, severe complications and drug resistance; therefore, understanding the molecular mechanisms of progression is crucial. This study aims to detect genetic mutations, both germline and somatic, that contribute to disease progression and drive tumorigenesis at the final stage of MM, using samples from patients presenting with MGUS or SMM, and newly diagnosed MM patients. Methods: Mutations were identified through a fully computational pipeline, implemented in a Linux and RStudio environment and applied separately to each patient sequence obtained through single-cell RNA sequencing (scRNA-seq). Structural and functional mutation types were identified by stage, along with the affected genes. The analysis included quality control, removal of the Unique Molecular Identifiers (UMIs), trimming, genome mapping and result visualization. Results: The findings revealed frequent germline and somatic mutations, with distinct structural and functional patterns across disease stages. Mutations in key genes were identified, pointing to molecules that may play a central role in carcinogenesis and disease progression. Notable examples include the HLA-A, HLA-B and HLA-C genes, as well as the KIF, EP400 and KDM gene families, with the first four already confirmed. Comparative analysis between the stages highlighted molecular transition events from one stage to another. Emphasis was given to novel genes discovered in newly diagnosed MM patients that might contribute to tumorigenesis. Conclusions: This study contributes to the understanding of the genetic basis of plasma cell dyscrasias and the transition events between the stages, offering insights that could aid in early detection and diagnosis, guide the development of personalized therapeutic strategies, and improve the understanding of mechanisms responsible for resistance to existing therapies. Full article
(This article belongs to the Section Pathology and Molecular Diagnostics)

26 pages, 5268 KB  
Article
Blurred Lesion Image Segmentation via an Adaptive Scale Thresholding Network
by Qi Chen, Wenmin Wang, Zhibing Wang, Haomei Jia and Minglu Zhao
Appl. Sci. 2025, 15(17), 9259; https://doi.org/10.3390/app15179259 - 22 Aug 2025
Viewed by 590
Abstract
Medical image segmentation is crucial for disease diagnosis, as precise results aid clinicians in locating lesion regions. However, lesions often have blurred boundaries and complex shapes, challenging traditional methods in capturing clear edges and impacting accurate localization and complete excision. Small lesions are also critical but prone to detail loss during downsampling, reducing segmentation accuracy. To address these issues, we propose a novel adaptive scale thresholding network (AdSTNet) that acts as a lightweight post-processing network for enhancing sensitivity to lesion edges and cores through a dual-threshold adaptive mechanism. The dual-threshold adaptive mechanism is a key architectural component that includes a main threshold map for core localization and an edge threshold map for more precise boundary detection. AdSTNet is compatible with any segmentation network and introduces only a small computational and parameter cost. Additionally, Spatial Attention and Channel Attention (SACA), the Laplacian operator, and the Fusion Enhancement module are introduced to improve feature processing. SACA enhances spatial and channel attention for core localization; the Laplacian operator retains edge details without added complexity; and the Fusion Enhancement module combines a concatenation operation with a Convolutional Gated Linear Unit (ConvGLU) to enhance feature intensities, improving edge and small-lesion segmentation. Experiments show that AdSTNet achieves notable performance gains on the ISIC 2018, BUSI, and Kvasir-SEG datasets. Compared with the original U-Net, our method attains mIoU/mDice of 83.40%/90.24% on ISIC, 71.66%/80.32% on BUSI, and 73.08%/81.91% on Kvasir-SEG. Moreover, similar improvements are observed with the other segmentation networks. Full article
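The dual-threshold idea described above, a main threshold for the lesion core plus an edge-aware threshold informed by a Laplacian operator, is illustrated below with fixed values; AdSTNet itself learns these threshold maps adaptively, so this is only a simplified post-processing sketch.

```python
# Hedged sketch: core threshold plus Laplacian-informed edge threshold applied
# to a segmentation probability map. Fixed thresholds are for illustration only;
# the paper's network predicts adaptive threshold maps.
import cv2
import numpy as np

def dual_threshold_mask(prob_map, core_threshold=0.5, edge_threshold=0.35):
    """prob_map: (H, W) float32 probabilities from any segmentation network."""
    core = prob_map >= core_threshold                      # confident lesion core
    # Laplacian of the probability map highlights boundary regions.
    lap = cv2.Laplacian(prob_map.astype(np.float32), cv2.CV_32F, ksize=3)
    boundary = (np.abs(lap) > np.abs(lap).mean()) & (prob_map >= edge_threshold)
    return (core | boundary).astype(np.uint8)              # refined binary mask
```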
