Search Results (133)

Search Parameters:
Keywords = deep learning-based radiomics

24 pages, 2389 KiB  
Article
CT-Based Habitat Radiomics Combining Multi-Instance Learning for Early Prediction of Post-Neoadjuvant Lymph Node Metastasis in Esophageal Squamous Cell Carcinoma
by Qinghe Peng, Shumin Zhou, Runzhe Chen, Jinghui Pan, Xin Yang, Jinlong Du, Hongdong Liu, Hao Jiang, Xiaoyan Huang, Haojiang Li and Li Chen
Bioengineering 2025, 12(8), 813; https://doi.org/10.3390/bioengineering12080813 - 28 Jul 2025
Abstract
Early prediction of lymph node metastasis (LNM) following neoadjuvant therapy (NAT) is crucial for timely treatment optimization in esophageal squamous cell carcinoma (ESCC). This study developed and validated a computed tomography-based radiomic model for predicting pathologically confirmed LNM status at the time of surgery in ESCC patients after NAT. A total of 469 ESCC patients from Sun Yat-sen University Cancer Center were retrospectively enrolled and randomized into a training cohort (n = 328) and a test cohort (n = 141). Three signatures were constructed: the tumor-habitat-based signature (Habitat_Rad), derived from radiomic features of three tumor subregions identified via K-means clustering; the multiple instance learning-based signature (MIL_Rad), combining features from 2.5D deep learning models; and the clinicoradiological signature (Clinic), developed through multivariate logistic regression. A combined radiomic nomogram integrating these signatures outperformed the individual models, achieving areas under the curve (AUCs) of 0.929 (95% CI, 0.901–0.957) and 0.852 (95% CI, 0.778–0.925) in the training and test cohorts, respectively. The decision curve analysis confirmed a high net clinical benefit, highlighting the nomogram’s potential for accurate LNM prediction after NAT and guiding individualized therapy. Full article
(This article belongs to the Special Issue Machine Learning Methods for Biomedical Imaging)
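The habitat signature above depends on clustering tumor voxels into subregions before per-habitat feature extraction. As an illustration only, not the authors' implementation, here is a minimal K-means habitat-clustering sketch with scikit-learn; `habitat_labels`, `ct_volume`, and `tumor_mask` are hypothetical names, and a real pipeline would cluster richer voxel-level texture features than raw intensity.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

def habitat_labels(ct_volume: np.ndarray, tumor_mask: np.ndarray, n_habitats: int = 3) -> np.ndarray:
    """Cluster tumor voxels into habitats from simple intensity features.

    ct_volume and tumor_mask are hypothetical inputs: a CT image and a boolean
    tumor mask, both 3D NumPy arrays of the same shape.
    """
    voxels = ct_volume[tumor_mask]                      # 1D array of tumor voxel values
    # Minimal per-voxel feature: intensity only; habitat pipelines typically add
    # local texture/entropy features before clustering.
    features = StandardScaler().fit_transform(voxels.reshape(-1, 1))
    labels = KMeans(n_clusters=n_habitats, n_init=10, random_state=0).fit_predict(features)
    habitat_map = np.zeros_like(ct_volume, dtype=np.int8)
    habitat_map[tumor_mask] = labels + 1                # 0 = background, 1..k = habitats
    return habitat_map

# Tiny synthetic example: a 20^3 volume with a spherical "tumor".
vol = np.random.default_rng(0).normal(40, 15, size=(20, 20, 20))
zz, yy, xx = np.ogrid[:20, :20, :20]
mask = (zz - 10) ** 2 + (yy - 10) ** 2 + (xx - 10) ** 2 < 36
print(np.unique(habitat_labels(vol, mask)))   # [0 1 2 3]

# Per-habitat radiomic features (e.g., via PyRadiomics) would then feed the
# Habitat_Rad signature alongside the MIL-based and clinical signatures.
```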
22 pages, 684 KiB  
Review
Radiomics Beyond Radiology: Literature Review on Prediction of Future Liver Remnant Volume and Function Before Hepatic Surgery
by Fabrizio Urraro, Giulia Pacella, Nicoletta Giordano, Salvatore Spiezia, Giovanni Balestrucci, Corrado Caiazzo, Claudio Russo, Salvatore Cappabianca and Gianluca Costa
J. Clin. Med. 2025, 14(15), 5326; https://doi.org/10.3390/jcm14155326 - 28 Jul 2025
Abstract
Background: Post-hepatectomy liver failure (PHLF) is the most worrisome complication after a major hepatectomy and is the leading cause of postoperative mortality. The most important predictor of PHLF is the future liver remnant (FLR), the volume of the liver that will remain after the hepatectomy, representing a major concern for hepatobiliary surgeons, radiologists, and patients. Therefore, an accurate preoperative assessment of the FLR and the prediction of PHLF are crucial to minimize risks and enhance patient outcomes. Recent radiomics and deep learning models show potential in predicting PHLF and the FLR by integrating imaging and clinical data. However, most studies lack external validation and methodological homogeneity and rely on small, single-center cohorts. This review outlines current CT-based approaches for surgical risk stratification and key limitations hindering clinical translation. Methods: A literature analysis was performed on the PubMed database. We reviewed original articles using the following keywords: [(Artificial intelligence OR radiomics OR machine learning OR deep learning OR neural network OR texture analysis) AND liver resection AND CT]. Results: Of the 153 pertinent papers found, we highlighted those addressing the prediction of PHLF and of the FLR. Models were built using automated machine learning (ML) and deep learning (DL) algorithms. Conclusions: Radiomics models seem reliable and applicable to clinical practice in the preoperative prediction of PHLF and the FLR in patients undergoing major liver surgery. Further studies with larger validation cohorts are required. Full article
(This article belongs to the Special Issue Advances in Gastroenterological Surgery)
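The PubMed query reported in the Methods can be run programmatically. Below is a small sketch using Biopython's Entrez module; this is an assumption about tooling (the authors do not state how the search was executed), and the contact email is a placeholder you must set.

```python
from Bio import Entrez

Entrez.email = "you@example.org"  # placeholder; NCBI requires a contact address

query = ("(Artificial intelligence OR radiomics OR machine learning OR deep learning "
         "OR neural network OR texture analysis) AND liver resection AND CT")

# Search PubMed and retrieve the matching PMIDs.
handle = Entrez.esearch(db="pubmed", term=query, retmax=200)
record = Entrez.read(handle)
handle.close()

print(f"{record['Count']} records found")
print(record["IdList"][:10])  # first ten PMIDs
```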
14 pages, 330 KiB  
Review
Integrating Radiomics and Deep-Learning for Prognostic Evaluation in Nasopharyngeal Carcinoma
by Irina Maria Pușcaș, Anda Gâta, Alexandra Roman, Silviu Albu, Vlad Alexandru Gâta and Alexandru Irimie
Medicina 2025, 61(7), 1310; https://doi.org/10.3390/medicina61071310 - 21 Jul 2025
Abstract
Nasopharyngeal carcinoma (NPC) represents a prevalent malignant tumor within the head and neck region, and enhancing the precision of prognostic assessments is a critical objective. Recent advancements in the integration of artificial intelligence (AI) and medical imaging have spurred a surge in research focusing on NPC image analysis through AI applications, particularly employing radiomics and artificial neural network approaches. This review provides a detailed examination of the prognostic advancement in NPC, utilizing imaging studies based on radiomics and deep learning techniques. The findings from these studies offer a promising outlook for achieving exceptionally precise prognoses regarding survival and treatment responses in NPC. The limitations of existing research and the potential for further application of radiomics and deep learning in NPC imaging are explored. It is recommended that future research efforts should aim to develop a comprehensive, labeled dataset of NPC images and prioritize studies that leverage AI for NPC screening. Full article
(This article belongs to the Section Oncology)
28 pages, 1727 KiB  
Review
Computational and Imaging Approaches for Precision Characterization of Bone, Cartilage, and Synovial Biomolecules
by Rahul Kumar, Kyle Sporn, Vibhav Prabhakar, Ahab Alnemri, Akshay Khanna, Phani Paladugu, Chirag Gowda, Louis Clarkson, Nasif Zaman and Alireza Tavakkoli
J. Pers. Med. 2025, 15(7), 298; https://doi.org/10.3390/jpm15070298 - 9 Jul 2025
Abstract
Background/Objectives: Degenerative joint diseases (DJDs) involve intricate molecular disruptions within bone, cartilage, and synovial tissues, often preceding overt radiographic changes. These tissues exhibit complex biomolecular architectures and their degeneration leads to microstructural disorganization and inflammation that are challenging to detect with conventional imaging techniques. This review aims to synthesize recent advances in imaging, computational modeling, and sequencing technologies that enable high-resolution, non-invasive characterization of joint tissue health. Methods: We examined advanced modalities including high-resolution MRI (e.g., T1ρ, sodium MRI), quantitative and dual-energy CT (qCT, DECT), and ultrasound elastography, integrating them with radiomics, deep learning, and multi-scale modeling approaches. We also evaluated RNA-seq, spatial transcriptomics, and mass spectrometry-based proteomics for omics-guided imaging biomarker discovery. Results: Emerging technologies now permit detailed visualization of proteoglycan content, collagen integrity, mineralization patterns, and inflammatory microenvironments. Computational frameworks ranging from convolutional neural networks to finite element and agent-based models enhance diagnostic granularity. Multi-omics integration links imaging phenotypes to gene and protein expression, enabling predictive modeling of tissue remodeling, risk stratification, and personalized therapy planning. Conclusions: The convergence of imaging, AI, and molecular profiling is transforming musculoskeletal diagnostics. These synergistic platforms enable early detection, multi-parametric tissue assessment, and targeted intervention. Widespread clinical integration requires robust data infrastructure, regulatory compliance, and physician education, but offers a pathway toward precision musculoskeletal care. Full article
(This article belongs to the Special Issue Cutting-Edge Diagnostics: The Impact of Imaging on Precision Medicine)

28 pages, 2586 KiB  
Review
Diagnostic, Therapeutic, and Prognostic Applications of Artificial Intelligence (AI) in the Clinical Management of Brain Metastases (BMs)
by Kyriacos Evangelou, Panagiotis Zemperligkos, Anastasios Politis, Evgenia Lani, Enrique Gutierrez-Valencia, Ioannis Kotsantis, Georgios Velonakis, Efstathios Boviatsis, Lampis C. Stavrinou and Aristotelis Kalyvas
Brain Sci. 2025, 15(7), 730; https://doi.org/10.3390/brainsci15070730 - 8 Jul 2025
Abstract
Brain metastases (BMs) are the most common intracranial tumors in adults. Their heterogeneity, potential multifocality, and complex biomolecular behavior pose significant diagnostic and therapeutic challenges. Artificial intelligence (AI) has the potential to revolutionize BM diagnosis by facilitating early lesion detection, precise imaging segmentation, and non-invasive molecular characterization. Machine learning (ML) and deep learning (DL) models have shown promising results in differentiating BMs from other intracranial tumors with similar imaging characteristics—such as gliomas and primary central nervous system lymphomas (PCNSLs)—and predicting tumor features (e.g., genetic mutations) that can guide individualized and targeted therapies. Intraoperatively, AI-driven systems can enable optimal tumor resection by integrating functional brain maps into preoperative imaging, thus facilitating the identification and safeguarding of eloquent brain regions through augmented reality (AR)-assisted neuronavigation. Even postoperatively, AI can be instrumental for radiotherapy planning personalization through the optimization of dose distribution, maximizing disease control while minimizing adjacent healthy tissue damage. Applications in systemic chemo- and immunotherapy include predictive insights into treatment responses; AI can analyze genomic and radiomic features to facilitate the selection of the most suitable, patient-specific treatment regimen, especially for those whose disease demonstrates specific genetic profiles such as epidermal growth factor receptor mutations (e.g., EGFR, HER2). Moreover, AI-based prognostic models can significantly ameliorate survival and recurrence risk prediction, further contributing to follow-up strategy personalization. Despite these advancements and the promising landscape, multiple challenges—including data availability and variability, decision-making interpretability, and ethical, legal, and regulatory concerns—limit the broader implementation of AI into the everyday clinical management of BMs. Future endeavors should thus prioritize the development of generalized AI models, the combination of large and diverse datasets, and the integration of clinical and molecular data into imaging, in an effort to maximally enhance the clinical application of AI in BM care and optimize patient outcomes. Full article
(This article belongs to the Section Neuro-oncology)

23 pages, 5584 KiB  
Article
Machine Learning and Deep Learning Hybrid Approach Based on Muscle Imaging Features for Diagnosis of Esophageal Cancer
by Yuan Hong, Hanlin Wang, Qi Zhang, Peng Zhang, Kang Cheng, Guodong Cao, Renquan Zhang and Bo Chen
Diagnostics 2025, 15(14), 1730; https://doi.org/10.3390/diagnostics15141730 - 8 Jul 2025
Abstract
Background: The rapid advancement of radiomics and artificial intelligence (AI) technology has provided novel tools for the diagnosis of esophageal cancer. This study innovatively combines muscle imaging features with conventional esophageal imaging features to construct deep learning diagnostic models. Methods: This retrospective study included 1066 patients undergoing radical esophagectomy. Preoperative computed tomography (CT) images covering esophageal, stomach, and muscle (bilateral iliopsoas and erector spinae) regions were segmented automatically with manual adjustments. Diagnostic models were developed using deep learning (2D and 3D neural networks) and traditional machine learning (11 algorithms with PyRadiomics-derived features). Multimodal features underwent Principal Component Analysis (PCA) for dimension reduction and were fused for final analysis. Results: Comparative analysis of 1066 patients’ CT imaging revealed the muscle-based model outperformed the esophageal plus stomach model in predicting N2 staging (0.63 ± 0.11 vs. 0.52 ± 0.11, p = 0.03). Subsequently, multimodal fusion models were established for predicting pathological subtypes, T staging, and N staging. The logistic regression (LR) fusion model showed optimal performance in predicting pathological subtypes, achieving accuracy (ACC) of 0.919 in the training set and 0.884 in the validation set. For predicting T staging, the support vector machine (SVM) model demonstrated the highest accuracy, with training and validation accuracies of 0.909 and 0.907, respectively. The multilayer perceptron (MLP) fusion model achieved the best performance among all models tested for N staging prediction, although the accuracy remained moderate (ACC = 0.704 in the training set and 0.685 in the validation set), indicating potential for further optimization. Fusion models significantly outperformed single-modality models. Conclusions: Based on CT imaging data from 1066 patients, this study systematically constructed predictive models for pathological subtypes, T staging, and N staging of esophageal cancer. Comparative analysis of models using esophageal, esophageal plus stomach, and muscle modalities demonstrated that muscle imaging features contribute to diagnostic accuracy. Multimodal fusion models consistently showed superior performance. Full article
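To make the fusion step concrete, here is a hedged scikit-learn sketch of per-modality PCA reduction followed by concatenation and a logistic-regression classifier, in the spirit of the LR fusion model above. All array names, feature counts, and labels are synthetic placeholders, not the study's data.

```python
import numpy as np
from sklearn.compose import ColumnTransformer
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 200
radiomics_esoph = rng.normal(size=(n, 107))   # placeholder PyRadiomics features (esophagus + stomach)
radiomics_muscle = rng.normal(size=(n, 107))  # placeholder muscle features (iliopsoas, erector spinae)
deep_feats = rng.normal(size=(n, 512))        # placeholder 2D/3D CNN embeddings
X = np.hstack([radiomics_esoph, radiomics_muscle, deep_feats])
y = rng.integers(0, 2, size=n)                # placeholder label (e.g., pathological subtype)

# Per-modality standardization + PCA, then concatenation, then a linear classifier.
per_block = lambda: make_pipeline(StandardScaler(), PCA(n_components=20, random_state=0))
fusion = ColumnTransformer([
    ("esophagus", per_block(), slice(0, 107)),
    ("muscle",    per_block(), slice(107, 214)),
    ("deep",      per_block(), slice(214, 214 + 512)),
])
model = make_pipeline(fusion, LogisticRegression(max_iter=1000))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, stratify=y, random_state=0)
model.fit(X_tr, y_tr)
print("validation ACC:", accuracy_score(y_te, model.predict(X_te)))
```

Fitting the scaler and PCA inside the pipeline keeps the dimension reduction restricted to the training fold, which is the usual safeguard when fusing high-dimensional radiomic and deep features.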

14 pages, 4768 KiB  
Article
Deep Learning with Transfer Learning on Digital Breast Tomosynthesis: A Radiomics-Based Model for Predicting Breast Cancer Risk
by Francesca Galati, Roberto Maroncelli, Chiara De Nardo, Lucia Testa, Gloria Barcaroli, Veronica Rizzo, Giuliana Moffa and Federica Pediconi
Diagnostics 2025, 15(13), 1631; https://doi.org/10.3390/diagnostics15131631 - 26 Jun 2025
Abstract
Background: Digital breast tomosynthesis (DBT) is a valuable imaging modality for breast cancer detection; however, its interpretation remains time-consuming and subject to inter-reader variability. This study aimed to develop and evaluate two deep learning (DL) models based on transfer learning for the binary classification of breast lesions (benign vs. malignant) using DBT images to support clinical decision-making and risk stratification. Methods: In this retrospective monocentric study, 184 patients with histologically or clinically confirmed benign (107 cases, 58.2%) or malignant (77 cases, 41.8%) breast lesions were included. Each case underwent DBT with a single lesion manually segmented for radiomic analysis. Two convolutional neural network (CNN) architectures—ResNet50 and DenseNet201—were trained using transfer learning from ImageNet weights. A 10-fold cross-validation strategy with ensemble voting was applied. Model performance was evaluated through ROC–AUC, accuracy, sensitivity, specificity, PPV, and NPV. Results: The ResNet50 model outperformed DenseNet201 across most metrics. On the internal testing set, ResNet50 achieved a ROC–AUC of 63%, accuracy of 60%, sensitivity of 39%, and specificity of 75%. The DenseNet201 model yielded a lower ROC–AUC of 55%, accuracy of 55%, and sensitivity of 24%. Both models demonstrated relatively high specificity, indicating potential utility in ruling out malignancy, though sensitivity remained suboptimal. Conclusions: This study demonstrates the feasibility of using transfer learning-based DL models for lesion classification on DBT. While the overall performance was moderate, the results highlight both the potential and current limitations of AI in breast imaging. Further studies and approaches are warranted to enhance model robustness and clinical applicability. Full article
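For reference, a minimal PyTorch/torchvision sketch of ImageNet transfer learning with the ResNet50 head replaced for binary benign/malignant classification; it illustrates the general setup only and omits the authors' 10-fold cross-validation and ensemble voting. The dummy tensors stand in for preprocessed DBT slices.

```python
import torch
import torch.nn as nn
from torchvision import models

# Load ImageNet weights (downloaded on first use) and replace the head for 2 classes.
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
model.fc = nn.Linear(model.fc.in_features, 2)

# Freeze the backbone and fine-tune only the new head, a common transfer-learning setup.
for name, param in model.named_parameters():
    param.requires_grad = name.startswith("fc.")

optimizer = torch.optim.Adam([p for p in model.parameters() if p.requires_grad], lr=1e-4)
criterion = nn.CrossEntropyLoss()

# Dummy batch standing in for preprocessed DBT slices (3-channel, 224x224).
images = torch.randn(4, 3, 224, 224)
labels = torch.randint(0, 2, (4,))

model.train()
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
print("one training step done, loss =", float(loss))
```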

14 pages, 3504 KiB  
Article
Multimodal Deep Learning for Stage Classification of Head and Neck Cancer Using Masked Autoencoders and Vision Transformers with Attention-Based Fusion
by Anas Turki, Ossama Alshabrawy and Wai Lok Woo
Cancers 2025, 17(13), 2115; https://doi.org/10.3390/cancers17132115 - 24 Jun 2025
Abstract
Head and neck squamous cell carcinoma (HNSCC) is a prevalent and aggressive cancer, and accurate staging using the AJCC system is essential for treatment planning. This study aims to enhance AJCC staging by integrating both clinical and imaging data using a multimodal deep learning pipeline. We propose a framework that employs a VGG16-based masked autoencoder (MAE) for self-supervised visual feature learning, enhanced by attention mechanisms (CBAM and BAM), and fuses image and clinical features using an attention-weighted fusion network. The models, benchmarked on the HNSCC and HN1 datasets, achieved approximately 80% accuracy (four classes) and ~66% accuracy (five classes), with notable AUC improvements, especially under BAM. The integration of clinical features significantly enhances stage-classification performance, setting a precedent for robust multimodal pipelines in radiomics-based oncology applications. Full article
(This article belongs to the Section Methods and Technologies Development)
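The attention-weighted fusion of image and clinical features can be sketched as a small PyTorch module that learns one score per modality and softmax-normalizes the scores before concatenation. Layer sizes, embedding dimensions, and the four-class head are illustrative assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

class AttentionFusion(nn.Module):
    """Weight each modality embedding by a learned attention score, then classify."""

    def __init__(self, img_dim=512, clin_dim=16, hidden=128, n_classes=4):
        super().__init__()
        self.img_proj = nn.Linear(img_dim, hidden)
        self.clin_proj = nn.Linear(clin_dim, hidden)
        # One attention logit per modality, computed from its projected embedding.
        self.attn = nn.Linear(hidden, 1)
        self.classifier = nn.Sequential(nn.ReLU(), nn.Linear(2 * hidden, n_classes))

    def forward(self, img_feat, clin_feat):
        h_img = torch.relu(self.img_proj(img_feat))
        h_clin = torch.relu(self.clin_proj(clin_feat))
        # Softmax over the two modality scores gives the fusion weights.
        scores = torch.cat([self.attn(h_img), self.attn(h_clin)], dim=1)   # (B, 2)
        w = torch.softmax(scores, dim=1)
        fused = torch.cat([w[:, :1] * h_img, w[:, 1:] * h_clin], dim=1)    # (B, 2*hidden)
        return self.classifier(fused)

# img_feat would come from the masked-autoencoder image encoder; clin_feat from tabular data.
model = AttentionFusion()
logits = model(torch.randn(8, 512), torch.randn(8, 16))
print(logits.shape)  # torch.Size([8, 4])
```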

13 pages, 2699 KiB  
Article
Development of AI-Based Predictive Models for Osteoporosis Diagnosis in Postmenopausal Women from Panoramic Radiographs
by Francesco Fanelli, Giuseppe Guglielmi, Giuseppe Troiano, Federico Rivara, Giovanni Passeri, Gianluca Prencipe, Khrystyna Zhurakivska, Riccardo Guglielmi and Elena Calciolari
J. Clin. Med. 2025, 14(13), 4462; https://doi.org/10.3390/jcm14134462 - 23 Jun 2025
Abstract
Objectives: The aim of this study was to develop AI-based predictive models to assess the risk of osteoporosis in postmenopausal women using panoramic radiographs (OPTs). Methods: A total of 301 panoramic radiographs (OPTs) from postmenopausal women were collected and labeled based on DXA-assessed bone mineral density. Of these, 245 OPTs from the Hospital of San Giovanni Rotondo were used for model training and internal testing, while 56 OPTs from the University of Parma served as an external validation set. A mandibular region of interest (ROI) was defined on each image. Predictive models were developed using classical radiomics, deep radiomics, and convolutional neural networks (CNNs), evaluated based on AUC, accuracy, sensitivity, and specificity. Results: Among the tested approaches, classical radiomics showed limited predictive ability (AUC = 0.514), whereas deep radiomics using DenseNet-121 features combined with logistic regression achieved the best performance in this group (AUC = 0.722). For end-to-end CNNs, ResNet-50 using a hybrid feature extraction strategy achieved the highest AUC in external validation (AUC = 0.786), with a sensitivity of 90.5%. While internal testing yielded high performance metrics, external validation revealed reduced generalizability, highlighting the challenges of translating AI models into clinical practice. Conclusions: AI-based models show potential for opportunistic osteoporosis screening from OPT images. Although the results are promising, particularly those obtained with deep radiomics and transfer learning strategies, further refinement and validation in larger and more diverse populations are essential before clinical application. These models could support the early, non-invasive identification of at-risk patients, complementing current diagnostic pathways. Full article
(This article belongs to the Section Dentistry, Oral Surgery and Oral Medicine)
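A hedged sketch of the "deep radiomics" arm described above: a pretrained DenseNet-121 used as a frozen feature extractor feeding a logistic-regression classifier. The ROI tensors and labels below are random placeholders for the mandibular OPT crops and DXA-derived labels.

```python
import numpy as np
import torch
import torch.nn as nn
from torchvision import models
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# DenseNet-121 pretrained on ImageNet, with the classifier removed so the
# 1024-dimensional pooled features are returned instead of class logits.
backbone = models.densenet121(weights=models.DenseNet121_Weights.IMAGENET1K_V1)
backbone.classifier = nn.Identity()
backbone.eval()

def deep_features(rois: torch.Tensor) -> np.ndarray:
    """rois: (N, 3, 224, 224) preprocessed mandibular ROI crops (placeholder input)."""
    with torch.no_grad():
        return backbone(rois).numpy()

# Placeholder data standing in for the OPT ROIs and DXA-derived labels.
rois = torch.randn(60, 3, 224, 224)
labels = np.random.default_rng(0).integers(0, 2, size=60)   # 1 = osteoporosis

X = deep_features(rois)
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.25, random_state=0, stratify=labels)
clf = LogisticRegression(max_iter=2000).fit(X_tr, y_tr)
print("test AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
```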

16 pages, 1443 KiB  
Article
Radiodosiomics Prediction of Treatment Failures Prior to Chemoradiotherapy in Head-and-Neck Squamous Cell Carcinoma
by Hidemi Kamezawa and Hidetaka Arimura
Appl. Sci. 2025, 15(12), 6941; https://doi.org/10.3390/app15126941 - 19 Jun 2025
Abstract
Predicting treatment failure (TF) in head-and-neck squamous cell carcinoma (HNSCC) patients before treatment can help in selecting a more appropriate treatment approach. We investigated a novel radiodosiomics approach to predict TF prior to chemoradiation in HNSCC patients. Computed tomography (CT) images, dose distributions (DDs), and clinical data from 172 cases were collected from a public database. The cases were divided into the training (n = 140) and testing (n = 32) datasets. A total of 1027 features, including conventional radiomic (R) features, local binary pattern-based (L) features, and topological (T) features, were extracted from the CT images and DDs of the tumor region. Moreover, deep (D) features were extracted from a deep learning-based prediction model. The Coxnet algorithm was employed to select significant features. Twenty-two treatment failure prediction models were constructed based on Rad-scores. TF prediction models were assessed using the concordance index (C-index) and statistically significant variations in the Kaplan–Meier curves between the two risk groups. The Kaplan–Meier curves of the DD-based T (DD-T) model displayed statistically significant differences. The highest C-index of the testing dataset for this model was 0.760. The proposed radiodosiomics models could potentially demonstrate greater accuracy in anticipating TF before chemoradiation in HNSCC patients. Full article
(This article belongs to the Special Issue Novel Technologies in Radiology: Diagnosis, Prediction and Treatment)
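The Coxnet feature selection and C-index evaluation can be sketched with scikit-survival, an assumed library choice (the paper does not name its implementation). Feature counts, follow-up times, and event indicators below are synthetic stand-ins for the CT/DD radiomic, LBP, topological, and deep features.

```python
import numpy as np
from sksurv.linear_model import CoxnetSurvivalAnalysis
from sksurv.metrics import concordance_index_censored
from sksurv.util import Surv

rng = np.random.default_rng(0)
n, p = 140, 300                                    # 140 training cases; 300 synthetic features
X = rng.normal(size=(n, p))                        # stand-in for the extracted feature matrix
time = rng.exponential(scale=24.0, size=n)         # months to treatment failure or censoring (synthetic)
event = rng.integers(0, 2, size=n).astype(bool)    # True = treatment failure observed (synthetic)
y = Surv.from_arrays(event=event, time=time)

# Elastic-net penalized Cox model ("Coxnet"); the L1 part drives feature selection.
coxnet = CoxnetSurvivalAnalysis(l1_ratio=0.9, alpha_min_ratio=0.01)
coxnet.fit(X, y)

coefs = coxnet.coef_[:, -1]                        # coefficients at the smallest penalty on the path
selected = np.flatnonzero(coefs)
print(f"{selected.size} features with non-zero coefficients")

# Rad-score = linear predictor from the selected coefficients; evaluate with the C-index.
rad_score = X @ coefs
cindex = concordance_index_censored(event, time, rad_score)[0]
print("apparent C-index:", round(cindex, 3))
```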

20 pages, 1771 KiB  
Article
An Innovative Artificial Intelligence Classification Model for Non-Ischemic Cardiomyopathy Utilizing Cardiac Biomechanics Derived from Magnetic Resonance Imaging
by Liqiang Fu, Peifang Zhang, Liuquan Cheng, Peng Zhi, Jiayu Xu, Xiaolei Liu, Yang Zhang, Ziwen Xu and Kunlun He
Bioengineering 2025, 12(6), 670; https://doi.org/10.3390/bioengineering12060670 - 19 Jun 2025
Abstract
Significant challenges persist in diagnosing non-ischemic cardiomyopathies (NICMs) owing to early morphological overlap and subtle functional changes. While cardiac magnetic resonance (CMR) offers gold-standard structural assessment, current morphology-based AI models frequently overlook key biomechanical dysfunctions like diastolic/systolic abnormalities. To address this, we propose a dual-path hybrid deep learning framework based on CNN-LSTM and MLP, integrating anatomical features from cine CMR with biomechanical markers derived from intraventricular pressure gradients (IVPGs), significantly enhancing NICM subtype classification by capturing subtle biomechanical dysfunctions overlooked by traditional morphological models. Our dual-path architecture combines a CNN-LSTM encoder for cine CMR analysis and an MLP encoder for IVPG time-series data, followed by feature fusion and dense classification layers. Trained on a multicenter dataset of 1196 patients and externally validated on 137 patients from a distinct institution, the model achieved a superior performance (internal AUC: 0.974; external AUC: 0.962), outperforming ResNet50, VGG16, and radiomics-based SVM. Ablation studies confirmed IVPGs’ significant contribution, while gradient saliency and gradient-weighted class activation mapping (Grad-CAM) visualizations proved the model pays attention to physiologically relevant cardiac regions and phases. The framework maintained robust generalizability across imaging protocols and institutions with minimal performance degradation. By synergizing biomechanical insights with deep learning, our approach offers an interpretable, data-efficient solution for early NICM detection and subtype differentiation, holding strong translational potential for clinical practice. Full article
(This article belongs to the Special Issue Bioengineering in a Generative AI World)
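A structural sketch (PyTorch) of a dual-path design of the kind described: a per-frame CNN feeding an LSTM for the cine CMR sequence, an MLP for the IVPG time series, and a fused classification head. All layer sizes, sequence lengths, and the number of subtypes are illustrative assumptions, not the published architecture.

```python
import torch
import torch.nn as nn

class DualPathNICM(nn.Module):
    """CNN-LSTM over cine frames + MLP over IVPG series, fused for subtype classification."""

    def __init__(self, n_classes=4, ivpg_len=25, hidden=128):
        super().__init__()
        # Per-frame CNN encoder (tiny stand-in for a full image backbone).
        self.frame_cnn = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),                 # -> (B*T, 32)
        )
        self.lstm = nn.LSTM(input_size=32, hidden_size=hidden, batch_first=True)
        self.ivpg_mlp = nn.Sequential(nn.Linear(ivpg_len, 64), nn.ReLU(), nn.Linear(64, hidden), nn.ReLU())
        self.head = nn.Sequential(nn.Linear(2 * hidden, 64), nn.ReLU(), nn.Linear(64, n_classes))

    def forward(self, cine, ivpg):
        # cine: (B, T, 1, H, W) cardiac phases; ivpg: (B, ivpg_len) pressure-gradient curve.
        b, t = cine.shape[:2]
        frame_feats = self.frame_cnn(cine.flatten(0, 1)).view(b, t, -1)
        _, (h_n, _) = self.lstm(frame_feats)                       # last hidden state summarizes the cycle
        fused = torch.cat([h_n[-1], self.ivpg_mlp(ivpg)], dim=1)
        return self.head(fused)

model = DualPathNICM()
logits = model(torch.randn(2, 25, 1, 128, 128), torch.randn(2, 25))
print(logits.shape)  # torch.Size([2, 4])
```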

27 pages, 7939 KiB  
Article
ReAcc_MF: Multimodal Fusion Model with Resource-Accuracy Co-Optimization for Screening Blasting-Induced Pulmonary Nodules in Occupational Health
by Junhao Jia, Qian Jia, Jianmin Zhang, Meilin Zheng, Junze Fu, Jinshan Sun, Zhongyuan Lai and Dan Gui
Appl. Sci. 2025, 15(11), 6224; https://doi.org/10.3390/app15116224 - 31 May 2025
Abstract
Occupational health monitoring in demolition environments requires precise detection of blast-dust-induced pulmonary pathologies. However, it is often hindered by challenges such as contaminated imaging biomarkers, limited access to medical resources in mining areas, and opaque AI-based diagnostic models. This study presents a novel computational framework that combines industrial-grade robustness with clinical interpretability for the diagnosis of pulmonary nodules. We propose a hybrid framework that integrates morphological purification techniques (multi-step filling and convex hull operations) with multi-dimensional feature fusion (radiomics + lightweight deep features). To enhance computational efficiency and interpretability, we design a soft voting ensemble classifier, eliminating the need for complex deep learning architectures. On the LIDC-IDRI dataset, our model achieved an AUC of 0.99 and an accuracy of 0.97 using standard clinical-grade hardware, outperforming state-of-the-art (SOTA) methods while requiring fewer computational resources. Ablation studies, feature weight maps, and normalized mutual information heatmaps confirm the robustness and interpretability of the model, while uncertainty quantification metrics such as the Brier score and Expected Calibration Error (ECE) further validate the model’s clinical applicability and prediction stability. This approach effectively achieves resource-accuracy co-optimization while maintaining low computational costs, making it highly suitable for resource-constrained clinical environments. The modular design of our framework also facilitates extensions to other medical imaging domains without the need for high-end infrastructure. Full article
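The soft-voting step can be sketched with scikit-learn's VotingClassifier, which averages class probabilities over a few lightweight base learners. The feature matrix below is a synthetic stand-in for the fused radiomics and lightweight deep features, and the Brier score is included as one of the calibration metrics mentioned above.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score, brier_score_loss

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 64))                 # placeholder fused radiomics + lightweight deep features
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=400) > 0).astype(int)  # synthetic nodule label

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0, stratify=y)

# Soft voting averages predicted probabilities, so every member must expose predict_proba.
ensemble = VotingClassifier(
    estimators=[
        ("lr", LogisticRegression(max_iter=1000)),
        ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
        ("svm", SVC(probability=True, random_state=0)),
    ],
    voting="soft",
)
ensemble.fit(X_tr, y_tr)
proba = ensemble.predict_proba(X_te)[:, 1]

print("AUC:", round(roc_auc_score(y_te, proba), 3))
print("Brier score:", round(brier_score_loss(y_te, proba), 3))   # calibration metric cited in the abstract
```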

20 pages, 578 KiB  
Review
Harnessing Artificial Intelligence in Pediatric Oncology Diagnosis and Treatment: A Review
by Mubashir Hassan, Saba Shahzadi and Andrzej Kloczkowski
Cancers 2025, 17(11), 1828; https://doi.org/10.3390/cancers17111828 - 30 May 2025
Abstract
Artificial intelligence (AI) is rapidly transforming pediatric oncology by creating new means to improve the accuracy and efficacy of cancer diagnosis and treatment in children. This review critically examines current applications of AI technologies like machine learning (ML) and deep learning (DL) to the main types of pediatric cancers. However, the application of AI to pediatric oncology faces several challenges, including the heterogeneity and rarity of pediatric cancer data, rapid technological development in imaging, and ethical concerns pertaining to data privacy and algorithmic transparency. Collaborative efforts and data-sharing schemes are important to overcome these challenges and facilitate effective training of AI models. This review also points to emerging trends, including AI-based radiomics and proteomics applications, and provides future directions to realize the full potential of AI in pediatric oncology. Finally, AI represents a promising paradigm shift toward precision medicine in childhood cancer treatment, with the potential to enhance survival rates and quality of life for pediatric patients. Full article
(This article belongs to the Section Cancer Informatics and Big Data)

9 pages, 468 KiB  
Review
Artificial Intelligence and Novel Technologies for the Diagnosis of Upper Tract Urothelial Carcinoma
by Nikolaos Kostakopoulos, Vasileios Argyropoulos, Themistoklis Bellos, Stamatios Katsimperis and Athanasios Kostakopoulos
Medicina 2025, 61(5), 923; https://doi.org/10.3390/medicina61050923 - 20 May 2025
Abstract
Background and Objectives: Upper tract urothelial carcinoma (UTUC) is one of the most underdiagnosed but, at the same time, one of the most lethal cancers. In this review article, we investigated the application of artificial intelligence and novel technologies in the prompt identification of high-grade UTUC to prevent metastases and facilitate timely treatment. Materials and Methods: We conducted an extensive search of the literature from the PubMed, Google Scholar, and Cochrane Library databases for studies investigating the application of artificial intelligence for the diagnosis of UTUC, according to the PRISMA guidelines. After the exclusion of unrelated and non-English studies, we included 12 articles in our review. Results: Artificial intelligence systems trained on annotated cytology images have been shown to enhance post-radical nephroureterectomy urine cytology reporting, facilitating the early diagnosis of bladder recurrence and improving diagnostic accuracy for atypical cells. In addition, machine learning models built on textural radiomics features extracted from computed tomography urograms can predict UTUC tumour grade and stage in small-size and especially high-grade tumours. Random forest models have been shown to have the best performance in predicting high-grade UTUC, while hydronephrosis is the most significant independent factor for high-grade tumours. ChatGPT, although not mature enough to provide information on diagnosis and treatment, can assist in improving patients’ understanding of the disease’s epidemiology and risk factors. Computer vision models can augment visualisation in real time during endoscopic ureteral tumour diagnosis and ablation. A deep learning workflow can also be applied to histopathological slides to predict UTUC protein-based subtypes. Conclusions: Artificial intelligence has been shown to greatly facilitate the timely diagnosis of high-grade UTUC by improving the diagnostic accuracy of urine cytology, CT urograms, and ureteroscopy visualisation. Deep learning systems can become a useful and easily accessible tool in physicians’ armamentarium to deal with diagnostic uncertainties in urothelial cancer. Full article
(This article belongs to the Section Urology & Nephrology)

12 pages, 1844 KiB  
Article
Lymph Node Involvement Prediction Using Machine Learning: Analysis of Prostatic Nodule, Prostatic Gland, and Periprostatic Adipose Tissue (PPAT)
by Eliodoro Faiella, Giulia D’amone, Raffaele Ragone, Matteo Pileri, Elva Vergantino, Bruno Beomonte Zobel, Rosario Francesco Grasso and Domiziana Santucci
Appl. Sci. 2025, 15(10), 5426; https://doi.org/10.3390/app15105426 - 13 May 2025
Abstract
Background: Prostate cancer is a major cause of cancer-related mortality among men, with approximately 15% of newly diagnosed patients having pelvic lymph node metastasis (PLNM). For this reason, PLNM identification before localized PCa treatment would significantly impact treatment planning, clinical judgment, and patient outcome prediction. Radiomics has gained popularity for its ability to predict tumor behavior and prognosis without invasive procedures. Magnetic resonance imaging (MRI) is widely used in radiomic workups, particularly for prostate cancer. This study aims to predict lymph node invasion in prostate cancer patients using clinical information and mp-MRI radiomics features extracted from the suspicious nodule, prostate gland, and periprostatic adipose tissue (PPAT). Methods: A retrospective review of 85 patients who underwent mp-MRI at our radiology department between 2016 and 2022 was conducted. This study included patients who underwent prostatectomy and lymphadenectomy with complete histological examination and previous staging mp-MRI and were divided into two groups based on lymph node status (positive/negative). Data were collected from each patient, including clinical information, radiomics, and semantic data (such as tumor MRI characteristics, histological tumor details, and lymph node status (LNS)). MRI exams were conducted using a 1.5-T system and were used to study the prostate gland. A third-year resident manually segmented the prostate nodule, prostatic gland, and periprostatic tissue using an open-source segmentation program. A random forest (RF) machine learning model was developed and tested using ChatGPT version 4.0 software. The model’s performance in predicting LNS was assessed using accuracy, precision, recall, F1 score, and the area under the receiver operating characteristic (ROC) curve (AUC), with sensitivity and specificity evaluated using DeLong’s test. Results: Random forest demonstrated the best performance when considering features extracted from DWI nodules (67% accuracy, 0.83 AUC), T2 fat (78% accuracy, 0.86 AUC), and T2 glands (78% accuracy, 0.97 AUC). The combination of the three sequences in the nodule evaluation was more accurate than the single sequences (88%). Combining all the nodule features with gland and PPAT features, an accuracy of 89% with AUC near 1 was obtained. Compared with the nodule and PPAT analyses, the whole-gland evaluation had the best performance (p ≤ 0.05) in predicting LNS. Conclusions: Precise nodal staging is essential for PCa patients’ prognosis and therapeutic strategy. When compared with a radiologist’s assessment, radiomics models enhance the diagnostic accuracy of lymph node staging for prostate cancer. Although data are still lacking, deep learning models may be able to further improve on this. Full article
(This article belongs to the Special Issue Advances in Diagnostic Radiology)
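A minimal random-forest sketch (scikit-learn) of the LNS prediction step, reported with the metrics listed in the abstract. The feature table is a synthetic placeholder for the nodule, gland, and PPAT radiomics plus clinical variables; this is not the authors' ChatGPT-assisted workflow.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score, roc_auc_score

rng = np.random.default_rng(0)
n = 85                                             # cohort size reported in the abstract
X = rng.normal(size=(n, 120))                      # placeholder: nodule + gland + PPAT radiomics + clinical data
y = rng.integers(0, 2, size=n)                     # lymph node status (0 = negative, 1 = positive)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0, stratify=y)

rf = RandomForestClassifier(n_estimators=500, class_weight="balanced", random_state=0)
rf.fit(X_tr, y_tr)

pred = rf.predict(X_te)
proba = rf.predict_proba(X_te)[:, 1]
print("accuracy :", round(accuracy_score(y_te, pred), 2))
print("precision:", round(precision_score(y_te, pred, zero_division=0), 2))
print("recall   :", round(recall_score(y_te, pred, zero_division=0), 2))
print("F1       :", round(f1_score(y_te, pred, zero_division=0), 2))
print("ROC AUC  :", round(roc_auc_score(y_te, proba), 2))
```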
