Search Results (154)

Search Parameters:
Keywords = automatic treatment planning

10 pages, 615 KiB  
Article
Translating SGRT from Breast to Lung Cancer: A Study on Frameless Immobilization and Real-Time Monitoring Efficacy, Focusing on Setup Accuracy
by Jang Bo Shim, Hakyoung Kim, Sun Myung Kim and Dae Sik Yang
Life 2025, 15(8), 1234; https://doi.org/10.3390/life15081234 - 4 Aug 2025
Viewed by 75
Abstract
Objectives: Surface-Guided Radiation Therapy (SGRT) has been widely adopted in breast cancer radiotherapy, particularly for improving setup accuracy and motion management. Recently, its application in lung cancer has attracted growing interest due to similar needs for precision. This study investigates the feasibility and clinical utility of SGRT in lung cancer treatment, focusing on its effectiveness in patient setup and real-time motion monitoring under frameless immobilization conditions. Materials and Methods: A total of 204 treatment records from 17 patients with primary lung cancer who underwent radiotherapy at Korea University Guro Hospital between October 2024 and April 2025 were retrospectively analyzed. Patients were initially positioned using the Identify system (Varian) in the CT suite, with surface data transferred to the treatment room system. Alignment was performed to within ±1 cm and ±2° across six degrees of freedom. Cone-beam CT (CBCT) was acquired prior to treatment for verification, and treatment commenced when the Distance to Correspondence Surface (DCS) was ≤0.90. Setup deviations from the Identify system were recorded and compared with CBCT in three translational axes to evaluate positioning accuracy and PTV displacement. Results and Conclusions: The Identify system was shown to provide high setup accuracy and reliable real-time motion monitoring in lung cancer radiotherapy. Its ability to detect patient movement and automatically interrupt beam delivery contributes to enhanced treatment safety and precision. In addition, even though the maximum longitudinal (Lng) shift reached up to −1.83 cm with surface-guided setup, and up to 1.78 cm (Lat), 5.26 cm (Lng), 9.16 cm (Vrt) with CBCT-based verification, the use of Identify’s auto-interruption mode (±1 cm in translational axes, ±2° in rotational axes) allowed treatment delivery with PTV motion constrained within ±0.02 cm. 
These results suggest that, due to significant motion in the longitudinal direction, appropriate PTV margins should be considered during treatment planning. The Identify system enhances setup accuracy in lung cancer patients using a surface-guided approach and enables real-time tracking of intra-fractional errors. SGRT, when implemented with systems such as Identify, shows promise as a feasible alternative or complement to conventional IGRT in selected lung cancer cases. Further studies with larger patient cohorts and diverse clinical settings are warranted to validate these findings. Full article
(This article belongs to the Special Issue Current Advances in Lung Cancer Diagnosis and Treatment)
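The ±1 cm / ±2° auto-interruption tolerance described in this abstract reduces to a simple gating rule over six degrees of freedom. The sketch below is illustrative only; the function and field names are hypothetical and not the Identify system's API.

```python
# Sketch of the auto-interruption rule: beam delivery is allowed only
# while all six degrees of freedom stay within tolerance
# (+/-1 cm on translational axes, +/-2 degrees on rotational axes).
# Names are illustrative, not the vendor API.

TRANS_TOL_CM = 1.0   # lat, lng, vrt
ROT_TOL_DEG = 2.0    # pitch, roll, yaw

def beam_allowed(offsets):
    """offsets: dict of translational shifts in cm and rotations in degrees."""
    trans_ok = all(abs(offsets[k]) <= TRANS_TOL_CM for k in ("lat", "lng", "vrt"))
    rot_ok = all(abs(offsets[k]) <= ROT_TOL_DEG for k in ("pitch", "roll", "yaw"))
    return trans_ok and rot_ok

# The maximum longitudinal shift reported above (-1.83 cm) would
# trigger an interruption:
setup = {"lat": 0.3, "lng": -1.83, "vrt": 0.1, "pitch": 0.5, "roll": 0.2, "yaw": 1.0}
print(beam_allowed(setup))  # False: lng exceeds the 1 cm tolerance
```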
Show Figures

Figure 1

16 pages, 2557 KiB  
Article
Explainable AI for Oral Cancer Diagnosis: Multiclass Classification of Histopathology Images and Grad-CAM Visualization
by Jelena Štifanić, Daniel Štifanić, Nikola Anđelić and Zlatan Car
Biology 2025, 14(8), 909; https://doi.org/10.3390/biology14080909 - 22 Jul 2025
Viewed by 352
Abstract
Oral cancer is typically diagnosed through histological examination; however, the primary issue with this type of procedure is tumor heterogeneity, where a subjective aspect of the examination may have a direct effect on the treatment plan for a patient. To reduce inter- and intra-observer variability, artificial intelligence algorithms are often used as computational aids in tumor classification and diagnosis. This research proposes a two-step approach for automatic multiclass grading using oral histopathology images (the first step) and Grad-CAM visualization (the second step) to assist clinicians in diagnosing oral squamous cell carcinoma. The Xception architecture achieved the highest classification values of 0.929 (±σ = 0.087) AUC_macro and 0.942 (±σ = 0.074) AUC_micro. Additionally, Grad-CAM provided visual explanations of the model’s predictions by highlighting the precise areas of histopathology images that influenced the model’s decision. These results emphasize the potential of integrated AI algorithms in medical diagnostics, offering a more precise, dependable, and effective method for disease analysis. Full article
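The two metrics reported above differ in how they average over classes: AUC_macro averages the one-vs-rest AUC per class, while AUC_micro pools all (indicator, score) pairs first. A minimal from-scratch sketch on toy multiclass scores (data purely illustrative):

```python
# Macro- vs micro-averaged one-vs-rest AUC, computed from scratch.

def binary_auc(labels, scores):
    """Mann-Whitney AUC: probability a positive outranks a negative (ties = 0.5)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0 for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def auc_macro_micro(y_true, prob, n_classes):
    # macro: average the per-class one-vs-rest AUCs
    per_class = [binary_auc([1 if y == c else 0 for y in y_true],
                            [p[c] for p in prob]) for c in range(n_classes)]
    macro = sum(per_class) / n_classes
    # micro: flatten all (class-indicator, score) pairs, then one AUC
    flat_y = [1 if y == c else 0 for y in y_true for c in range(n_classes)]
    flat_s = [p[c] for p in prob for c in range(n_classes)]
    micro = binary_auc(flat_y, flat_s)
    return macro, micro

# A perfectly separating toy classifier yields 1.0 for both:
print(auc_macro_micro([0, 1, 2],
                      [[0.8, 0.1, 0.1], [0.1, 0.8, 0.1], [0.1, 0.1, 0.8]],
                      3))  # (1.0, 1.0)
```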

19 pages, 3923 KiB  
Article
Automated Aneurysm Boundary Detection and Volume Estimation Using Deep Learning
by Alireza Bagheri Rajeoni, Breanna Pederson, Susan M. Lessner and Homayoun Valafar
Diagnostics 2025, 15(14), 1804; https://doi.org/10.3390/diagnostics15141804 - 17 Jul 2025
Viewed by 316
Abstract
Background/Objective: Precise aneurysm volume measurement offers a transformative edge for risk assessment and treatment planning in clinical settings. Currently, clinical assessments rely heavily on manual review of medical imaging, a process that is time-consuming and prone to inter-observer variability. The widely accepted standard of care primarily focuses on measuring aneurysm diameter at its widest point, providing a limited perspective on aneurysm morphology and lacking efficient methods to measure aneurysm volumes. Yet, volume measurement can offer deeper insight into aneurysm progression and severity. In this study, we propose an automated approach that leverages the strengths of pre-trained neural networks and expert systems to delineate aneurysm boundaries and compute volumes on an unannotated dataset from 60 patients. The dataset includes slice-level start/end annotations for aneurysm but no pixel-wise aorta segmentations. Method: Our method utilizes a pre-trained UNet to automatically locate the aorta, employs SAM2 to track the aorta through vascular irregularities such as aneurysms down to the iliac bifurcation, and finally uses a Long Short-Term Memory (LSTM) network or expert system to identify the beginning and end points of the aneurysm within the aorta. Results: Despite no manual aorta segmentation, our approach achieves promising accuracy, predicting the aneurysm start point with an R2 score of 71%, the end point with an R2 score of 76%, and the volume with an R2 score of 92%. Conclusions: This technique has the potential to facilitate large-scale aneurysm analysis and improve clinical decision-making by reducing dependence on annotated datasets. Full article
(This article belongs to the Section Machine Learning and Artificial Intelligence in Diagnostics)
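The two quantities this study evaluates can both be written down compactly: a volume obtained by integrating per-slice cross-sectional areas between predicted start and end slices, and the R² score used to rate the predictions. All numbers below are invented for illustration.

```python
# Slice-based volume estimate and R^2 score, as evaluated above.

def volume_from_slices(areas_mm2, start, end, thickness_mm):
    """Sum of cross-sectional area x slice thickness over slices start..end (inclusive)."""
    return sum(areas_mm2[start:end + 1]) * thickness_mm

def r2_score(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1.0 - ss_res / ss_tot

# Hypothetical per-slice areas (mm^2) with 2 mm slice spacing:
areas = [100.0, 200.0, 300.0, 400.0]
print(volume_from_slices(areas, 1, 2, 2.0))  # 1000.0 mm^3
```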

13 pages, 1014 KiB  
Article
Discrete Wavelet Transform-Based Data Fusion with ResUNet Model for Liver Tumor Segmentation
by Ümran Şeker Ertuğrul and Halife Kodaz
Electronics 2025, 14(13), 2589; https://doi.org/10.3390/electronics14132589 - 27 Jun 2025
Viewed by 437
Abstract
Liver tumors negatively affect vital functions such as digestion and nutrient storage, significantly reducing patients’ quality of life. Therefore, early detection and accurate treatment planning are of great importance. This study aims to support physicians by automatically identifying the type and location of tumors, enabling rapid diagnosis and treatment. The segmentation process was carried out using deep learning methods based on artificial intelligence, particularly the U-Net architecture, which is designed for biomedical imaging. U-Net was modified by adding residual blocks, resulting in a deeper architecture called ResUNet. Due to the limited availability of medical data, both normal data fusion and discrete wavelet transform (DWT) methods were applied during the data preprocessing phase. A total of 131 liver tumor images, resized to 120 × 120 pixels, were analyzed. The DWT-based fusion method achieved more successful results, with a dice coefficient of 94.45%. This study demonstrates the effectiveness of artificial intelligence-supported approaches in liver tumor segmentation and suggests that such applications will become more widely used in the medical field in the future. Full article
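The 94.45% figure above is a Dice coefficient, the standard overlap measure for segmentation masks. A minimal sketch on flat 0/1 masks (toy data, not the paper's pipeline):

```python
# Dice coefficient for binary segmentation masks given as flat 0/1 lists.

def dice(pred, target):
    """2|A ∩ B| / (|A| + |B|); defined as 1.0 when both masks are empty."""
    intersection = sum(p * t for p, t in zip(pred, target))
    total = sum(pred) + sum(target)
    return 2.0 * intersection / total if total else 1.0

# One overlapping pixel out of 2 predicted + 1 ground-truth pixels:
print(dice([1, 1, 0, 0], [1, 0, 0, 0]))  # 0.666...
```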

20 pages, 1669 KiB  
Article
Automated Pneumothorax Segmentation with a Spatial Prior Contrast Adapter
by Yiming Jia and Essam A. Rashed
Appl. Sci. 2025, 15(12), 6598; https://doi.org/10.3390/app15126598 - 12 Jun 2025
Viewed by 498
Abstract
Pneumothorax is a critical condition that requires rapid and accurate diagnosis from standard chest radiographs. Identifying and segmenting the location of the pneumothorax are essential for developing an effective treatment plan. nnUNet is a self-configuring, deep learning-based framework for medical image segmentation. Despite adjusting its parameters automatically through data-driven optimization strategies and offering robust feature extraction and segmentation capabilities across diverse datasets, our initial experiments revealed that nnUNet alone struggled to achieve consistently accurate segmentation for pneumothorax, particularly in challenging scenarios where subtle intensity variations and anatomical noise obscure the target regions. This study aims to enhance the accuracy and robustness of pneumothorax segmentation in low-contrast chest radiographs by integrating spatial prior information and attention mechanism into the nnUNet framework. In this study, we introduce the spatial prior contrast adapter (SPCA)-enhanced nnUNet by implementing two modules. First, we integrate an SPCA utilizing the MedSAM foundation model to incorporate spatial prior information of the lung region, effectively guiding the segmentation network to focus on anatomically relevant areas. In the meantime, a probabilistic atlas, which shows the probability of an area prone to pneumothorax, is generated based on the ground truth masks. Both the lung segmentation results and the probabilistic atlas are used as attention maps in nnUNet. Second, we combine the two attention maps as additional input into nnUNet and integrate an attention mechanism into standard nnUNet by using a convolutional block attention module (CBAM). We validate our method by experimenting on the dataset CANDID-PTX, a benchmark dataset representing 19,237 chest radiographs. 
By introducing spatial awareness and intensity adjustments, the model reduces false positives and improves the precision of boundary delineations, ultimately overcoming many of the limitations associated with low-contrast radiographs. Compared with standard nnUNet, SPCA-enhanced nnUNet achieves an average Dice coefficient of 0.81, which indicates an improvement of standard nnUNet by 15%. This study provides a novel approach toward enhancing the segmentation performance of pneumothorax with low contrast in chest X-ray radiographs. Full article
(This article belongs to the Special Issue Applications of Computer Vision and Image Processing in Medicine)
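The probabilistic atlas described above is, at its core, a per-pixel frequency map over the ground-truth masks; it is then used as one of the attention maps. A minimal sketch on toy 0/1 grids (the real atlas is built on full-resolution radiograph masks):

```python
# Per-pixel probabilistic atlas: pixel-wise average of binary masks.

def probabilistic_atlas(masks):
    """Average a list of equally sized binary masks pixel-wise."""
    n = len(masks)
    rows, cols = len(masks[0]), len(masks[0][0])
    return [[sum(m[r][c] for m in masks) / n for c in range(cols)]
            for r in range(rows)]

# Two toy 1x2 masks: the left pixel is positive in both cases,
# the right pixel in only one.
atlas = probabilistic_atlas([[[1, 0]], [[1, 1]]])
print(atlas)  # [[1.0, 0.5]]
```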

11 pages, 2749 KiB  
Article
The Validation of an Artificial Intelligence-Based Software for the Detection and Numbering of Primary Teeth on Panoramic Radiographs
by Heba H. Bakhsh, Dur Alomair, Nada Ahmed AlShehri, Alia U. Alturki, Eman Allam and Sara M. ElKhateeb
Diagnostics 2025, 15(12), 1489; https://doi.org/10.3390/diagnostics15121489 - 11 Jun 2025
Viewed by 434
Abstract
Background: Dental radiographs play a crucial role in diagnosis and treatment planning. With the rise in digital imaging, there is growing interest in leveraging artificial intelligence (AI) to support clinical decision-making. AI technologies can enhance diagnostic accuracy by automating tasks like identifying and locating dental structures. The aim of the current study was to assess and validate the accuracy of an AI-powered application in the detection and numbering of primary teeth on panoramic radiographs. Methods: This study examined 598 archived panoramic radiographs of subjects aged 4–14 years old. Images with poor diagnostic quality were excluded. Three experienced clinicians independently assessed each image to establish the ground truth for primary teeth identification. The same radiographs were then evaluated using EM2AI, an AI-based diagnostic software for the automatic detection and numbering of primary teeth. The AI’s performance was assessed by comparing its output to the ground truth using sensitivity, specificity, predictive values, accuracy, and the Kappa coefficient. Results: EM2AI demonstrated high overall performance in detecting and numbering primary teeth in mixed dentition, with an accuracy of 0.98, a sensitivity of 0.97, a specificity of 0.99, and a Kappa coefficient of 0.96. Detection accuracy for individual teeth ranged from 0.96 to 0.99. The highest sensitivity (0.99) was observed in detecting upper right canines and primary molars, while the lowest sensitivity (0.79–0.85) occurred in detecting lower incisors and the upper left first molar. Conclusions: The AI module demonstrated high accuracy in the automatic detection of primary teeth presence and numbering in panoramic images, with performance metrics exceeding 90%. With further validation, such systems could support automated dental charting, improve electronic dental records, and aid clinical decision-making. Full article
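The sensitivity, specificity, accuracy, and Kappa coefficient reported above all derive from the same confusion-matrix counts. A minimal sketch (the counts are invented, not the study's data):

```python
# Validation metrics from raw confusion-matrix counts.

def detection_metrics(tp, fp, fn, tn):
    total = tp + fp + fn + tn
    sens = tp / (tp + fn)            # sensitivity (recall)
    spec = tn / (tn + fp)            # specificity
    acc = (tp + tn) / total          # accuracy
    # Cohen's kappa: observed agreement corrected for chance agreement
    p_yes = ((tp + fp) / total) * ((tp + fn) / total)
    p_no = ((fn + tn) / total) * ((fp + tn) / total)
    pe = p_yes + p_no
    kappa = (acc - pe) / (1 - pe)
    return {"sensitivity": sens, "specificity": spec,
            "accuracy": acc, "kappa": kappa}

# Perfect agreement on a balanced toy sample:
print(detection_metrics(50, 0, 0, 50))
```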

26 pages, 12177 KiB  
Article
An Efficient Hybrid 3D Computer-Aided Cephalometric Analysis for Lateral Cephalometric and Cone-Beam Computed Tomography (CBCT) Systems
by Laurine A. Ashame, Sherin M. Youssef, Mazen Nabil Elagamy and Sahar M. El-Sheikh
Computers 2025, 14(6), 223; https://doi.org/10.3390/computers14060223 - 7 Jun 2025
Viewed by 630
Abstract
Lateral cephalometric analysis is commonly used in orthodontics for skeletal classification to ensure an accurate and reliable diagnosis for treatment planning. However, most current research depends on analyzing different types of radiographs, which requires more computational time than 3D analysis. Consequently, this study addresses fully automatic orthodontic tracing based on artificial intelligence (AI) applied to 2D and 3D images, by designing a cephalometric system that analyzes the significant landmarks and regions of interest (ROI) needed in orthodontic tracing, especially for the mandible and maxilla teeth. In this research, a computerized system is developed to automate the tasks of orthodontic evaluation for 2D and Cone-Beam Computed Tomography (CBCT, or 3D) measurements. This work was tested on a dataset that contains images of males and females obtained from dental hospitals with patient-informed consent. The dataset consists of 2D lateral cephalometric, panoramic, and CBCT radiographs. Many scenarios were applied to test the proposed system in landmark prediction and detection. Moreover, this study integrates the Grad-CAM (Gradient-Weighted Class Activation Mapping) technique to generate heat maps, providing transparent visualization of the regions the model focuses on during its decision-making process. By enhancing the interpretability of deep learning predictions, Grad-CAM strengthens clinical confidence in the system’s outputs, ensuring that ROI detection aligns with orthodontic diagnostic standards. This explainability is crucial in medical AI applications, where understanding model behavior is as important as achieving high accuracy. The experimental results achieved an accuracy exceeding 98.9%. This research evaluates and differentiates between the two-dimensional and the three-dimensional tracing analyses applied to measurements based on the practices of the European Board of Orthodontics. 
The results demonstrate the proposed methodology’s robustness when applied to cephalometric images. Furthermore, the evaluation of 3D analysis usage provides a clear understanding of the significance of integrated deep-learning techniques in orthodontics. Full article
(This article belongs to the Special Issue Machine Learning Applications in Pattern Recognition)
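Once landmarks are predicted, cephalometric tracing reduces largely to angle measurements between landmark triples (e.g., the SNA angle at nasion between sella and A-point). The sketch below uses hypothetical 2D coordinates; a real system would feed in the predicted landmark positions.

```python
# Angle at `vertex` formed by landmarks `a` and `b`, in degrees.
import math

def angle_deg(a, vertex, b):
    v1 = (a[0] - vertex[0], a[1] - vertex[1])
    v2 = (b[0] - vertex[0], b[1] - vertex[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    n1 = math.hypot(*v1)
    n2 = math.hypot(*v2)
    return math.degrees(math.acos(dot / (n1 * n2)))

# Hypothetical landmark positions (pixels): perpendicular vectors give 90 deg.
print(angle_deg((1.0, 0.0), (0.0, 0.0), (0.0, 1.0)))  # 90.0
```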

15 pages, 2549 KiB  
Article
Automated Implementation of the Edinburgh Visual Gait Score (EVGS)
by Ishaasamyuktha Somasundaram, Albert Tu, Ramiro Olleac, Natalie Baddour and Edward D. Lemaire
Sensors 2025, 25(10), 3226; https://doi.org/10.3390/s25103226 - 21 May 2025
Viewed by 665
Abstract
The Edinburgh Visual Gait Score (EVGS) is a commonly used clinical scale for assessing gait abnormalities, providing insight into diagnosis and treatment planning. However, its manual implementation is resource-intensive and requires time, expertise, and a controlled environment for video recording and analysis. To address these issues, an automated approach for scoring the EVGS was developed. Unlike past methods dependent on controlled environments or simulated videos, the proposed approach integrates pose estimation with new algorithms to handle operational challenges present in the dataset, such as minor camera movement during sagittal recordings, slight zoom variations in coronal views, and partial visibility (e.g., missing head) in some videos. The system uses OpenPose for pose estimation and new algorithms for automatic gait event detection, stride segmentation, and computation of the 17 EVGS parameters across the sagittal and coronal planes. Evaluation of gait videos of patients with cerebral palsy showed high accuracy for parameters such as hip and knee flexion but a need for improvement in pelvic rotation and hindfoot alignment scoring. This automated EVGS approach can minimize the workload for clinicians through the introduction of automated, rapid gait analysis and enable mobile-based applications for clinical decision-making. Full article
(This article belongs to the Section Biomedical Sensors)
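Automatic gait-event detection of the kind described above typically looks for extrema in a keypoint trajectory over frames; a generic local-extremum pass over an ankle coordinate is the simplest version. The trace below is made up, and real EVGS pipelines add filtering and plausibility checks.

```python
# Candidate gait events as strict local maxima of a per-frame signal
# (e.g., an ankle image-coordinate from pose estimation; toy values).

def local_maxima(signal):
    """Indices whose value is strictly greater than both neighbours."""
    return [i for i in range(1, len(signal) - 1)
            if signal[i] > signal[i - 1] and signal[i] > signal[i + 1]]

ankle_y = [100, 104, 110, 107, 103, 108, 112, 109]
print(local_maxima(ankle_y))  # [2, 6]
```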

26 pages, 5404 KiB  
Article
Real-Time Coronary Artery Dominance Classification from Angiographic Images Using Advanced Deep Video Architectures
by Hasan Ali Akyürek
Diagnostics 2025, 15(10), 1186; https://doi.org/10.3390/diagnostics15101186 - 8 May 2025
Viewed by 728
Abstract
Background/Objectives: The automatic identification of coronary artery dominance holds critical importance for clinical decision-making in cardiovascular medicine, influencing diagnosis, treatment planning, and risk stratification. Traditional classification methods rely on the manual visual interpretation of coronary angiograms. However, current deep learning approaches typically classify right and left coronary artery angiograms separately. This study aims to develop and evaluate an integrated video-based deep learning framework for classifying coronary dominance without distinguishing between RCA and LCA angiograms. Methods: Three advanced video-based deep learning models—Temporal Segment Networks (TSNs), Video Swin Transformer (VST), and VideoMAEv2—were implemented using the MMAction2 framework. These models were trained and evaluated on a large dataset derived from a publicly available source. The integrated approach processes entire angiographic video sequences, eliminating the need for separate RCA and LCA identification during preprocessing. Results: The proposed framework demonstrated strong performance in classifying coronary dominance. The best test accuracies achieved using TSNs, Video Swin Transformer, and VideoMAEv2 were 87.86%, 92.12%, and 92.89%, respectively. Transformer-based models showed superior accuracy compared to convolution-based methods, highlighting their effectiveness in capturing spatial–temporal patterns in angiographic videos. Conclusions: This study introduces a unified video-based deep learning approach for coronary dominance classification, eliminating manual arterial branch separation and reducing preprocessing complexity. The results indicate that transformer-based models, particularly VideoMAEv2, offer highly accurate and clinically feasible solutions, contributing to the development of objective and automated diagnostic tools in cardiovascular imaging. Full article
(This article belongs to the Special Issue Cardiovascular Imaging)
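Temporal Segment Networks, one of the three models compared above, sample frames by dividing a video into N equal segments and drawing one frame per segment (randomly at training time, centrally at test time). A sketch of the test-time sampling:

```python
# TSN-style test-time frame sampling: one center frame per segment.

def sample_segment_frames(num_frames, num_segments):
    seg_len = num_frames / num_segments
    return [int(seg_len * i + seg_len / 2) for i in range(num_segments)]

# A 30-frame angiographic clip split into 3 segments:
print(sample_segment_frames(30, 3))  # [5, 15, 25]
```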

34 pages, 15537 KiB  
Article
Explainable Artificial Intelligence for Diagnosis and Staging of Liver Cirrhosis Using Stacked Ensemble and Multi-Task Learning
by Serkan Savaş
Diagnostics 2025, 15(9), 1177; https://doi.org/10.3390/diagnostics15091177 - 6 May 2025
Viewed by 1358
Abstract
Background/Objectives: Liver cirrhosis is a critical chronic condition with increasing global mortality and morbidity rates, emphasizing the necessity for early and accurate diagnosis. This study proposes a comprehensive deep-learning framework for the automatic diagnosis and staging of liver cirrhosis using T2-weighted MRI images. Methods: The methodology integrates stacked ensemble learning, multi-task learning (MTL), and transfer learning within an explainable artificial intelligence (XAI) context to improve diagnostic accuracy, reliability, and transparency. A hybrid model combining multiple pre-trained convolutional neural networks (VGG16, MobileNet, and DenseNet121) with XGBoost as a meta-classifier demonstrated robust performance in binary classification between healthy and cirrhotic cases. Results: The model achieved a mean accuracy of 96.92%, precision of 95.12%, recall of 98.93%, and F1-score of 96.98% across 10-fold cross-validation. For staging (mild, moderate, and severe), the MTL framework reached a main task accuracy of 96.71% and an average AUC of 99.81%, with a powerful performance in identifying severe cases. Grad-CAM visualizations reveal class-specific activation regions, enhancing the transparency and trust in the model’s decision-making. The proposed system was validated using the CirrMRI600+ dataset with a 10-fold cross-validation strategy, achieving high accuracy (AUC: 99.7%) and consistent results across folds. Conclusions: This research not only advances State-of-the-Art diagnostic methods but also addresses the black-box nature of deep learning in clinical applications. The framework offers potential as a decision-support system for radiologists, contributing to early detection, effective staging, personalized treatment planning, and better-informed treatment planning for liver cirrhosis. Full article
(This article belongs to the Section Machine Learning and Artificial Intelligence in Diagnostics)
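The stacked-ensemble step described above feeds the base networks' class-probability outputs to a meta-classifier (XGBoost in the paper). The feature construction is just concatenation; the probability values below are invented for illustration.

```python
# Stacking meta-features: concatenate each base model's probability
# vector for one sample into the meta-classifier's input.

def stack_features(base_model_probs):
    return [p for probs in base_model_probs for p in probs]

# Three base models, binary task (healthy vs cirrhotic); values invented:
vgg16 = [0.10, 0.90]
mobilenet = [0.20, 0.80]
densenet121 = [0.05, 0.95]
meta_input = stack_features([vgg16, mobilenet, densenet121])
print(meta_input)  # [0.1, 0.9, 0.2, 0.8, 0.05, 0.95]
```

The meta-classifier then learns, per class, how much to trust each base model rather than averaging them uniformly.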

28 pages, 8613 KiB  
Article
Real-Time Detection of Meningiomas by Image Segmentation: A Very Deep Transfer Learning Convolutional Neural Network Approach
by Debasmita Das, Chayna Sarkar and Biswadeep Das
Tomography 2025, 11(5), 50; https://doi.org/10.3390/tomography11050050 - 24 Apr 2025
Cited by 1 | Viewed by 1325
Abstract
Background/Objectives: Developing a treatment strategy that effectively prolongs the lives of people with brain tumors requires an accurate diagnosis of the condition. Therefore, improving the preoperative classification of meningiomas is a priority. Machine learning (ML) has made great strides thanks to the development of convolutional neural networks (CNNs) and computer-aided tumor detection systems. The deep convolutional layers automatically extract important and dependable information from the input space, in contrast to more traditional neural network layers. One recent and promising advancement in this field is ML. Still, there is a dearth of studies being carried out in this area. Methods: Therefore, starting with the analysis of magnetic resonance images, we have suggested in this research work a tried-and-tested and methodical strategy for real-time meningioma diagnosis by image segmentation using a very deep transfer learning CNN model or DNN model (VGG-16) with CUDA. Since the VGGNet CNN model has a greater level of accuracy than other deep CNN models like AlexNet, GoogleNet, etc., we have chosen to employ it. The VGG network that we have constructed with very small convolutional filters consists of 13 convolutional layers and 3 fully connected layers. Our VGGNet model takes in an sMRI FLAIR image input. The VGG’s convolutional layers leverage a minimal receptive field, i.e., 3 × 3, the smallest possible size that still captures up/down and left/right. Moreover, there are also 1 × 1 convolution filters acting as a linear transformation of the input. This is followed by a ReLU unit. The convolution stride is fixed at 1 pixel to keep the spatial resolution preserved after convolution. All the hidden layers in our VGG network also use ReLU. A dataset consisting of 264 3D FLAIR sMRI image segments from three different classes (meningioma, tuberculoma, and normal) was employed. The number of epochs in the Sequential Model was set to 10. 
The Keras layers that we used were Dense, Dropout, Flatten, Batch Normalization, and ReLU. Results: According to the simulation findings, our suggested model successfully classified all of the data in the dataset used, with a 99.0% overall accuracy. The performance metrics of the implemented model and confusion matrix for tumor classification indicate the model’s high accuracy in brain tumor classification. Conclusions: The good outcomes demonstrate the possibility of our suggested method as a useful diagnostic tool, promoting better understanding, a prognostic tool for clinical outcomes, and an efficient brain tumor treatment planning tool. It was demonstrated that several performance metrics we computed using the confusion matrix of the previously used model were very good. Consequently, we think that the approach we have suggested is an important way to identify brain tumors. Full article
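The abstract's point about VGG's very small 3×3 filters can be made quantitative: stacking k stride-1 layers of kernel size n gives a receptive field of 1 + k(n − 1), so two 3×3 layers cover the same 5×5 region as one larger filter, with fewer weights and an extra nonlinearity between them.

```python
# Receptive field and parameter count of stacked stride-1 convolutions.

def receptive_field(kernel, layers):
    return 1 + layers * (kernel - 1)

def conv_params(kernel, channels):
    """Weights of one conv with `channels` in and out, bias ignored."""
    return kernel * kernel * channels * channels

# Two 3x3 layers see a 5x5 region with fewer weights than one 5x5 layer
# (64 channels assumed for illustration):
print(receptive_field(3, 2))                        # 5
print(2 * conv_params(3, 64), conv_params(5, 64))   # 73728 102400
```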

18 pages, 5279 KiB  
Article
Optimization-Incorporated Deep Learning Strategy to Automate L3 Slice Detection and Abdominal Segmentation in Computed Tomography
by Seungheon Chae, Seongwon Chae, Tae Geon Kang, Sung Jin Kim and Ahnryul Choi
Bioengineering 2025, 12(4), 367; https://doi.org/10.3390/bioengineering12040367 - 31 Mar 2025
Viewed by 675
Abstract
This study introduces a deep learning-based strategy to automatically detect the L3 slice and segment abdominal tissues from computed tomography (CT) images. Accurate measurement of muscle and fat composition at the L3 level is critical as it can serve as a prognostic biomarker for cancer diagnosis and treatment. However, current manual approaches are time-consuming, while automated detection is prone to class imbalance, since L3 slices constitute only a small fraction of the entire CT dataset. In this study, we propose an optimization-incorporated strategy that integrates augmentation ratio and class weight adjustment as correction design variables within deep learning models. In this retrospective study, the CT dataset was privately collected from 150 prostate cancer and bladder cancer patients at the Department of Urology of Gangneung Asan Hospital. A ResNet50 classifier was used to detect the L3 slice, while standard Unet, Swin-Unet, and SegFormer models were employed to segment abdominal tissues. Bayesian optimization determines optimal augmentation ratios and class weights, mitigating the imbalanced distribution of L3 slices and abdominal tissues. Evaluation of CT data from 150 prostate and bladder cancer patients showed that the optimized models reduced the slice detection error to approximately 0.68 ± 1.26 slices and achieved a Dice coefficient of up to 0.987 ± 0.001 for abdominal tissue segmentation, improvements over the models that did not consider correction design variables. This study confirms that balancing class distribution and properly tuning model parameters enhances performance. The proposed approach may provide reliable and automated biomarkers for early cancer diagnosis and personalized treatment planning. Full article
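The class-imbalance correction this study tunes can be seen against its simplest baseline: inverse-frequency class weights. The paper optimizes weights and augmentation ratios with Bayesian optimization; the sketch below shows only the baseline heuristic, with illustrative counts (few L3 slices vs many non-L3 slices).

```python
# Inverse-frequency class weights: weight_c = N / (K * n_c),
# so the rare class contributes more to the loss.

def inverse_frequency_weights(counts):
    total = sum(counts.values())
    n_classes = len(counts)
    return {c: total / (n_classes * n) for c, n in counts.items()}

# Illustrative slice counts, not the study's data:
weights = inverse_frequency_weights({"L3": 150, "non-L3": 14850})
print(weights)  # L3 weighted ~100x more than non-L3
```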
26 pages, 1502 KiB  
Article
A Privacy-Preserving and Attack-Aware AI Approach for High-Risk Healthcare Systems Under the EU AI Act
by Konstantinos Kalodanis, Georgios Feretzakis, Athanasios Anastasiou, Panagiotis Rizomiliotis, Dimosthenis Anagnostopoulos and Yiannis Koumpouros
Electronics 2025, 14(7), 1385; https://doi.org/10.3390/electronics14071385 - 30 Mar 2025
Cited by 1 | Viewed by 1718
Abstract
Artificial intelligence (AI) has driven significant advances in healthcare by enabling sophisticated algorithms to improve diagnostics, patient surveillance, and treatment planning. Nonetheless, dependence on sensitive health data and automated decision-making exposes such systems to escalating risks of privacy breaches and places them under rigorous regulatory oversight. In particular, the EU AI Act classifies AI uses pertaining to healthcare as “high-risk”, thus requiring the application of strict provisions related to transparency, safety, and privacy. This paper presents a comprehensive overview of the diverse privacy attacks that can target machine learning (ML)-based healthcare systems, including data-centric and model-centric attacks. We then propose a novel privacy-preserving architecture that integrates federated learning with secure computation protocols to minimize data exposure while maintaining strong model performance. We outline an ongoing monitoring mechanism compliant with EU AI Act specifications and GDPR standards to further improve trust and compliance. We further elaborate on an independent adaptive algorithm that automatically tunes the level of cryptographic protection based on contextual factors like risk severity, computational capacity, and regulatory environment. This research aims to serve as a blueprint for designing trustworthy, high-risk AI systems in healthcare under emerging regulations by providing an in-depth review of ML-specific privacy threats and proposing a holistic technical solution. Full article
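The abstract does not specify the adaptive algorithm, so the following is only a hypothetical sketch of the idea: map contextual factors (risk severity, compute budget, regulatory regime) to a cryptographic protection tier. The tier names, thresholds, and input scales are all assumptions for illustration.

```python
def select_protection_tier(risk_severity: float,
                           compute_budget: float,
                           strict_regime: bool) -> str:
    """Map contextual factors to a protection tier (illustrative only).

    Inputs are assumed normalized to [0, 1]; `strict_regime` flags a
    high-risk regulatory context such as the EU AI Act's Annex III.

    'high'   -> e.g. homomorphic encryption on every model update
    'medium' -> e.g. secure aggregation plus differential privacy
    'basic'  -> e.g. transport encryption only
    """
    score = min(risk_severity + (0.2 if strict_regime else 0.0), 1.0)
    if score >= 0.8:
        # High risk demands the strongest tier, but fall back to
        # 'medium' when the compute budget cannot sustain it.
        return "high" if compute_budget >= 0.5 else "medium"
    if score >= 0.4:
        return "medium"
    return "basic"
```

A real deployment would also log every downgrade decision for the monitoring mechanism, since the Act's transparency provisions require that such automated trade-offs be auditable.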
(This article belongs to the Section Computer Science & Engineering)
23 pages, 12145 KiB  
Article
A Deep Learning-Based Detection and Segmentation System for Multimodal Ultrasound Images in the Evaluation of Superficial Lymph Node Metastases
by Roxana Rusu-Both, Marius-Cristian Socaci, Adrian-Ionuț Palagos, Corina Buzoianu, Camelia Avram, Honoriu Vălean and Romeo-Ioan Chira
J. Clin. Med. 2025, 14(6), 1828; https://doi.org/10.3390/jcm14061828 - 8 Mar 2025
Cited by 1 | Viewed by 1269
Abstract
Background/Objectives: Even with today’s advancements, cancer still represents a major cause of mortality worldwide. One important aspect of cancer progression that strongly influences diagnosis, prognosis, and treatment plans is accurate lymph node metastasis evaluation. However, regardless of the imaging method used, this process is challenging and time-consuming. This research aimed to develop and validate an automatic detection and segmentation system for superficial lymph node evaluation based on multimodal ultrasound images, such as traditional B-mode, Doppler, and elastography, using deep learning techniques. Methods: The suggested approach incorporated a Mask R-CNN architecture designed specifically for the detection and segmentation of lymph nodes. The pipeline first involved noise reduction preprocessing, after which morphological and textural feature segmentation and analysis were performed. Vascularity and stiffness parameters were further examined in Doppler and elastography images. Metrics, including accuracy, mean average precision (mAP), and Dice coefficient, were used to assess the system’s performance during training and validation on a carefully selected dataset of annotated ultrasound images. Results: During testing, the Mask R-CNN model showed an accuracy of 92.56%, a COCO AP score of 60.7, and a validation score of 64. Further, to improve diagnostic capabilities, Doppler and elastography data were added. This allowed for improved performance across several types of ultrasound images and provided thorough insights into the morphology, vascularity, and stiffness of lymph nodes. Conclusions: This paper offers a novel use of deep learning for automated lymph node assessment in ultrasound imaging. By fusing sophisticated segmentation techniques with multimodal image processing, the system offers doctors a dependable tool for evaluating lymph node metastases efficiently, with the potential to greatly enhance patient outcomes and diagnostic accuracy. Full article
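The Dice coefficient reported above is a standard overlap metric between a predicted and a reference segmentation mask. A minimal numpy sketch (the function name is ours, not the paper's):

```python
import numpy as np

def dice_coefficient(pred_mask, true_mask, eps=1e-7):
    """Dice similarity between two binary masks: 2|A ∩ B| / (|A| + |B|).
    Returns ~1.0 for identical masks, ~0.0 for disjoint ones; `eps`
    avoids division by zero when both masks are empty."""
    pred = np.asarray(pred_mask, dtype=bool)
    true = np.asarray(true_mask, dtype=bool)
    intersection = np.logical_and(pred, true).sum()
    return (2.0 * intersection + eps) / (pred.sum() + true.sum() + eps)
```

Because Dice weights the intersection twice, it is more forgiving of small masks than plain intersection-over-union, which is why it is the usual choice for organ and lymph node segmentation.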
13 pages, 6870 KiB  
Article
Intra-Arterial Super-Selective Delivery of Yttrium-90 for the Treatment of Recurrent Glioblastoma: In Silico Proof of Concept with Feasibility and Safety Analysis
by Giulia Paolani, Silvia Minosse, Silvia Strolin, Miriam Santoro, Noemi Pucci, Francesca Di Giuliano, Francesco Garaci, Letizia Oddo, Yosra Toumia, Eugenia Guida, Francesco Riccitelli, Giulia Perilli, Alessandra Vitaliti, Angelico Bedini, Susanna Dolci, Gaio Paradossi, Fabio Domenici, Valerio Da Ros and Lidia Strigari
Pharmaceutics 2025, 17(3), 345; https://doi.org/10.3390/pharmaceutics17030345 - 7 Mar 2025
Viewed by 828
Abstract
Background: Intra-arterial cerebral infusion (IACI) of radiotherapeutics is a promising treatment for glioblastoma (GBM) recurrence. We investigated the in silico feasibility and safety of Yttrium-90-Poly(vinyl alcohol)-Microbubble (90Y-PVA-MB) IACI in patients with recurrent GBM and compared the results with those of external beam radiation therapy (EBRT). Methods: Contrast-enhanced T1-weighted magnetic resonance imaging (T1W-MRI) was used to delineate the tumor volumes and CT scans were used to automatically segment the organs at risk in nine patients with recurrent GBM. Volumetric Modulated Arc Therapy (VMAT) treatment plans were generated using a clinical treatment planning system. Assuming the relative intensity of each voxel from the T1W-MRI as a valid surrogate for the post-IACI 90Y-PVA-MB distribution, a specific 90Y dose voxel kernel was obtained through Monte Carlo (MC) simulations and convolved with the MRI, resulting in a 90Y-PVA-MB-based dose distribution that was then compared with the VMAT plans. Results: The physical dose distribution obtained from the simulation of 1 GBq of 90Y-PVA-MBs was rescaled to ensure that 95% of the prescribed dose was delivered to 95% or 99% of the target (i.e., A95% and A99%, respectively). The calculated activities were A95% = 269.2 [63.6–2334.1] MBq and A99% = 370.6 [93.8–3315.2] MBq, while the mean doses to the target were 58.2 [58.0–60.0] Gy for VMAT, and 123.1 [106.9–153.9] Gy and 170.1 [145.9–223.8] Gy for A95% and A99%, respectively. Additionally, non-target brain tissue was spared in the 90Y-PVA-MB treatment compared to the VMAT approach, with a median [range] of mean doses of 12.5 [12.0–23.0] Gy for VMAT, and 0.6 [0.2–1.0] Gy and 0.9 [0.3–1.5] Gy for the 90Y treatments assuming A95% and A99%, respectively. Conclusions: 90Y-PVA-MB IACI using T1W-MRI appears to be feasible and safe, as it enables the delivery of higher doses to tumors and lower doses to non-target volumes compared to the VMAT approach. Full article
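The activity-rescaling step described above rests on the fact that absorbed dose scales linearly with administered activity, so once a per-GBq dose map exists (here, the MC kernel convolved with T1W-MRI intensities), the activity for a given coverage criterion follows from a dose-volume percentile. A hedged numpy sketch, with function and argument names of our choosing and the kernel convolution itself omitted:

```python
import numpy as np

def activity_for_coverage(dose_per_gbq, target_mask,
                          prescription_gy=60.0, coverage=0.95,
                          dose_fraction=0.95):
    """Activity (GBq) such that `coverage` of the target volume receives
    at least `dose_fraction` of the prescription.

    Dose is linear in activity, so it suffices to find the per-GBq dose
    exceeded by `coverage` of target voxels (e.g. D95 for coverage=0.95)
    and scale the prescription against it.
    """
    target_doses = dose_per_gbq[np.asarray(target_mask, dtype=bool)]
    d_cov = np.percentile(target_doses, 100.0 * (1.0 - coverage))
    return dose_fraction * prescription_gy / d_cov
```

With coverage set to 0.95 or 0.99 this mirrors the A95%/A99% definitions in the abstract; the 99% criterion always yields a larger activity, since the percentile moves toward the cold end of the target's dose distribution.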
(This article belongs to the Special Issue CNS Drug Delivery: Recent Advances and Challenges)