Search Results (193)

Search Parameters:
Keywords = multi-label diagnosis

24 pages, 1313 KiB  
Review
Data Augmentation and Knowledge Transfer-Based Fault Detection and Diagnosis in Internet of Things-Based Solar Insecticidal Lamps: A Survey
by Zhengjie Wang, Xing Yang, Tongjie Li, Lei Shu, Kailiang Li and Xiaoyuan Jing
Electronics 2025, 14(15), 3113; https://doi.org/10.3390/electronics14153113 - 5 Aug 2025
Abstract
Internet of Things (IoT)-based solar insecticidal lamps (SIL-IoTs) offer an eco-friendly alternative by merging solar energy harvesting with intelligent sensing, advancing sustainable smart agriculture. However, SIL-IoTs encounter practical challenges, e.g., hardware aging, electromagnetic interference, and abnormal data patterns. Therefore, developing an effective fault detection and diagnosis (FDD) system is essential. In this survey, we systematically identify and address the core challenges of implementing FDD of SIL-IoTs. Firstly, the fuzzy boundaries of sample features lead to complex feature interactions that increase the difficulty of accurate FDD. Secondly, the category imbalance in the fault samples limits the generalizability of the FDD models. Thirdly, models trained on single scenarios struggle to adapt to diverse and dynamic field conditions. To overcome these challenges, we propose a multi-level solution by discussing and merging existing FDD methods: (1) a data augmentation strategy can be adopted to improve model performance on small-sample datasets; (2) federated learning (FL) can be employed to enhance adaptability to heterogeneous environments, while transfer learning (TL) addresses data scarcity; and (3) deep learning techniques can be used to reduce dependence on labeled data; these methods provide a robust framework for intelligent and adaptive FDD of SIL-IoTs, supporting long-term reliability of IoT devices in smart agriculture. Full article
(This article belongs to the Collection Electronics for Agriculture)

18 pages, 9470 KiB  
Article
DCS-ST for Classification of Breast Cancer Histopathology Images with Limited Annotations
by Suxing Liu and Byungwon Min
Appl. Sci. 2025, 15(15), 8457; https://doi.org/10.3390/app15158457 - 30 Jul 2025
Abstract
Accurate classification of breast cancer histopathology images is critical for early diagnosis and treatment planning. Yet, conventional deep learning models face significant challenges under limited annotation scenarios due to their reliance on large-scale labeled datasets. To address this, we propose Dynamic Cross-Scale Swin Transformer (DCS-ST), a robust and efficient framework tailored for histopathology image classification with scarce annotations. Specifically, DCS-ST integrates a dynamic window predictor and a cross-scale attention module to enhance multi-scale feature representation and interaction while employing a semi-supervised learning strategy based on pseudo-labeling and denoising to exploit unlabeled data effectively. This design enables the model to adaptively attend to diverse tissue structures and pathological patterns while maintaining classification stability. Extensive experiments on three public datasets—BreakHis, Mini-DDSM, and ICIAR2018—demonstrate that DCS-ST consistently outperforms existing state-of-the-art methods across various magnifications and classification tasks, achieving superior quantitative results and reliable visual classification. Furthermore, empirical evaluations validate its strong generalization capability and practical potential for real-world weakly-supervised medical image analysis. Full article

14 pages, 1617 KiB  
Article
Multi-Label Conditioned Diffusion for Cardiac MR Image Augmentation and Segmentation
by Jianyang Li, Xin Ma and Yonghong Shi
Bioengineering 2025, 12(8), 812; https://doi.org/10.3390/bioengineering12080812 - 28 Jul 2025
Abstract
Accurate segmentation of cardiac MR images using deep neural networks is crucial for cardiac disease diagnosis and treatment planning, as it provides quantitative insights into heart anatomy and function. However, achieving high segmentation accuracy relies heavily on extensive, precisely annotated datasets, which are costly and time-consuming to obtain. This study addresses this challenge by proposing a novel data augmentation framework based on a condition-guided diffusion generative model, controlled by multiple cardiac labels. The framework aims to expand annotated cardiac MR datasets and significantly improve the performance of downstream cardiac segmentation tasks. The proposed generative data augmentation framework operates in two stages. First, a Label Diffusion Module is trained to unconditionally generate realistic multi-category spatial masks (encompassing regions such as the left ventricle, interventricular septum, and right ventricle) conforming to anatomical prior probabilities derived from noise. Second, cardiac MR images are generated conditioned on these semantic masks, ensuring a precise one-to-one mapping between synthetic labels and images through the integration of a spatially-adaptive normalization (SPADE) module for structural constraint during conditional model training. The effectiveness of this augmentation strategy is demonstrated using the U-Net model for segmentation on the enhanced 2D cardiac image dataset derived from the M&M Challenge. Results indicate that the proposed method effectively increases dataset sample numbers and significantly improves cardiac segmentation accuracy, achieving a 5% to 10% higher Dice Similarity Coefficient (DSC) compared to traditional data augmentation methods. Experiments further reveal a strong correlation between image generation quality and augmentation effectiveness. This framework offers a robust solution for data scarcity in cardiac image analysis, directly benefiting clinical applications. 
Full article
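The Dice Similarity Coefficient (DSC) in which the 5% to 10% improvement above is reported is a standard overlap metric for segmentation masks. A minimal NumPy sketch of its computation on binary masks (illustrative only, not the authors' evaluation code):

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice Similarity Coefficient between two binary masks:
    DSC = 2|A ∩ B| / (|A| + |B|). eps guards against empty masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Toy 2x3 masks: 2 overlapping pixels, 3 positives in each mask
a = np.array([[1, 1, 0], [0, 1, 0]])
b = np.array([[1, 0, 0], [0, 1, 1]])
print(round(dice_coefficient(a, b), 3))  # 2*2/(3+3) -> 0.667
```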

17 pages, 1738 KiB  
Article
Multimodal Fusion Multi-Task Learning Network Based on Federated Averaging for SDB Severity Diagnosis
by Songlu Lin, Renzheng Tang, Yuzhe Wang and Zhihong Wang
Appl. Sci. 2025, 15(14), 8077; https://doi.org/10.3390/app15148077 - 20 Jul 2025
Abstract
Accurate sleep staging and sleep-disordered breathing (SDB) severity prediction are critical for the early diagnosis and management of sleep disorders. However, real-world polysomnography (PSG) data often suffer from modality heterogeneity, label scarcity, and non-independent and identically distributed (non-IID) characteristics across institutions, posing significant challenges for model generalization and clinical deployment. To address these issues, we propose a federated multi-task learning (FMTL) framework that simultaneously performs sleep staging and SDB severity classification from seven multimodal physiological signals, including EEG, ECG, respiration, etc. The proposed framework is built upon a hybrid deep neural architecture that integrates convolutional layers (CNN) for spatial representation, bidirectional GRUs for temporal modeling, and multi-head self-attention for long-range dependency learning. A shared feature extractor is combined with task-specific heads to enable joint diagnosis, while the FedAvg algorithm is employed to facilitate decentralized training across multiple institutions without sharing raw data, thereby preserving privacy and addressing non-IID challenges. We evaluate the proposed method across three public datasets (APPLES, SHHS, and HMC) treated as independent clients. For sleep staging, the model achieves accuracies of 85.3% (APPLES), 87.1% (SHHS_rest), and 79.3% (HMC), with Cohen’s Kappa scores exceeding 0.71. For SDB severity classification, it obtains macro-F1 scores of 77.6%, 76.4%, and 79.1% on APPLES, SHHS_rest, and HMC, respectively. These results demonstrate that our unified FMTL framework effectively leverages multimodal PSG signals and federated training to deliver accurate and scalable sleep disorder assessment, paving the way for the development of a privacy-preserving, generalizable, and clinically applicable digital sleep monitoring system. Full article
(This article belongs to the Special Issue Machine Learning in Biomedical Applications)
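The FedAvg aggregation the abstract relies on is conceptually simple: each client trains locally, and the server averages parameters weighted by client sample counts, so no raw data leaves an institution. A minimal NumPy sketch of the server-side step (illustrative only; not the authors' code):

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """FedAvg aggregation: average each layer's parameters across clients,
    weighted by the number of training samples each client holds.
    client_weights: list (per client) of lists of numpy arrays (per layer)."""
    total = sum(client_sizes)
    n_layers = len(client_weights[0])
    global_weights = []
    for layer in range(n_layers):
        # Weighted sum of this layer's parameters over all clients
        agg = sum(w[layer] * (n / total)
                  for w, n in zip(client_weights, client_sizes))
        global_weights.append(agg)
    return global_weights

# Two toy clients with a single one-layer "model" each; client 2 has 3x the data
clients = [[np.array([1.0, 1.0])], [np.array([3.0, 3.0])]]
print(fedavg(clients, [1, 3])[0])  # -> [2.5 2.5]
```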

20 pages, 688 KiB  
Article
Multi-Modal AI for Multi-Label Retinal Disease Prediction Using OCT and Fundus Images: A Hybrid Approach
by Amina Zedadra, Mahmoud Yassine Salah-Salah, Ouarda Zedadra and Antonio Guerrieri
Sensors 2025, 25(14), 4492; https://doi.org/10.3390/s25144492 - 19 Jul 2025
Abstract
Ocular diseases can significantly affect vision and overall quality of life, with diagnosis often being time-consuming and dependent on expert interpretation. While previous computer-aided diagnostic systems have focused primarily on medical imaging, this paper proposes VisionTrack, a multi-modal AI system for predicting multiple retinal diseases, including Diabetic Retinopathy (DR), Age-related Macular Degeneration (AMD), Diabetic Macular Edema (DME), drusen, Central Serous Retinopathy (CSR), and Macular Hole (MH), as well as normal cases. The proposed framework integrates a Convolutional Neural Network (CNN) for image-based feature extraction, a Graph Neural Network (GNN) to model complex relationships among clinical risk factors, and a Large Language Model (LLM) to process patient medical reports. By leveraging diverse data sources, VisionTrack improves prediction accuracy and offers a more comprehensive assessment of retinal health. Experimental results demonstrate the effectiveness of this hybrid system, highlighting its potential for early detection, risk assessment, and personalized ophthalmic care. Experiments were conducted using two publicly available datasets, RetinalOCT and RFMID, which provide diverse retinal imaging modalities: OCT images and fundus images, respectively. The proposed multi-modal AI system demonstrated strong performance in multi-label disease prediction. On the RetinalOCT dataset, the model achieved an accuracy of 0.980, F1-score of 0.979, recall of 0.978, and precision of 0.979. Similarly, on the RFMID dataset, it reached an accuracy of 0.989, F1-score of 0.881, recall of 0.866, and precision of 0.897. These results confirm the robustness, reliability, and generalization capability of the proposed approach across different imaging modalities. Full article
(This article belongs to the Section Sensing and Imaging)
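The accuracy, F1, recall, and precision figures reported above are standard multi-label metrics; note that in the multi-label setting "accuracy" is commonly the strict subset accuracy (a sample counts only if every label matches). A small scikit-learn illustration on toy data (not the paper's evaluation pipeline):

```python
import numpy as np
from sklearn.metrics import (accuracy_score, f1_score,
                             precision_score, recall_score)

# Toy multi-label ground truth and predictions: 4 samples, 3 disease labels
y_true = np.array([[1, 0, 1], [0, 1, 0], [1, 1, 0], [0, 0, 1]])
y_pred = np.array([[1, 0, 1], [0, 1, 0], [1, 0, 0], [0, 0, 1]])

# Subset accuracy: sample 3 misses one label, so 3/4 samples match exactly
acc = accuracy_score(y_true, y_pred)
# Macro averaging treats each label equally -- common with imbalanced labels
f1 = f1_score(y_true, y_pred, average="macro")
rec = recall_score(y_true, y_pred, average="macro")
prec = precision_score(y_true, y_pred, average="macro")
print(acc)  # -> 0.75
```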

24 pages, 3474 KiB  
Article
Research on Unsupervised Domain Adaptive Bearing Fault Diagnosis Method Based on Migration Learning Using MSACNN-IJMMD-DANN
by Xiaoxu Li, Jiahao Wang, Jianqiang Wang, Jixuan Wang, Qinghua Li, Xuelian Yu and Jiaming Chen
Machines 2025, 13(7), 618; https://doi.org/10.3390/machines13070618 - 17 Jul 2025
Abstract
To address the problems of feature extraction, cost of obtaining labeled samples, and large differences in domain distribution in bearing fault diagnosis on variable operating conditions, an unsupervised domain-adaptive bearing fault diagnosis method based on migration learning using MSACNN-IJMMD-DANN (multi-scale and attention-based convolutional neural network, MSACNN, improved joint maximum mean discrepancy, IJMMD, domain adversarial neural network, DANN) is proposed. Firstly, in order to extract fault-type features from the source domain and target domain, this paper establishes a MSACNN based on multi-scale and attention mechanisms. Secondly, to reduce the feature distribution difference between the source and target domains and address the issue of domain distribution differences, the joint maximum mean discrepancy and correlation alignment approaches are used to create the metric criterion. Then, the adversarial loss mechanism in DANN is introduced to reduce the interference of weakly correlated domain features for better fault diagnosis and identification. Finally, the method is validated using bearing datasets from Case Western Reserve University, Jiangnan University, and our laboratory. The experimental results demonstrated that the method achieved higher accuracy across different migration tasks, providing an effective solution for bearing fault diagnosis in industrial environments with varying operating conditions. Full article
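One of the metric criteria combined above, correlation alignment (CORAL), penalizes the distance between the second-order statistics (feature covariances) of the source and target domains. A minimal NumPy sketch of the CORAL loss (illustrative only; not the authors' implementation):

```python
import numpy as np

def coral(source, target):
    """CORAL loss: squared Frobenius distance between the source and target
    feature covariance matrices, scaled by 1/(4*d^2) as in Sun & Saenko."""
    d = source.shape[1]
    cs = np.cov(source, rowvar=False)
    ct = np.cov(target, rowvar=False)
    return np.linalg.norm(cs - ct, "fro") ** 2 / (4 * d * d)

rng = np.random.default_rng(1)
src = rng.normal(0, 1, (100, 4))
tgt_near = rng.normal(0, 1, (100, 4))  # same distribution as source
tgt_far = rng.normal(0, 3, (100, 4))   # much larger variance
print(coral(src, tgt_near) < coral(src, tgt_far))  # distribution shift -> larger loss
```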

22 pages, 2320 KiB  
Review
Use of Radiomics in Characterizing Tumor Hypoxia
by Mohan Huang, Helen K. W. Law and Shing Yau Tam
Int. J. Mol. Sci. 2025, 26(14), 6679; https://doi.org/10.3390/ijms26146679 - 11 Jul 2025
Abstract
Tumor hypoxia involves limited oxygen supply within the tumor microenvironment and is closely associated with aggressiveness, metastasis, and resistance to common cancer treatment modalities such as chemotherapy and radiotherapy. Traditional methodologies for hypoxia assessment, such as the use of invasive probes and clinical biomarkers, are generally not very suitable for routine clinical applications. Radiomics provides a non-invasive approach to hypoxia assessment by extracting quantitative features from medical images. Thus, radiomics is important in diagnosis and the formulation of a treatment strategy for tumor hypoxia. This article discusses the various imaging techniques used for the assessment of tumor hypoxia including magnetic resonance imaging (MRI), positron emission tomography (PET), and computed tomography (CT). It introduces the use of radiomics with machine learning and deep learning for extracting quantitative features, along with its possible clinical use in hypoxic tumors. This article further summarizes the key challenges hindering the clinical translation of radiomics, including the lack of imaging standardization and the limited availability of hypoxia-labeled datasets. It also highlights the potential of integrating radiomics with multi-omics to enhance hypoxia visualization and guide personalized cancer treatment. Full article

18 pages, 1667 KiB  
Article
Multi-Task Deep Learning for Simultaneous Classification and Segmentation of Cancer Pathologies in Diverse Medical Imaging Modalities
by Maryem Rhanoui, Khaoula Alaoui Belghiti and Mounia Mikram
Onco 2025, 5(3), 34; https://doi.org/10.3390/onco5030034 - 11 Jul 2025
Abstract
Background: Clinical imaging is an important part of health care, providing physicians with great assistance in patient treatment. In fact, segmentation and grading of tumors can help doctors assess the severity of a cancer at an early stage and increase the chances of cure. Although deep learning for cancer diagnosis has achieved clinically acceptable accuracy, challenging tasks remain, especially in the context of insufficient labeled data and the subsequent need for expensive computational resources. Objective: This paper presents a lightweight classification and segmentation deep learning model to assist in the identification of cancerous tumors with high accuracy despite the scarcity of medical data. Methods: We propose a multi-task architecture for classification and segmentation of cancerous tumors in the brain, skin, prostate, and lungs. The model is based on the U-Net architecture with different pre-trained deep learning models (VGG16 and MobileNetV2) as a backbone. The multi-task model is validated on relatively small datasets (slightly exceeding 1200 images) that are diverse in terms of modalities (MRI, X-ray, dermoscopic, and digital histopathology), number of classes, and the shapes and sizes of cancer pathologies, using accuracy and the Dice coefficient as statistical metrics. Results: Experiments show that the multi-task approach improves learning efficiency and prediction accuracy for the segmentation and classification tasks, compared to training the individual models separately. The multi-task architecture reached classification accuracies of 86%, 90%, 88%, and 87% for skin lesion, brain tumor, prostate cancer, and pneumothorax, respectively. For the segmentation tasks, we achieved high precisions of 95% and 98% for skin lesion and brain tumor segmentation, respectively, and 99% for both prostate cancer and pneumothorax, showing that the multi-task solution is more efficient than single-task networks.
Full article

27 pages, 10447 KiB  
Article
Supervised Learning-Based Fault Classification in Industrial Rotating Equipment Using Multi-Sensor Data
by Aziz Kubilay Ovacıklı, Mert Yagcioglu, Sevgi Demircioglu, Tugberk Kocatekin and Sibel Birtane
Appl. Sci. 2025, 15(13), 7580; https://doi.org/10.3390/app15137580 - 6 Jul 2025
Abstract
The reliable operation of rotating machinery is critical in industrial production, necessitating advanced fault diagnosis and maintenance strategies to ensure operational availability. This study employs supervised machine learning algorithms to apply multi-label classification for fault detection in rotating machinery, utilizing a real dataset from multi-sensor systems installed on a suction fan in a typical manufacturing industry. The presented system focuses on multi-modal data analysis, such as vibration analysis, temperature monitoring, and ultrasound, for more effective fault diagnosis. The performance of general machine learning algorithms such as kNN, SVM, RF, and some boosting techniques was evaluated, and it was shown that the Random Forest achieved the best classification accuracy. Feature importance analysis has revealed how specific domain characteristics, such as vibration velocity and ultrasound levels, contribute significantly to performance and enabled the detection of multiple faults simultaneously. The results demonstrate the machine learning model’s ability to retrieve valuable information from multi-sensor data integration, improving predictive maintenance strategies. The presented study contributes a practical framework in intelligent fault diagnosis as it presents an example of a real-world implementation while enabling future improvements in industrial condition-based maintenance systems. Full article
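As context for the approach above: scikit-learn's RandomForestClassifier accepts a 2-D label matrix directly, fitting one output per label column, which is one straightforward way to set up multi-label fault classification from multi-sensor features. A sketch on synthetic data (feature names and thresholds are illustrative; this is not the authors' pipeline):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Synthetic multi-sensor features: [vibration_velocity, temperature, ultrasound]
rng = np.random.default_rng(42)
X = rng.normal(size=(200, 3))
# Two simultaneous fault labels per sample (multi-label target matrix):
# column 0 ~ imbalance fault, column 1 ~ bearing fault (toy rules)
y = np.column_stack([(X[:, 0] > 0.5).astype(int),
                     (X[:, 2] > 0.5).astype(int)])

# A 2-D y makes the forest a multi-label classifier: one prediction per column
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
pred = clf.predict(X[:5])
print(pred.shape)  # -> (5, 2)
```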

17 pages, 23962 KiB  
Article
AI-Powered Mobile App for Nuclear Cataract Detection
by Alicja Anna Ignatowicz, Tomasz Marciniak and Elżbieta Marciniak
Sensors 2025, 25(13), 3954; https://doi.org/10.3390/s25133954 - 25 Jun 2025
Abstract
Cataract remains the leading cause of blindness worldwide, and the number of individuals affected by this condition is expected to rise significantly due to global population ageing. Early diagnosis is crucial, as delayed treatment may result in irreversible vision loss. This study presents a mobile application for Android devices designed for the detection of cataracts using deep learning models. The proposed solution utilizes a multi-stage classification approach to analyze ocular images acquired with a slit lamp, sourced from the Nuclear Cataract Database for Biomedical and Machine Learning Applications. The process involves identifying pathological features and assessing the severity of the detected condition, enabling comprehensive characterization of nuclear cataract (NC) progression based on the LOCS III classification scale. The evaluation included a range of convolutional neural network architectures, from larger models like VGG16 and ResNet50 to lighter alternatives such as VGG11, ResNet18, MobileNetV2, and EfficientNet-B0. All models demonstrated comparable performance, with classification accuracies ranging from 91% to 94.5%. The trained models were optimized for mobile deployment, enabling real-time analysis of eye images captured with the device camera or selected from local storage. The presented mobile application, trained and validated on authentic clinician-labeled images, represents a significant advancement over existing mobile tools. Preliminary evaluations demonstrated high accuracy in cataract detection and severity grading. These results confirm that the approach is feasible and will serve as a foundation for ongoing development and extension. Full article
(This article belongs to the Special Issue Recent Trends and Advances in Biomedical Optics and Imaging)

28 pages, 4916 KiB  
Article
Research on Bearing Fault Diagnosis Method for Varying Operating Conditions Based on Spatiotemporal Feature Fusion
by Jin Wang, Yan Wang, Junhui Yu, Qingping Li, Hailin Wang and Xinzhi Zhou
Sensors 2025, 25(12), 3789; https://doi.org/10.3390/s25123789 - 17 Jun 2025
Abstract
In real-world scenarios, the rotational speed of bearings is variable. Due to changes in operating conditions, the feature distribution of bearing vibration data becomes inconsistent, which leads to the inability to directly apply the training model built under one operating condition (source domain) to another condition (target domain). Furthermore, the lack of sufficient labeled data in the target domain further complicates fault diagnosis under varying operating conditions. To address this issue, this paper proposes a spatiotemporal feature fusion domain-adaptive network (STFDAN) framework for bearing fault diagnosis under varying operating conditions. The framework constructs a feature extraction and domain adaptation network based on a parallel architecture, designed to capture the complex dynamic characteristics of vibration signals. First, the Fast Fourier Transform (FFT) and Variational Mode Decomposition (VMD) are used to extract the spectral and modal features of the signals, generating a joint representation with multi-level information. Then, a parallel processing mechanism of the Convolutional Neural Network (SECNN) based on the Squeeze-and-Excitation module and the Bidirectional Long Short-Term Memory network (BiLSTM) is employed to dynamically adjust weights, capturing high-dimensional spatiotemporal features. The cross-attention mechanism enables the interaction and fusion of spatial and temporal features, significantly enhancing the complementarity and coupling of the feature representations. Finally, a Multi-Kernel Maximum Mean Discrepancy (MKMMD) is introduced to align the feature distributions between the source and target domains, enabling efficient fault diagnosis under varying bearing conditions. The proposed STFDAN framework is evaluated using bearing datasets from Case Western Reserve University (CWRU), Jiangnan University (JNU), and Southeast University (SEU). 
Experimental results demonstrate that STFDAN achieves high diagnostic accuracy across different load conditions and effectively solves the bearing fault diagnosis problem under varying operating conditions. Full article
(This article belongs to the Section Fault Diagnosis & Sensors)
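The Multi-Kernel Maximum Mean Discrepancy (MKMMD) introduced above measures the distance between the source- and target-domain feature distributions using a mixture of RBF kernels, so no single bandwidth has to be tuned by hand. A simplified biased-estimate sketch in NumPy (bandwidths and data are illustrative; not the STFDAN implementation):

```python
import numpy as np

def mk_mmd(X, Y, gammas=(0.5, 1.0, 2.0)):
    """Biased multi-kernel MMD^2 estimate between sample sets X and Y,
    averaging RBF kernels k(a, b) = exp(-gamma * ||a - b||^2) over several
    bandwidths (gammas)."""
    def rbf(A, B, gamma):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)
    mmd2 = 0.0
    for g in gammas:
        mmd2 += (rbf(X, X, g).mean() + rbf(Y, Y, g).mean()
                 - 2.0 * rbf(X, Y, g).mean())
    return mmd2 / len(gammas)

rng = np.random.default_rng(0)
same = mk_mmd(rng.normal(0, 1, (50, 2)), rng.normal(0, 1, (50, 2)))
shifted = mk_mmd(rng.normal(0, 1, (50, 2)), rng.normal(3, 1, (50, 2)))
print(same < shifted)  # a domain shift produces a larger discrepancy
```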

15 pages, 7136 KiB  
Article
Source-Free Domain Adaptation for Cross-Modality Abdominal Multi-Organ Segmentation Challenges
by Xiyu Zhang, Xu Chen, Yang Wang, Dongliang Liu and Yifeng Hong
Information 2025, 16(6), 460; https://doi.org/10.3390/info16060460 - 29 May 2025
Abstract
Abdominal organ segmentation in CT images is crucial for accurate diagnosis, treatment planning, and condition monitoring. However, the annotation process is often hindered by challenges such as low contrast, artifacts, and complex organ structures. While unsupervised domain adaptation (UDA) has shown promise in addressing these issues by transferring knowledge from a different modality (source domain), its reliance on both source and target data during training presents a practical challenge in many clinical settings due to data privacy concerns. This study aims to develop a cross-modality abdominal multi-organ segmentation model for label-free CT (target domain) data, leveraging knowledge solely from a pre-trained source domain (MRI) model without accessing the source data. To achieve this, we generate source-like images from target-domain images using a one-way image translation approach with the pre-trained model. These synthesized images preserve the anatomical structure of the target, enabling segmentation predictions from the pre-trained model. To further enhance segmentation accuracy, particularly for organ boundaries and small contours, we introduce an auxiliary translation module with an image decoder and multi-level discriminator. The results demonstrate significant improvements across several performance metrics, including the Dice similarity coefficient (DSC) and average symmetric surface distance (ASSD), highlighting the effectiveness of the proposed method. Full article

15 pages, 3148 KiB  
Article
Comparison of mpMRI and 68Ga-PSMA-PET/CT in the Assessment of the Primary Tumors in Predominant Low-/Intermediate-Risk Prostate Cancer
by Moritz J. Argow, Sebastian Hupfeld, Simone A. Schenke, Sophie Neumann, Romy Damm, Johanna Vogt, Melis Guer, Jan Wuestemann, Martin Schostak, Frank Fischbach and Michael C. Kreissl
Diagnostics 2025, 15(11), 1358; https://doi.org/10.3390/diagnostics15111358 - 28 May 2025
Abstract
While multi-parametric magnetic resonance imaging (mpMRI) is known to be a specific and reliable modality for the diagnosis of non-metastatic prostate cancer (PC), positron emission tomography (PET) using 68Ga-labeled ligands targeting the prostate-specific membrane antigen (PSMA) is known for its reliable detection of prostate cancer, being the most sensitive modality for the assessment of the extra-prostatic extension of the disease and the establishment of a diagnosis, even before biopsy. Background/Objectives: Here, we compared these modalities with regard to the localization of intraprostatic cancer lesions prior to local HDR brachytherapy. Methods: A cohort of 27 patients received both mpMRI and PSMA-PET/CT. Based on 24 intraprostatic segments, two readers each scored the risk of tumor-like alteration in each imaging modality. The detectability was evaluated using receiver operating characteristic (ROC) analysis. The histopathological findings from biopsy were used as the gold standard in each segment. In addition, we applied a patient-based “congruence” concept to quantify the interobserver and intermodality agreement. Results: For the ROC analysis, we included 447 segments (19 patients), with their respective histological references. The two readers of the MRI reached an AUC of 0.770 and 0.781, respectively, with no significant difference (p = 0.75). The PET/CT readers reached an AUC of 0.684 and 0.608, respectively, with a significant difference (p < 0.001). The segment-wise intermodality comparison showed a significant superiority of MRI (AUC = 0.815) compared to PET/CT (AUC = 0.690) (p = 0.006). Via a patient-based analysis, a superiority of MRI in terms of relative agreement with the biopsy result was observed (n = 19 patients). We found congruence scores of 83% (MRI) and 76% (PET/CT, p = 0.034), respectively. 
Using an adjusted “near total agreement” score (adjacent segments with positive scores of 4 or 5 counted as congruent), we found an increase in the agreement, with a score of 96.5% for MRI and 92.7% for PET/CT, with significant difference (p = 0.024). Conclusions: This study suggests that in a small collective of low-/intermediate risk prostate cancer, mpMRI is superior for the detection of intraprostatic lesions as compared to PSMA-PET/CT. We also found a higher relative agreement between MRI and biopsy as compared to that for PET/CT. However, further studies including a larger number of patients and readers are necessary to draw solid conclusions. Full article
(This article belongs to the Section Medical Imaging and Theranostics)
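The segment-wise ROC analysis in the abstract above can be sketched via the Mann–Whitney formulation of the AUC (the probability that a randomly chosen tumor-positive segment receives a higher reader score than a negative one). The biopsy labels and 1–5 reader scores below are invented toy data, not the study's:

```python
def auc_from_scores(truth, scores):
    # Mann-Whitney formulation of the ROC AUC: fraction of
    # (positive, negative) pairs where the positive scores higher,
    # counting ties as half a win.
    pos = [s for t, s in zip(truth, scores) if t == 1]
    neg = [s for t, s in zip(truth, scores) if t == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

biopsy    = [1, 0, 0, 1, 1, 0, 0, 1, 0, 1]  # histological reference per segment
mri_score = [5, 2, 3, 4, 5, 4, 1, 3, 2, 5]  # one reader's 1-5 suspicion score
print(auc_from_scores(biopsy, mri_score))   # 0.92 on this toy data
```

Computing this per reader and per modality, and then comparing the resulting AUCs, mirrors the intermodality comparison reported above.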

22 pages, 1882 KiB  
Article
Optimizing CNN-Based Diagnosis of Knee Osteoarthritis: Enhancing Model Accuracy with CleanLab Relabeling
by Thomures Momenpour and Arafat Abu Mallouh
Diagnostics 2025, 15(11), 1332; https://doi.org/10.3390/diagnostics15111332 - 26 May 2025
Abstract
Background: Knee Osteoarthritis (KOA) is a prevalent and debilitating joint disorder that significantly impacts quality of life, particularly in aging populations. Accurate and consistent classification of KOA severity, typically using the Kellgren-Lawrence (KL) grading system, is crucial for effective diagnosis, treatment planning, and monitoring disease progression. However, traditional KL grading is known for its inherent subjectivity and inter-rater variability, which underscores the pressing need for objective, automated, and reliable classification methods. Methods: This study investigates the performance of an EfficientNetB5 deep learning model, enhanced with transfer learning from the ImageNet dataset, for the task of classifying KOA severity into five distinct KL grades (0–4). We utilized a publicly available Kaggle dataset comprising 9786 knee X-ray images. A key aspect of our methodology was a comprehensive data-centric preprocessing pipeline, which involved an initial phase of outlier removal to reduce noise, followed by systematic label correction using the Cleanlab framework to identify and rectify potential inconsistencies within the original dataset labels. Results: The final EfficientNetB5 model, trained on the preprocessed and Cleanlab-remediated data, achieved an overall accuracy of 82.07% on the test set. This performance represents a significant improvement over previously reported benchmarks for five-class KOA classification on this dataset, such as ResNet-101, which achieved 69% accuracy. The substantial enhancement in model performance is primarily attributed to Cleanlab’s robust ability to detect and correct mislabeled instances, thereby improving the overall quality and reliability of the training data and enabling the model to better learn and capture complex radiographic patterns associated with KOA. Class-wise performance analysis indicated strong differentiation between healthy (KL Grade 0) and severe (KL Grade 4) cases.
However, the “Doubtful” (KL Grade 1) class presented ongoing challenges, exhibiting lower recall and precision compared to other grades. When evaluated against other architectures like MobileNetV3 and Xception for multi-class tasks, our EfficientNetB5 demonstrated highly competitive results. Conclusions: The integration of an EfficientNetB5 model with a rigorous data-centric preprocessing approach, particularly Cleanlab-based label correction and outlier removal, provides a robust and significantly more accurate method for five-class KOA severity classification. While limitations in handling inherently ambiguous cases (such as KL Grade 1) and the small sample size for severe KOA warrant further investigation, this study demonstrates a promising pathway to enhance diagnostic precision. The developed pipeline shows considerable potential for future clinical applications, aiding in more objective and reliable KOA assessment. Full article
(This article belongs to the Special Issue 3rd Edition: AI/ML-Based Medical Image Processing and Analysis)
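The Cleanlab-based label correction described above can be illustrated with a minimal "self-confidence" filter: flag images whose assigned KL grade receives low out-of-sample predicted probability from the model. Cleanlab's actual confident-learning procedure (e.g., `cleanlab.filter.find_label_issues`) is more principled; the grades and probabilities below are invented for illustration:

```python
import numpy as np

given_labels = np.array([0, 1, 4, 2, 3])   # assigned KL grades (0-4)
pred_probs = np.array([                    # out-of-sample softmax per image
    [0.90, 0.05, 0.03, 0.01, 0.01],
    [0.10, 0.70, 0.15, 0.03, 0.02],
    [0.02, 0.03, 0.05, 0.60, 0.30],        # model doubts the grade-4 label
    [0.05, 0.15, 0.65, 0.10, 0.05],
    [0.01, 0.04, 0.10, 0.80, 0.05],
])
# Probability the model assigns to each image's *given* label.
self_conf = pred_probs[np.arange(len(given_labels)), given_labels]
suspect = np.where(self_conf < 0.5)[0]     # candidate mislabels to review
print(suspect)                             # -> [2]
```

Relabeling or removing such flagged images before retraining is the data-centric step the study credits for its accuracy gains.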

18 pages, 11369 KiB  
Article
Multi-Metric Fusion Hypergraph Neural Network for Rotating Machinery Fault Diagnosis
by Jiaxing Zhu, Junlan Hu and Buyun Sheng
Actuators 2025, 14(5), 242; https://doi.org/10.3390/act14050242 - 13 May 2025
Abstract
Effective fault diagnosis in rotating machinery depends on extracting fault features from complex samples. However, traditional data-driven methods often rely heavily on labeled samples and struggle to extract high-order complex features. To address these issues, a novel Multi-Metric Fusion Hypergraph Neural Network (MMF-HGNN) is proposed for fault diagnosis in rotating machinery. The approach constructs hypergraphs over sample vertices using three metrics: instance distance, distribution distance, and spatiotemporal distance. An innovative hypergraph fusion strategy then integrates these normalized hypergraphs, and a dual-layer hypergraph neural network performs the fault diagnosis. Experimental results on three different fault datasets demonstrate that MMF-HGNN excels in feature extraction and reduces reliance on labeled samples, achieving a classification accuracy of 0.9965 ± 0.0025 even with only 5% of labeled samples, and shows strong robustness to noise across varying signal-to-noise ratios. Full article
(This article belongs to the Section Actuators for Manufacturing Systems)
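The multi-metric hypergraph construction can be sketched as follows: under each metric, every sample spawns a hyperedge containing itself and its k nearest neighbors, and the per-metric incidence matrices are fused by averaging. The two metrics here (Euclidean and cosine, standing in for the paper's instance, distribution, and spatiotemporal distances) and the uniform fusion weights are assumptions for illustration, not the paper's exact scheme:

```python
import numpy as np

def pairwise(X, metric):
    # Pairwise distance matrix under the chosen metric.
    if metric == "euclidean":
        return np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    Xn = X / np.linalg.norm(X, axis=1, keepdims=True)  # cosine distance
    return 1.0 - Xn @ Xn.T

def knn_hypergraph(D, k=2):
    # Incidence matrix H: rows index vertices, columns index hyperedges;
    # hyperedge e contains vertex e and its k nearest neighbors under D.
    n = len(D)
    H = np.zeros((n, n))
    for e in range(n):
        H[np.argsort(D[e])[:k + 1], e] = 1.0
    return H

X = np.random.default_rng(0).normal(size=(6, 4))   # 6 samples, 4 features
H1 = knn_hypergraph(pairwise(X, "euclidean"))
H2 = knn_hypergraph(pairwise(X, "cosine"))
H_fused = (H1 + H2) / 2.0   # simple average fusion of the two hypergraphs
```

The fused incidence matrix would then feed the two-layer hypergraph neural network for classification.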
