Search Results (169)

Search Parameters:
Keywords = chest X-ray (CXR) image

18 pages, 3368 KiB  
Article
Segmentation-Assisted Fusion-Based Classification for Automated CXR Image Analysis
by Shilu Kang, Dongfang Li, Jiaxin Xu, Aokun Mei and Hua Huo
Sensors 2025, 25(15), 4580; https://doi.org/10.3390/s25154580 - 24 Jul 2025
Viewed by 309
Abstract
Accurate classification of chest X-ray (CXR) images is crucial for diagnosing lung diseases in medical imaging. Existing deep learning models for CXR image classification face challenges in distinguishing non-lung features. In this work, we propose a new segmentation-assisted fusion-based classification method. The method involves two stages: first, we use a lightweight segmentation model, the Partial Convolutional Segmentation Network (PCSNet), built on an encoder–decoder architecture, to accurately obtain lung masks from CXR images. Then, a fusion of the masked CXR image with the original image enables classification using the improved lightweight ShuffleNetV2 model. The proposed method is trained and evaluated on segmentation datasets including the Montgomery County Dataset (MC) and Shenzhen Hospital Dataset (SH), and classification datasets such as Chest X-Ray Images for Pneumonia (CXIP) and COVIDx. Compared with seven segmentation models (U-Net, Attention-Net, SegNet, FPNNet, DANet, DMNet, and SETR), five classification models (ResNet34, ResNet50, DenseNet121, Swin-Transformer, and ShuffleNetV2), and state-of-the-art methods, our PCSNet model achieved high segmentation performance on CXR images. Compared to the state-of-the-art Attention-Net model, the accuracy of PCSNet increased by 0.19% (98.94% vs. 98.75%) and the boundary accuracy improved by 0.3% (97.86% vs. 97.56%), while requiring 62% fewer parameters. For pneumonia classification using the CXIP dataset, the proposed strategy outperforms the current best model by 0.14% in accuracy (98.55% vs. 98.41%). For COVID-19 classification with the COVIDx dataset, the model reached an accuracy of 97.50%, an absolute improvement of 0.1% over CovXNet, and clinical metrics showed more substantial gains: specificity increased from 94.7% to 99.5%. These results highlight the model’s effectiveness in medical image analysis, demonstrating clinically meaningful improvements over state-of-the-art approaches.
(This article belongs to the Special Issue Vision- and Image-Based Biomedical Diagnostics—2nd Edition)
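A minimal Python sketch of the two-stage pipeline the abstract describes, assuming a trained segmentation model standing in for PCSNet and a channel-wise fusion of the original image, the masked image, and the mask (the exact fusion strategy is not specified in the abstract):

    import torch
    from torchvision.models import shufflenet_v2_x1_0

    def segmentation_assisted_classify(cxr, seg_model, cls_model, threshold=0.5):
        """cxr: (B, 1, H, W) grayscale CXR batch; seg_model stands in for PCSNet."""
        with torch.no_grad():
            lung_mask = (torch.sigmoid(seg_model(cxr)) > threshold).float()
        masked = cxr * lung_mask                            # keep lung regions only
        fused = torch.cat([cxr, masked, lung_mask], dim=1)  # (B, 3, H, W) fusion
        return cls_model(fused)                             # disease logits

    # ShuffleNetV2 expects 3-channel input, so the fused tensor feeds it directly.
    classifier = shufflenet_v2_x1_0(num_classes=2)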

15 pages, 1758 KiB  
Article
Eye-Guided Multimodal Fusion: Toward an Adaptive Learning Framework Using Explainable Artificial Intelligence
by Sahar Moradizeyveh, Ambreen Hanif, Sidong Liu, Yuankai Qi, Amin Beheshti and Antonio Di Ieva
Sensors 2025, 25(15), 4575; https://doi.org/10.3390/s25154575 - 24 Jul 2025
Viewed by 238
Abstract
Interpreting diagnostic imaging and identifying clinically relevant features remain challenging tasks, particularly for novice radiologists who often lack structured guidance and expert feedback. To bridge this gap, we propose an Eye-Gaze Guided Multimodal Fusion framework that leverages expert eye-tracking data to enhance learning and decision-making in medical image interpretation. By integrating chest X-ray (CXR) images with expert fixation maps, our approach captures radiologists’ visual attention patterns and highlights regions of interest (ROIs) critical for accurate diagnosis. The fusion model utilizes a shared backbone architecture to jointly process image and gaze modalities, thereby minimizing the impact of noise in fixation data. We validate the system’s interpretability using Gradient-weighted Class Activation Mapping (Grad-CAM) and assess both classification performance and explanation alignment with expert annotations. Comprehensive evaluations, including robustness under gaze noise and expert clinical review, demonstrate the framework’s effectiveness in improving model reliability and interpretability. This work offers a promising pathway toward intelligent, human-centered AI systems that support both diagnostic accuracy and medical training.
(This article belongs to the Section Sensing and Imaging)
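A Python sketch of the shared-backbone fusion described above: the CXR and the expert fixation map are stacked and processed by a single network, so both modalities share weights. The ResNet-18 backbone and the two-channel stacking are assumptions for illustration, not the authors' exact design:

    import torch
    import torch.nn as nn
    from torchvision.models import resnet18

    class GazeGuidedFusion(nn.Module):
        def __init__(self, num_classes=2):
            super().__init__()
            backbone = resnet18(num_classes=num_classes)
            # accept a 2-channel input: (image, fixation map)
            backbone.conv1 = nn.Conv2d(2, 64, kernel_size=7, stride=2,
                                       padding=3, bias=False)
            self.net = backbone  # one shared backbone for both modalities

        def forward(self, image, fixation_map):
            # image, fixation_map: (B, 1, H, W)
            return self.net(torch.cat([image, fixation_map], dim=1))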

24 pages, 637 KiB  
Review
Deep Learning Network Selection and Optimized Information Fusion for Enhanced COVID-19 Detection: A Literature Review
by Olga Adriana Caliman Sturdza, Florin Filip, Monica Terteliu Baitan and Mihai Dimian
Diagnostics 2025, 15(14), 1830; https://doi.org/10.3390/diagnostics15141830 - 21 Jul 2025
Viewed by 1079
Abstract
The rapid spread of COVID-19 increased the need for speedy diagnostic tools, which led scientists to conduct extensive research on deep learning (DL) applications that use chest imaging, such as chest X-ray (CXR) and computed tomography (CT). This review examines the development and performance of DL architectures, notably convolutional neural networks (CNNs) and emerging vision transformers (ViTs), in identifying COVID-19-related lung abnormalities. Individual ResNet architectures, along with CNN models, demonstrate strong diagnostic performance through transfer learning; however, ViTs provide better performance, with improved readability and reduced data requirements. Multimodal diagnostic systems now incorporate alternative methods in addition to imaging, using lung ultrasounds, clinical data, and cough sound evaluation. Information fusion techniques, which operate at the data, feature, and decision levels, enhance diagnostic performance. However, progress in COVID-19 detection is hindered by ongoing issues stemming from restricted and non-uniform datasets, domain differences in image standards, and complications with both diagnostic overfitting and poor generalization capabilities. Recent developments in COVID-19 diagnosis involve constructing expansive multi-noise information sets, creating clinical process-oriented AI algorithms, and implementing distributed learning protocols to ensure information security and system stability. While deep learning-based COVID-19 detection systems show strong potential for clinical application, broader validation, regulatory approvals, and continuous adaptation remain essential for their successful deployment and for preparing future pandemic response strategies.
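Of the fusion levels the review distinguishes, decision-level fusion is the simplest to illustrate; a hedged Python sketch, with model names as placeholders:

    import torch

    def decision_level_fusion(models, x, weights=None):
        """Weighted average of softmax outputs from independently trained models."""
        probs = [torch.softmax(m(x), dim=1) for m in models]
        if weights is None:
            weights = [1.0 / len(models)] * len(models)
        return sum(w * p for w, p in zip(weights, probs))  # (B, num_classes)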

33 pages, 5602 KiB  
Article
CELM: An Ensemble Deep Learning Model for Early Cardiomegaly Diagnosis in Chest Radiography
by Erdem Yanar, Fırat Hardalaç and Kubilay Ayturan
Diagnostics 2025, 15(13), 1602; https://doi.org/10.3390/diagnostics15131602 - 25 Jun 2025
Viewed by 543
Abstract
Background/Objectives: Cardiomegaly—defined as the abnormal enlargement of the heart—is a key radiological indicator of various cardiovascular conditions. Early detection is vital for initiating timely clinical intervention and improving patient outcomes. This study investigates the application of deep learning techniques for the automated diagnosis of cardiomegaly from chest X-ray (CXR) images, utilizing both convolutional neural networks (CNNs) and Vision Transformers (ViTs). Methods: We assembled one of the largest and most diverse CXR datasets to date, combining posteroanterior (PA) images from PadChest, NIH CXR, VinDr-CXR, and CheXpert. Multiple pre-trained CNN architectures (VGG16, ResNet50, InceptionV3, DenseNet121, DenseNet201, and AlexNet), as well as Vision Transformer models, were trained and compared. In addition, we introduced a novel stacking-based ensemble model—Combined Ensemble Learning Model (CELM)—that integrates complementary CNN features via a meta-classifier. Results: The CELM achieved the highest diagnostic performance, with a test accuracy of 92%, precision of 99%, recall of 89%, F1-score of 0.94, specificity of 92.0%, and AUC of 0.90. These results highlight the model’s high agreement with expert annotations and its potential for reliable clinical use. Notably, Vision Transformers offered competitive performance, suggesting their value as complementary tools alongside CNNs. Conclusions: With further validation, the proposed CELM framework may serve as an efficient and scalable decision-support tool for cardiomegaly screening, particularly in resource-limited settings such as intensive care units (ICUs) and emergency departments (EDs), where rapid and accurate diagnosis is imperative.
(This article belongs to the Special Issue Machine-Learning-Based Disease Diagnosis and Prediction)
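A Python sketch of the stacking idea behind CELM: held-out outputs from the base CNNs are concatenated and a meta-classifier is fit on top. Logistic regression as the meta-classifier is an assumption; the abstract does not name one:

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def train_meta_classifier(base_outputs, labels):
        """base_outputs: list of (N, k) arrays, one per base model, on held-out data."""
        stacked = np.concatenate(base_outputs, axis=1)  # (N, k * n_models)
        meta = LogisticRegression(max_iter=1000)
        meta.fit(stacked, labels)
        return meta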

22 pages, 1899 KiB  
Article
GIT-CXR: End-to-End Transformer for Chest X-Ray Report Generation
by Iustin Sîrbu, Iulia-Renata Sîrbu, Jasmina Bogojeska and Traian Rebedea
Information 2025, 16(7), 524; https://doi.org/10.3390/info16070524 - 23 Jun 2025
Cited by 1 | Viewed by 498
Abstract
Medical imaging is crucial for diagnosing, monitoring, and treating medical conditions. The medical reports of radiology images are the primary medium through which medical professionals can attest to their findings, but their writing is time-consuming and requires specialized clinical expertise. Therefore, the automated generation of radiography reports has the potential to improve and standardize patient care and significantly reduce the workload of clinicians. Through our work, we have designed and evaluated an end-to-end transformer-based method to generate accurate and factually complete radiology reports for X-ray images. Additionally, we are the first to introduce curriculum learning for end-to-end transformers in medical imaging and demonstrate its impact in obtaining improved performance. The experiments were conducted using the MIMIC-CXR-JPG database, the largest available chest X-ray dataset. The results obtained are comparable with the current state of the art on the natural language generation (NLG) metrics BLEU and ROUGE-L, while setting new state-of-the-art results on the examples-averaged F1, F1-macro, and F1-micro metrics for clinical accuracy and on the METEOR metric widely used for NLG.
(This article belongs to the Section Information Applications)
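A Python sketch of curriculum learning as it might apply here: training examples are sorted by a difficulty proxy and the model sees a growing, easy-to-hard subset each epoch. Using report length as the proxy is an assumption for illustration only:

    def curriculum_subset(samples, epoch, total_epochs,
                          difficulty=lambda s: len(s["report"])):
        """Return the slice of training data exposed at this epoch."""
        ordered = sorted(samples, key=difficulty)
        # fraction of the sorted data exposed grows linearly with the epoch
        cutoff = max(1, int(len(ordered) * min(1.0, (epoch + 1) / total_epochs)))
        return ordered[:cutoff]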

28 pages, 5512 KiB  
Article
PELM: A Deep Learning Model for Early Detection of Pneumonia in Chest Radiography
by Erdem Yanar, Fırat Hardalaç and Kubilay Ayturan
Appl. Sci. 2025, 15(12), 6487; https://doi.org/10.3390/app15126487 - 9 Jun 2025
Cited by 1 | Viewed by 701
Abstract
Pneumonia remains a leading cause of respiratory morbidity and mortality, underscoring the need for rapid and accurate diagnosis to enable timely treatment and prevent complications. This study introduces PELM (Pneumonia Ensemble Learning Model), a novel deep learning framework for automated pneumonia detection using chest X-ray (CXR) images. The model integrates four high-performing architectures—InceptionV3, VGG16, ResNet50, and Vision Transformer (ViT)—via feature-level concatenation to exploit complementary feature representations. A curated, large-scale dataset comprising 50,000 PA-view CXR images was assembled from NIH ChestX-ray14, CheXpert, PadChest, and Kaggle CXR Pneumonia datasets, including both pneumonia and non-pneumonia cases. To ensure fair benchmarking, all models were trained and evaluated under identical preprocessing and hyperparameter settings. PELM achieved outstanding performance, with 96% accuracy, 99% precision, 91% recall, 95% F1-score, 91% specificity, and an AUC of 0.91—surpassing individual model baselines and previously published methods. Additionally, comparative experiments were conducted using tabular clinical data from over 10,000 patients, enabling a direct evaluation of image-based and structured-data-based classification pipelines. These results demonstrate that ensemble learning with hybrid architectures significantly enhances diagnostic accuracy and generalization. The proposed approach is computationally efficient, clinically scalable, and particularly well-suited for deployment in low-resource healthcare settings, where radiologist access may be limited. PELM represents a promising advancement toward reliable, interpretable, and accessible AI-assisted pneumonia screening in global clinical practice.
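A Python sketch of the feature-level concatenation PELM describes: pooled feature vectors from several backbones are joined before one classification head. The backbones and feature dimensions are illustrative:

    import torch
    import torch.nn as nn

    class FeatureConcatEnsemble(nn.Module):
        def __init__(self, backbones, feat_dims, num_classes=2):
            super().__init__()
            self.backbones = nn.ModuleList(backbones)  # each: (B, 3, H, W) -> (B, d_i)
            self.head = nn.Linear(sum(feat_dims), num_classes)

        def forward(self, x):
            feats = [b(x) for b in self.backbones]     # one vector per backbone
            return self.head(torch.cat(feats, dim=1))  # joint classification head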

27 pages, 3997 KiB  
Article
NCT-CXR: Enhancing Pulmonary Abnormality Segmentation on Chest X-Rays Using Improved Coordinate Geometric Transformations
by Abu Salam, Pulung Nurtantio Andono, Purwanto, Moch Arief Soeleman, Mohamad Sidiq, Farrikh Alzami, Ika Novita Dewi, Suryanti, Eko Adhi Pangarsa, Daniel Rizky, Budi Setiawan, Damai Santosa, Antonius Gunawan Santoso, Farid Che Ghazali and Eko Supriyanto
J. Imaging 2025, 11(6), 186; https://doi.org/10.3390/jimaging11060186 - 5 Jun 2025
Viewed by 1424
Abstract
Medical image segmentation, especially in chest X-ray (CXR) analysis, encounters substantial problems such as class imbalance, annotation inconsistencies, and the necessity for accurate pathological region identification. This research aims to improve the precision and clinical reliability of pulmonary abnormality segmentation by developing NCT-CXR, a framework that combines anatomically constrained data augmentation with expert-guided annotation refinement. NCT-CXR applies carefully calibrated discrete-angle rotations (±5°, ±10°) and intensity-based augmentations to enrich training data while preserving spatial and anatomical integrity. To address label noise in the NIH Chest X-ray dataset, we further introduce a clinically validated annotation refinement pipeline using the OncoDocAI platform, resulting in multi-label pixel-level segmentation masks for nine thoracic conditions. YOLOv8 was selected as the segmentation backbone due to its architectural efficiency, speed, and high spatial accuracy. Experimental results show that NCT-CXR significantly improves segmentation precision, especially for pneumothorax (0.829 and 0.804 for ±5° and ±10°, respectively). Non-parametric statistical testing (Kruskal–Wallis, H = 14.874, p = 0.0019) and post hoc Nemenyi analysis (p = 0.0138 and p = 0.0056) confirm the superiority of discrete-angle augmentation over mixed strategies. These findings underscore the importance of clinically constrained augmentation and high-quality annotation in building robust segmentation models. NCT-CXR offers a practical, high-performance solution for integrating deep learning into radiological workflows.
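A Python sketch of the discrete-angle augmentation described above, rotating the image and its mask together so annotations stay aligned (a minimal sketch, not the authors' pipeline):

    import random
    import torchvision.transforms.functional as TF

    def discrete_angle_rotate(image, mask, angles=(-10, -5, 5, 10)):
        """Rotate by one of the calibrated angles (degrees); mask follows the image."""
        angle = random.choice(angles)
        return TF.rotate(image, angle), TF.rotate(mask, angle)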

19 pages, 2054 KiB  
Article
Enhancing Multi-Label Chest X-Ray Classification Using an Improved Ranking Loss
by Muhammad Shehzad Hanif, Muhammad Bilal, Abdullah H. Alsaggaf and Ubaid M. Al-Saggaf
Bioengineering 2025, 12(6), 593; https://doi.org/10.3390/bioengineering12060593 - 31 May 2025
Viewed by 915
Abstract
This article addresses the non-trivial problem of classifying thoracic diseases in chest X-ray (CXR) images. A single CXR image may exhibit multiple diseases, making this a multi-label classification problem. Additionally, the inherent class imbalance makes the task even more challenging, as some diseases occur more frequently than others. Our methodology is based on transfer learning, aiming to fine-tune a pretrained DenseNet121 model using CXR images from the NIH Chest X-ray14 dataset. Training from scratch would require a large-scale dataset containing millions of images, which is not available in the public domain for this multi-label classification task. To address the class imbalance problem, we propose a rank-based loss derived from the Zero-bounded Log-sum-exp and Pairwise Rank-based (ZLPR) loss, which we refer to as focal ZLPR (FZLPR). In designing FZLPR, we draw inspiration from the focal loss, where the objective is to emphasize hard-to-classify examples (instances of rare diseases) during training compared to well-classified ones. We achieve this by incorporating a “temperature” parameter to scale the label scores predicted by the model during training in the original ZLPR loss function. Experimental results on the NIH Chest X-ray14 dataset demonstrate that FZLPR loss outperforms other loss functions, including binary cross entropy (BCE) and focal loss. Moreover, by using test-time augmentations, our model trained using FZLPR loss achieves an average AUC of 80.96%, which is competitive with existing approaches.
(This article belongs to the Special Issue Machine Learning and Deep Learning Applications in Healthcare)
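A Python sketch of the loss: the standard ZLPR formulation with the predicted scores divided by a temperature, following the abstract's description; exactly where the temperature enters is an assumption:

    import torch

    def fzlpr_loss(scores, targets, temperature=1.0):
        """scores: (B, C) logits; targets: (B, C) multi-hot labels in {0, 1}."""
        s = scores / temperature  # temperature-scaled scores (assumed placement)
        neg = torch.where(targets.bool(), torch.full_like(s, float("-inf")), s)
        pos = torch.where(targets.bool(), -s, torch.full_like(s, float("-inf")))
        zero = torch.zeros(s.size(0), 1, device=s.device)
        # ZLPR: log(1 + sum e^{s_neg}) + log(1 + sum e^{-s_pos})
        return (torch.logsumexp(torch.cat([zero, neg], dim=1), dim=1)
                + torch.logsumexp(torch.cat([zero, pos], dim=1), dim=1)).mean()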

16 pages, 1085 KiB  
Systematic Review
Explainable Artificial Intelligence in Radiological Cardiovascular Imaging—A Systematic Review
by Matteo Haupt, Martin H. Maurer and Rohit Philip Thomas
Diagnostics 2025, 15(11), 1399; https://doi.org/10.3390/diagnostics15111399 - 31 May 2025
Cited by 1 | Viewed by 1085
Abstract
Background: Artificial intelligence (AI) and deep learning are increasingly applied in cardiovascular imaging. However, the “black box” nature of these models raises challenges for clinical trust and integration. Explainable Artificial Intelligence (XAI) seeks to address these concerns by providing insights into model decision-making. This systematic review synthesizes current research on the use of XAI methods in radiological cardiovascular imaging. Methods: A systematic literature search was conducted in PubMed, Scopus, and Web of Science to identify peer-reviewed original research articles published between January 2015 and March 2025. Studies were included if they applied XAI techniques—such as Gradient-Weighted Class Activation Mapping (Grad-CAM), Shapley Additive Explanations (SHAP), Local Interpretable Model-Agnostic Explanations (LIME), or saliency maps—to cardiovascular imaging modalities, including cardiac computed tomography (CT), magnetic resonance imaging (MRI), echocardiography and other ultrasound examinations, and chest X-ray (CXR). Studies focusing on nuclear medicine, structured/tabular data without imaging, or lacking concrete explainability features were excluded. Screening and data extraction followed PRISMA guidelines. Results: A total of 28 studies met the inclusion criteria. Ultrasound examinations (n = 9) and CT (n = 9) were the most common imaging modalities, followed by MRI (n = 6) and chest X-rays (n = 4). Clinical applications included disease classification (e.g., coronary artery disease and valvular heart disease) and the detection of myocardial or congenital abnormalities. Grad-CAM was the most frequently employed XAI method, followed by SHAP. Most studies used saliency-based techniques to generate visual explanations of model predictions. Conclusions: XAI holds considerable promise for improving the transparency and clinical acceptance of deep learning models in cardiovascular imaging. However, the evaluation of XAI methods remains largely qualitative, and standardization is lacking. Future research should focus on the robust, quantitative assessment of explainability, prospective clinical validation, and the development of more advanced XAI techniques beyond saliency-based methods. Strengthening the interpretability of AI models will be crucial to ensuring their safe, ethical, and effective integration into cardiovascular care.
(This article belongs to the Special Issue Latest Advances and Prospects in Cardiovascular Imaging)
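Since Grad-CAM is the most frequently employed method in the reviewed studies, a minimal Python sketch of its mechanics (class-score gradients weight the last convolutional feature maps):

    import torch
    import torch.nn.functional as F

    def grad_cam(model, target_layer, image, class_idx):
        feats, grads = {}, {}
        h1 = target_layer.register_forward_hook(lambda m, i, o: feats.update(a=o))
        h2 = target_layer.register_full_backward_hook(
            lambda m, gi, go: grads.update(a=go[0]))
        model.zero_grad()
        model(image)[0, class_idx].backward()  # gradient of the target class score
        h1.remove(); h2.remove()
        weights = grads["a"].mean(dim=(2, 3), keepdim=True)  # pooled gradients
        cam = F.relu((weights * feats["a"]).sum(dim=1))      # (B, h, w) heatmap
        return cam / (cam.max() + 1e-8)                      # normalized to [0, 1]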

17 pages, 2456 KiB  
Article
The Accuracy of ChatGPT-4o in Interpreting Chest and Abdominal X-Ray Images
by Pietro G. Lacaita, Malik Galijasevic, Michael Swoboda, Leonhard Gruber, Yannick Scharll, Fabian Barbieri, Gerlig Widmann and Gudrun M. Feuchtner
J. Pers. Med. 2025, 15(5), 194; https://doi.org/10.3390/jpm15050194 - 10 May 2025
Viewed by 2540
Abstract
Background/Objectives: Large language models (LLMs), such as ChatGPT, have emerged as potential clinical support tools to enhance precision in personalized patient care, but their reliability in radiological image interpretation remains uncertain. The primary aim of our study was to evaluate the diagnostic accuracy of ChatGPT-4o in interpreting chest X-rays (CXRs) and abdominal X-rays (AXRs) by comparing its performance to expert radiology findings, while secondary aims were diagnostic confidence and patient safety. Methods: A total of 500 X-rays, including 257 CXRs (51.4%) and 243 AXRs (48.6%), were analyzed. Diagnoses made by ChatGPT-4o were compared to expert interpretations. Confidence scores (1–4) were assigned, and responses were evaluated for patient safety. Results: ChatGPT-4o correctly identified 345 of 500 (69%) pathologies (95% CI: 64.81–72.9). For AXRs, 175 of 243 (72.02%) pathologies were correctly diagnosed (95% CI: 66.06–77.28), while for CXRs, 170 of 257 (66.15%) were accurate (95% CI: 60.16–71.66). The highest detection rates among CXRs were observed for pulmonary edema, tumor, pneumonia, pleural effusion, cardiomegaly, and emphysema; lower rates were observed for pneumothorax, rib fractures, and enlarged mediastinum. AXR performance was highest for intestinal obstruction and foreign bodies, and weaker for pneumoperitoneum, renal calculi, and diverticulitis. Confidence scores were higher for AXRs (mean 3.45 ± 1.1) than CXRs (mean 2.48 ± 1.45). All responses (100%) were considered safe for the patient. Interobserver agreement was high (kappa = 0.920), and reliability (second prompt) was moderate (kappa = 0.750). Conclusions: ChatGPT-4o demonstrated moderate accuracy in interpreting X-rays, higher for AXRs than for CXRs. Improvements are required before it can serve as an efficient clinical support tool.
(This article belongs to the Section Methodology, Drug and Device Discovery)
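The quoted confidence intervals are consistent with Wilson score intervals; a quick Python check for the overall accuracy (345 of 500 correct):

    from math import sqrt

    def wilson_ci(successes, n, z=1.96):
        p = successes / n
        denom = 1 + z**2 / n
        center = (p + z**2 / (2 * n)) / denom
        half = z * sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
        return center - half, center + half

    print(wilson_ci(345, 500))  # (0.6481, 0.7290), i.e. the reported 64.81 to 72.9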

13 pages, 1323 KiB  
Protocol
Lung Elastance and Microvascularization as Quantitative Non-Invasive Biomarkers for the Aetiological Diagnosis of Lung Consolidations in Children (ELASMIC Study)
by Sergi Huerta-Calpe, Carmina Guitart, Josep Lluis Carrasco, Bárbara Salas, Francisco José Cambra, Iolanda Jordan and Mònica Balaguer
Diagnostics 2025, 15(7), 910; https://doi.org/10.3390/diagnostics15070910 - 2 Apr 2025
Viewed by 558
Abstract
Background: Acute lower respiratory tract conditions are highly prevalent in paediatrics. Many of these conditions present as consolidations on imaging studies. One of the most common causes is bacterial pneumonia (BP), which requires an accurate diagnosis to implement the best treatment plan. Although major guidelines constrain the use of invasive tests, chest X-ray (CXR) or blood tests are still routinely used for the diagnosis. In this regard, the introduction of lung ultrasound (LUS) signified an advancement in reducing the invasiveness of diagnosis. However, there are still situations where distinguishing between different aetiologies remains challenging, especially in the case of atelectasis. Methods: This is a prospective cohort study to assess the diagnostic accuracy of new non-invasive, quantifiable, and reproducible imaging biomarkers (lung elastance and microvascularization ratio) for differentiating BP from atelectasis, another major cause of consolidation on imaging. It will be conducted at Sant Joan de Déu Hospital in Spain from June 2025 to June 2027. Firstly, the imaging biomarkers will be measured in well-aerated lung tissue without consolidation to establish their values in healthy lung tissue, according to a predefined imaging acquisition protocol. Subsequently, the imaging biomarkers will be assessed in patients with lung consolidation confirmed by LUS (Group 1: BP; Group 2: atelectasis). Results: The study aims to determine whether there are statistically significant differences in the biomarker values in relation to the normal values and between the different aetiological groups. Conclusions: Demonstrating reliable diagnostic accuracy for these biomarkers could significantly reduce the need for invasive techniques and improve the therapeutic management of many patients with BP and other pulmonary conditions presenting with consolidation on imaging.
(This article belongs to the Special Issue Recent Developments and Future Trends in Thoracic Imaging)

20 pages, 3983 KiB  
Article
Clinicians’ Agreement on Extrapulmonary Radiographic Findings in Chest X-Rays Using a Diagnostic Labelling Scheme
by Lea Marie Pehrson, Dana Li, Alyas Mayar, Marco Fraccaro, Rasmus Bonnevie, Peter Jagd Sørensen, Alexander Malcom Rykkje, Tobias Thostrup Andersen, Henrik Steglich-Arnholm, Dorte Marianne Rohde Stærk, Lotte Borgwardt, Sune Darkner, Jonathan Frederik Carlsen, Michael Bachmann Nielsen and Silvia Ingala
Diagnostics 2025, 15(7), 902; https://doi.org/10.3390/diagnostics15070902 - 1 Apr 2025
Viewed by 527
Abstract
Objective: Reliable reading and annotation of chest X-ray (CXR) images are essential for both clinical decision-making and AI model development. While most of the literature emphasizes pulmonary findings, this study evaluates the consistency and reliability of annotations for extrapulmonary findings, using a labelling scheme. Methods: Six clinicians with varying experience levels (novice, intermediate, and experienced) annotated 100 CXR images using a diagnostic labelling scheme, in two rounds separated by a three-week washout period. Annotation consistency was assessed using Randolph’s free-marginal kappa (RK), prevalence- and bias-adjusted kappa (PABAK), proportion positive agreement (PPA), and proportion negative agreement (PNA). Pairwise comparisons and McNemar’s test were conducted to assess inter-reader and intra-reader agreement. Results: PABAK values indicated high overall grouped labelling agreement (novice: 0.86, intermediate: 0.90, experienced: 0.91). PNA values demonstrated strong agreement on negative findings, while PPA values showed moderate-to-low consistency in positive findings. Significant differences in specific agreement emerged between novice and experienced clinicians for eight labels, but there were no significant variations in RK across experience levels. McNemar’s test confirmed annotation stability between rounds. Conclusions: This study demonstrates that clinician annotations of extrapulmonary findings in CXR are consistent and reliable across different experience levels when using a pre-defined diagnostic labelling scheme. These insights aid in optimizing training strategies for both clinicians and AI models.
(This article belongs to the Section Machine Learning and Artificial Intelligence in Diagnostics)
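A Python sketch of the specific-agreement statistics used above, for one label and one reader pair, from a 2x2 table (a = both readers positive, d = both negative, b and c = disagreements):

    def agreement_stats(a, b, c, d):
        n = a + b + c + d
        po = (a + d) / n               # observed agreement
        pabak = 2 * po - 1             # prevalence- and bias-adjusted kappa
        ppa = 2 * a / (2 * a + b + c)  # proportion positive agreement
        pna = 2 * d / (2 * d + b + c)  # proportion negative agreement
        return pabak, ppa, pna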

17 pages, 2881 KiB  
Article
CXR-Seg: A Novel Deep Learning Network for Lung Segmentation from Chest X-Ray Images
by Sadia Din, Muhammad Shoaib and Erchin Serpedin
Bioengineering 2025, 12(2), 167; https://doi.org/10.3390/bioengineering12020167 - 10 Feb 2025
Cited by 1 | Viewed by 2291
Abstract
Over the past decade, deep learning techniques, particularly neural networks, have become essential in medical imaging for tasks like image detection, classification, and segmentation. These methods have greatly enhanced diagnostic accuracy, enabling quicker identification and more effective treatments. In chest X-ray analysis, however, challenges remain in accurately segmenting and classifying organs such as the lungs, heart, diaphragm, sternum, and clavicles, as well as detecting abnormalities in the thoracic cavity. Despite progress, these issues highlight the need for improved approaches to overcome segmentation difficulties and enhance diagnostic reliability. In this context, we propose a novel architecture named CXR-Seg, tailored for semantic segmentation of lungs from chest X-ray images. The proposed network mainly consists of four components: a pre-trained EfficientNet as an encoder to extract feature encodings, a spatial enhancement module embedded in the skip connection to promote adjacent feature fusion, a transformer attention module at the bottleneck layer, and a multi-scale feature fusion block at the decoder. The performance of the proposed CXR-Seg was evaluated on four publicly available datasets (MC, Darwin, and Shenzhen for chest X-rays, and TCIA for brain FLAIR segmentation from MRI images). The proposed method achieved a Jaccard index, Dice coefficient, accuracy, sensitivity, and specificity of 95.63%, 97.76%, 98.77%, 98.00%, and 99.05% on MC; 91.66%, 95.62%, 96.35%, 95.53%, and 96.94% on V7 Darwin COVID-19; and 92.97%, 96.32%, 96.69%, 96.01%, and 97.40% on the Shenzhen Tuberculosis CXR Dataset, respectively. In conclusion, the proposed network offers improved performance compared with state-of-the-art methods and better generalization for the semantic segmentation of lungs from chest X-ray images.
(This article belongs to the Section Biosignal Processing)
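The overlap metrics reported above are standard; a small Python sketch for binary lung masks:

    import numpy as np

    def jaccard_dice(pred, target):
        """pred, target: (H, W) binary masks as numpy arrays."""
        pred, target = pred.astype(bool), target.astype(bool)
        inter = np.logical_and(pred, target).sum()
        jaccard = inter / np.logical_or(pred, target).sum()
        dice = 2 * inter / (pred.sum() + target.sum())
        return jaccard, dice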

15 pages, 2930 KiB  
Article
Anatomically Guided Deep Learning System for Right Internal Jugular Line (RIJL) Segmentation and Tip Localization in Chest X-Ray
by Siyuan Wei, Liza Shrestha, Gabriel Melendez-Corres and Matthew S. Brown
Life 2025, 15(2), 201; https://doi.org/10.3390/life15020201 - 29 Jan 2025
Viewed by 1107
Abstract
The right internal jugular line (RIJL) is a type of central venous catheter (CVC) inserted into the right internal jugular vein to deliver medications and monitor vital functions in ICU patients. The placement of the RIJL is routinely checked by a clinician in a chest X-ray (CXR) image to ensure its proper function and patient safety. To reduce the workload of clinicians, deep learning-based automated detection algorithms have been developed to detect CVCs in CXRs. Although the RIJL is the most widely used type of CVC, there is a paucity of investigations focused on its accurate segmentation and tip localization. In this study, we propose a deep learning system that integrates an anatomical landmark segmentation, an RIJL segmentation network, and a postprocessing function to segment the RIJL course and detect the tip with accuracy and precision. We utilized the nnU-Net framework to configure the segmentation network. The entire system was implemented on the SimpleMind Cognitive AI platform, enabling the integration of anatomical knowledge and spatial reasoning to model relationships between objects within the image. Specifically, the trachea was used as an anatomical landmark to extract the subregion of a CXR image that is most relevant to the RIJL. The subregions were used to generate cropped images, which were used to train the segmentation network. The segmentation results were recovered to the original dimensions, and the most inferior point’s coordinates in each image were defined as the tip. With guidance from the anatomical landmark and customized postprocessing, the proposed method achieved improved segmentation and tip localization compared to the baseline segmentation network: the mean average symmetric surface distance (ASSD) decreased from 2.72 to 1.41 mm, and the mean tip distance was reduced from 11.27 to 8.29 mm.
(This article belongs to the Special Issue Current Progress in Medical Image Segmentation)
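A Python sketch of the tip rule the abstract defines: the catheter tip is taken as the most inferior point of the segmented RIJL, i.e., the largest row index in image coordinates:

    import numpy as np

    def tip_from_mask(mask):
        """mask: (H, W) binary RIJL segmentation; returns (row, col) of the tip."""
        rows, cols = np.nonzero(mask)
        if rows.size == 0:
            return None              # no catheter detected in this image
        i = rows.argmax()            # most inferior pixel (rows grow downward)
        return int(rows[i]), int(cols[i])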

19 pages, 4635 KiB  
Article
ZooCNN: A Zero-Order Optimized Convolutional Neural Network for Pneumonia Classification Using Chest Radiographs
by Saravana Kumar Ganesan, Parthasarathy Velusamy, Santhosh Rajendran, Ranjithkumar Sakthivel, Manikandan Bose and Baskaran Stephen Inbaraj
J. Imaging 2025, 11(1), 22; https://doi.org/10.3390/jimaging11010022 - 13 Jan 2025
Cited by 1 | Viewed by 1383
Abstract
Pneumonia, a leading cause of mortality in children under five, is usually diagnosed through chest X-ray (CXR) images due to its efficiency and cost-effectiveness. However, the shortage of radiologists in the Least Developed Countries (LDCs) emphasizes the need for automated pneumonia diagnostic systems. This article presents a deep learning model, the Zero-Order Optimized Convolutional Neural Network (ZooCNN), a Zero-Order Optimization (Zoo)-based CNN for classifying CXR images into three classes: Normal Lungs (NL), Bacterial Pneumonia (BP), and Viral Pneumonia (VP). The model utilizes the Adaptive Synthetic Sampling (ADASYN) approach to ensure class balance in the Kaggle CXR Images (Pneumonia) dataset. Conventional CNN models, though promising, face challenges such as overfitting and high computational costs. Applying ZooPlatform (ZooPT), a hyperparameter fine-tuning strategy, to a baseline CNN model yields the modified ZooCNN architecture with a 72% reduction in weights. The model was trained, tested, and validated on the Kaggle CXR Images (Pneumonia) dataset. The ZooCNN achieved an accuracy of 97.27%, a sensitivity of 97.00%, a specificity of 98.60%, and an F1 score of 97.03%. The results were compared with contemporary models to highlight the efficacy of the ZooCNN in pneumonia classification (PC), offering a potential tool to aid physicians in clinical settings.
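A hedged Python sketch of the ADASYN balancing step, using the imbalanced-learn implementation on flattened images; this is illustrative only, as the authors' preprocessing is not specified in the abstract:

    import numpy as np
    from imblearn.over_sampling import ADASYN

    def balance_dataset(images, labels):
        """images: (N, H, W) grayscale CXRs; labels: (N,) in {NL, BP, VP}."""
        n, h, w = images.shape
        flat_res, labels_res = ADASYN(random_state=0).fit_resample(
            images.reshape(n, -1), labels)        # synthesize minority-class samples
        return flat_res.reshape(-1, h, w), labels_res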
