Search Results (347)

Search Parameters:
Keywords = CXR

15 pages, 1019 KB  
Systematic Review
Artificial Intelligence for Detecting Aortic Arch Calcification on Chest Radiographs: A Systematic Review
by Krzysztof Żerdziński, Julita Janiec, Maja Dreger, Piotr Dudek, Iga Paszkiewicz, Adam Mitręga, Michał Bielówka, Alicja Nawrat, Jakub Kufel and Marcin Rojek
Diagnostics 2026, 16(2), 243; https://doi.org/10.3390/diagnostics16020243 - 12 Jan 2026
Abstract
Background/Objectives: Aortic-arch calcification (AAC) is a robust predictor of cardiovascular events often overlooked on routine chest radiographs (CXR). This systematic review aimed to evaluate the diagnostic accuracy of artificial intelligence (AI) models for detecting AAC on CXR and assess their potential for clinical implementation. Methods: The review followed PRISMA 2020 guidelines (PROSPERO: CRD420251208627). A search of Embase, PubMed, Scopus, and Web of Science was conducted (Jan 2020–Oct 2025) for studies evaluating AI models detecting AAC in adults. Bias was assessed using QUADAS-2. Due to methodological heterogeneity, a narrative synthesis was performed instead of a meta-analysis. Results: Out of 115 records, three retrospective studies (2022–2024) utilizing CNNs across ~2.7 million images were included. Models demonstrated high diagnostic discrimination (AUROC 0.81–0.99), though performance estimates were often attenuated in external cohorts. Pronounced sensitivity–specificity trade-offs occurred: one model achieved 95.9% recall, while another exhibited near-perfect specificity (0.99) despite markedly low sensitivity (0.22). Although the risk of bias was predominantly low, the overall GRADE certainty remained low due to methodological heterogeneity and the absence of cross-sectional imaging reference standards. Conclusions: Deep learning-based models reliably detect AAC on routine CXR, offering a scalable tool for opportunistic cardiovascular risk stratification. However, significant heterogeneity in model architectures and validation strategies currently limits broad comparability. Future research requires standardized annotation protocols and external validation to ensure clinical generalizability. Full article
(This article belongs to the Section Machine Learning and Artificial Intelligence in Diagnostics)
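The sensitivity–specificity trade-off highlighted in this abstract can be made concrete with a short sketch; the confusion-matrix counts below are invented for illustration and are not taken from the reviewed studies:

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Recall (sensitivity) and specificity from confusion-matrix counts."""
    return tp / (tp + fn), tn / (tn + fp)

# Two hypothetical operating points mirroring the trade-off in the review:
# a recall-oriented model vs. a specificity-oriented one.
sens_a, spec_a = sensitivity_specificity(tp=959, fn=41, tn=700, fp=300)
sens_b, spec_b = sensitivity_specificity(tp=220, fn=780, tn=990, fp=10)
print(round(sens_a, 3), round(spec_b, 2))  # prints: 0.959 0.99
```

A model tuned like the first operating point catches almost every calcification but raises more false alarms; the second rarely flags healthy studies but misses most positives.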

16 pages, 1318 KB  
Article
A Retrospective Observational Study of Pulmonary Impairments in Long COVID Patients
by Lanre Peter Daodu, Yogini Raste, Judith E. Allgrove, Francesca I. F. Arrigoni and Reem Kayyali
Biomedicines 2026, 14(1), 145; https://doi.org/10.3390/biomedicines14010145 - 10 Jan 2026
Abstract
Background/Objective: Pulmonary impairments have been identified as some of the most complex and debilitating post-acute sequelae of SARS-CoV-2 infection (PASC) or long COVID. This study identified and characterised the specific forms of pulmonary impairments detected using pulmonary function tests (PFT), chest X-rays (CXR), and computed tomography (CT) scans in patients with long COVID symptoms. Methods: We conducted a single-centre retrospective study to evaluate 60 patients with long COVID who underwent PFT, CXR, and CT scans. Pulmonary function in long COVID patients was assessed using defined thresholds for key test parameters, enabling categorisation into normal, restrictive, obstructive, and mixed lung-function patterns. We applied exact binomial (Clopper–Pearson) 95% confidence intervals to calculate the proportions of patients falling below the defined thresholds. We also assessed the relationships among spirometric indices, lung volumes, and diffusion capacity (DLCO) using scatter plots and corresponding linear regressions. The findings from the CXRs and CT scans were categorised, and their prevalence was calculated. Results: A total of 60 patients with long COVID symptoms (mean age 60 ± 13 years; 57% female) were evaluated. The cohort was ethnically diverse and predominantly non-smokers, with a mean BMI of 32.4 ± 6.3 kg/m2. PFT revealed that most patients had preserved spirometry, with mean Forced Expiratory Volume in 1 Second (FEV1) and Forced Vital Capacity (FVC) above 90% predicted. However, a significant proportion exhibited reductions in lung volumes, with total lung capacity (TLC) decreasing in 35%, and diffusion capacity (DLCO/TLCO) decreasing in 75%. Lung function pattern analysis showed 88% of patients had normal function, while 12% displayed a restrictive pattern; no obstructive or mixed patterns were observed. 
Radiographic assessment revealed that 58% of chest X-rays were normal, whereas CT scans showed ground-glass opacities (GGO) in 65% of patients and fibrotic changes in 55%, along with findings such as atelectasis, air trapping, and bronchial wall thickening. Conclusions: Spirometry alone is insufficient to detect impairment of gas exchange or underlying histopathological changes in patients with long COVID. Our findings show that, despite normal spirometry results, many patients exhibit significant diffusion impairment, fibrotic alterations, and ground-glass opacities, indicating persistent lung and microvascular damage. These results underscore the importance of comprehensive assessment using multiple diagnostic tools to identify and manage chronic pulmonary dysfunction in long COVID. Full article
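The exact binomial (Clopper–Pearson) interval used above can be computed with the standard library alone; the sketch below finds the bounds by bisection on the binomial CDF, and the 21-of-60 example mirrors the reported 35% TLC reduction:

```python
from math import comb

def binom_cdf(k, n, p):
    """P(X <= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

def clopper_pearson(k, n, alpha=0.05):
    """Exact (Clopper-Pearson) two-sided 95% CI for a binomial proportion k/n,
    found by bisection on the binomial CDF (pure stdlib sketch)."""
    def solve(f):
        lo, hi = 0.0, 1.0
        for _ in range(100):          # bisection; f is positive at 0, negative at 1
            mid = (lo + hi) / 2
            if f(mid) > 0:
                lo = mid
            else:
                hi = mid
        return (lo + hi) / 2
    # Lower bound: p with P(X >= k | p) = alpha/2  (0 when k == 0)
    lower = 0.0 if k == 0 else solve(lambda p: alpha / 2 - (1 - binom_cdf(k - 1, n, p)))
    # Upper bound: p with P(X <= k | p) = alpha/2  (1 when k == n)
    upper = 1.0 if k == n else solve(lambda p: binom_cdf(k, n, p) - alpha / 2)
    return lower, upper

lo, hi = clopper_pearson(21, 60)  # e.g. 21 of 60 patients (35%) below a TLC threshold
```

For small cohorts like this one, the exact interval is noticeably wider than the normal approximation, which is why the authors report it.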

20 pages, 465 KB  
Article
Cross-Assessment & Verification for Evaluation (CAVe) Framework for AI Risk and Compliance Assessment Using a Cross-Compliance Index (CCI)
by Cheon-Ho Min, Dae-Geun Lee and Jin Kwak
Electronics 2026, 15(2), 307; https://doi.org/10.3390/electronics15020307 - 10 Jan 2026
Abstract
This study addresses the challenge of evaluating artificial intelligence (AI) systems across heterogeneous regulatory frameworks. Although the NIST AI RMF, EU AI Act, and ISO/IEC 23894/42001 define important governance requirements, they do not provide a unified quantitative method. To bridge this gap, we propose the Cross-Assessment & Verification for Evaluation (CAVe) Framework, which maps shared regulatory requirements to four measurable indicators—accuracy, robustness, privacy, and fairness—and aggregates them into a Cross-Compliance Index (CCI) using normalization, thresholding, evidence penalties, and cross-framework weighting. Two validation scenarios demonstrate the applicability of the approach. The first scenario evaluates a Naïve Bayes-based spam classifier trained on the public UCI SMS Spam Collection dataset, representing a low-risk text-classification setting. The model achieved accuracy 0.9850, robustness 0.9945, fairness 0.9908, and privacy 0.9922, resulting in a CCI of 0.9741 (Pass). The second scenario examines a high-risk healthcare AI system using a CheXNet-style convolutional model evaluated on the MIMIC-CXR dataset. Diagnostic accuracy, distribution-shift robustness, group fairness (finding-specific group comparison), and privacy risk (membership-inference susceptibility) yielded 0.7680, 0.7974, 0.9070, and 0.7500, respectively. Under healthcare-oriented weighting and safety thresholds, the CCI was 0.5046 (Fail). These results show how identical evaluation principles produce different compliance outcomes depending on domain risk and regulatory priorities. Overall, CAVe provides a transparent, reproducible mechanism for aligning technical performance with regulatory expectations across diverse domains. Additional metric definitions and parameter settings are provided in the manuscript to support reproducibility, and future extensions will incorporate higher-level indicators such as transparency and human oversight. Full article
(This article belongs to the Special Issue Artificial Intelligence Safety and Security)
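The paper's exact CCI formula is not reproduced in the abstract; the sketch below is a hypothetical illustration of the general recipe it describes (normalized indicator scores, safety thresholds, an evidence penalty, and framework weighting), with all weights, thresholds, and the halving penalty invented for the example:

```python
def cci(scores, weights, thresholds, evidence=1.0):
    """Hypothetical cross-compliance index sketch: weighted mean of normalized
    indicator scores, gated by per-indicator safety thresholds and scaled by an
    evidence factor in [0, 1]. Not the paper's exact formula."""
    assert scores.keys() == weights.keys() == thresholds.keys()
    index = sum(weights[k] * scores[k] for k in scores) / sum(weights.values())
    # Hard safety gating: any indicator below its threshold halves the index.
    for k, s in scores.items():
        if s < thresholds[k]:
            index *= 0.5
            break
    return round(index * evidence, 4)

# Low-risk scenario (uniform weights, lenient thresholds) vs. a high-risk
# scenario (stricter thresholds, accuracy-heavy weighting); scores follow
# the abstract, everything else is illustrative.
low = cci({"acc": 0.985, "rob": 0.9945, "fair": 0.9908, "priv": 0.9922},
          {"acc": 1, "rob": 1, "fair": 1, "priv": 1},
          {"acc": 0.9, "rob": 0.9, "fair": 0.9, "priv": 0.9})
high = cci({"acc": 0.768, "rob": 0.7974, "fair": 0.907, "priv": 0.75},
           {"acc": 2, "rob": 1, "fair": 1, "priv": 1},
           {"acc": 0.85, "rob": 0.85, "fair": 0.85, "priv": 0.85})
```

Even with made-up parameters, the same mechanics reproduce the qualitative outcome the authors report: identical indicators pass under lenient, uniform settings and fail under stricter, domain-weighted ones.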

33 pages, 4219 KB  
Review
Recent Progress in Deep Learning for Chest X-Ray Report Generation
by Mounir Salhi and Moulay A. Akhloufi
BioMedInformatics 2026, 6(1), 3; https://doi.org/10.3390/biomedinformatics6010003 - 9 Jan 2026
Abstract
Chest X-ray radiology report generation is a challenging task that involves techniques from medical natural language processing and computer vision. This paper provides a comprehensive overview of recent progress. The annotation protocols, structure, linguistic characteristics, and size of the main public datasets are presented and compared. Understanding their properties is necessary for benchmarking and generalization. Both clinically oriented and natural language generation metrics are included in the model evaluation strategies to assess their performance. Their respective strengths and limitations are discussed in the context of radiology applications. Recent deep learning approaches for report generation and their different architectures are also reviewed. Common trends such as instruction tuning and the integration of clinical knowledge are also considered. Recent works show that current models still have limited factual accuracy, with a score of 72% reported with expert evaluations, and poor performance on rare pathologies and lateral views. The most important challenges are the limited dataset diversity, weak cross-institution generalization, and the lack of clinically validated benchmarks for evaluating factual reliability. Finally, we discuss open challenges related to data quality, clinical factuality, and interpretability. This review aims to support researchers by synthesizing the current literature and identifying key directions for developing more clinically reliable report generation systems. Full article

28 pages, 3824 KB  
Article
Comparison Between Early and Intermediate Fusion of Multimodal Techniques: Lung Disease Diagnosis
by Ahad Alloqmani and Yoosef B. Abushark
AI 2026, 7(1), 16; https://doi.org/10.3390/ai7010016 - 7 Jan 2026
Abstract
Early and accurate diagnosis of lung diseases is essential for effective treatment and patient management. Conventional diagnostic models trained on a single data type often miss important clinical information. This study explored a multimodal deep learning framework that integrates cough sounds, chest radiographs (X-rays), and computed tomography (CT) scans to enhance disease classification performance. Two fusion strategies, early and intermediate fusion, were implemented and evaluated against three single-modality baselines. The dataset was collected from different sources. Each dataset underwent preprocessing steps, including noise removal, grayscale conversion, image cropping, and class balancing, to ensure data quality. Convolutional neural network (CNN) and Extreme Inception (Xception) architectures were used for feature extraction and classification. The results show that multimodal learning achieves superior performance compared with single-modality models. The intermediate fusion model achieved 98% accuracy, while the early fusion model reached 97%. In contrast, single CXR and CT models achieved 94%, and the cough sound model achieved 79%. These results confirm that multimodal integration, particularly intermediate fusion, offers a more reliable framework for automated lung disease diagnosis. Full article
(This article belongs to the Section Medical & Healthcare AI)

11 pages, 566 KB  
Article
Impact of the COVID-19 Pandemic on Emergency Department Practices for Cardiopulmonary Symptoms
by Ki Hong Kim, Jae Yun Jung, Hayoung Kim, Joong Wan Park and Yong Hee Lee
J. Clin. Med. 2026, 15(2), 458; https://doi.org/10.3390/jcm15020458 - 7 Jan 2026
Abstract
Objectives: The purpose of this study was to evaluate the trends and changes in the time to medical imaging in the emergency department (ED) for patients with cardiopulmonary symptoms during the coronavirus disease 2019 (COVID-19) pandemic. Methods: This retrospective observational study was conducted using the clinical database of a tertiary academic teaching hospital. Patients with cardiopulmonary symptoms (chest pain, dyspnea, palpitation, and syncope) who visited an adult ED between January 2018 and December 2021 were included. The primary outcome was the time to medical imaging, including chest X-ray (CXR), chest computed tomography (CT), and focused cardiac ultrasound (FOCUS). The primary exposure was the date of the ED visit during the COVID-19 pandemic (from 1 March 2020 to 31 December 2021). Results: Among the 28,213 patients, 17,260 (61.2%) were in the pre-COVID-19 group, and 10,953 (38.8%) were in the COVID-19 group. The time to medical imaging was delayed in the COVID-19 group compared with the pre-COVID-19 group: the delay was 9 min for FOCUS, 6 min for CXR, and 115 min for chest CT. Conclusions: We found that the time to medical imaging for patients with cardiopulmonary symptoms who visited the ED was significantly delayed during the COVID-19 pandemic. Full article
(This article belongs to the Section Emergency Medicine)

22 pages, 1755 KB  
Article
Knowledge-Augmented Adaptive Mechanism for Radiology Report Generation
by Shuo Yang and Hengliang Tan
Mathematics 2026, 14(1), 173; https://doi.org/10.3390/math14010173 - 2 Jan 2026
Abstract
Radiology report generation, which aims to relieve the heavy workload of radiologists and reduce the risks of misdiagnosis and overlooked diagnoses, is of great significance in current clinical medicine. Most existing methods mainly formulate radiology report generation as a problem similar to image captioning. Nevertheless, in the medical domain, these data-driven methods are plagued by two key issues: the insufficient utilization of expert knowledge and visual–textual biases. To solve these problems, this study presents a novel knowledge-augmented adaptive mechanism (KAM) for radiology report generation. In detail, our KAM first introduces two distinct types of medical knowledge: prior knowledge, which is input-independent and reflects the accumulated expertise of radiologists, and posterior knowledge, which is input-dependent and mimics the process of identifying abnormalities, thereby mitigating the issue of visual–textual bias. To optimize the utilization of both types of knowledge, this study develops a knowledge-augmented adaptive mechanism, which integrates the visual characteristics of radiological images with prior and posterior knowledge into the decoding process. Experimental evaluations on the publicly accessible IU X-ray and MIMIC-CXR datasets indicate that our approach is on par with the current common methods. Full article

22 pages, 1494 KB  
Article
Leveraging Large-Scale Public Data for Artificial Intelligence-Driven Chest X-Ray Analysis and Diagnosis
by Farzeen Khalid Khan, Waleed Bin Tahir, Mu Sook Lee, Jin Young Kim, Shi Sub Byon, Sun-Woo Pi and Byoung-Dai Lee
Diagnostics 2026, 16(1), 146; https://doi.org/10.3390/diagnostics16010146 - 1 Jan 2026
Abstract
Background: Chest X-ray (CXR) imaging is crucial for diagnosing thoracic abnormalities; however, the rising demand burdens radiologists, particularly in resource-limited settings. Method: We used large-scale, diverse public CXR datasets with noisy labels to train general-purpose deep learning models (ResNet, DenseNet, EfficientNet, and DLAD-10) for multi-label classification of thoracic conditions. Uncertainty quantification was incorporated to assess model reliability. Performance was evaluated on both internal and external validation sets, with analyses of data scale, diversity, and fine-tuning effects. Result: EfficientNet achieved the highest overall area under the receiver operating characteristic curve (0.8944) with improved sensitivity and F1-score. Moreover, as training data volume increased—particularly using multi-source datasets—both diagnostic performance and generalizability were enhanced. Although larger datasets reduced predictive uncertainty, conditions such as tuberculosis remained challenging due to limited high-quality samples. Conclusions: General-purpose deep learning models can achieve robust CXR diagnostic performance when trained on large-scale, diverse public datasets despite noisy labels. However, further targeted strategies are needed for underrepresented conditions. Full article
(This article belongs to the Special Issue Machine-Learning-Based Disease Diagnosis and Prediction)

20 pages, 7543 KB  
Article
Contrastive Learning with Feature Space Interpolation for Retrieval-Based Chest X-Ray Report Generation
by Zahid Ur Rahman, Gwanghyun Yu, Lee Jin and Jin Young Kim
Appl. Sci. 2026, 16(1), 470; https://doi.org/10.3390/app16010470 - 1 Jan 2026
Abstract
Automated radiology report generation from chest X-rays presents a critical challenge in medical imaging. Traditional image-captioning models struggle with clinical specificity and rare pathologies. Recently, contrastive vision language learning has emerged as a robust alternative that learns joint visual–textual representations. However, applying contrastive learning (CL) to radiology remains challenging due to severe data scarcity. Prior work has employed input space augmentation, but these approaches incur computational overhead and risk distorting diagnostic features. This work presents CL with feature space interpolation for retrieval (CLFIR), a novel CL framework operating on learned embeddings. The method generates interpolated pairs in the feature embedding space by mixing original and shuffled embeddings in batches using a mixing coefficient λ ∼ U(0.85, 0.99). This approach increases batch diversity via synthetic samples, addressing the limitations of CL on medical data while preserving diagnostic integrity. Extensive experiments demonstrate state-of-the-art performance across critical clinical validation tasks. For report generation, CLFIR achieves BLEU-1/ROUGE/METEOR scores of 0.51/0.40/0.26 (Indiana University [IU] X-ray) and 0.45/0.34/0.22 (MIMIC-CXR). Moreover, CLFIR excels at image-to-text retrieval with R@1 scores of 4.14% (IU X-ray) and 24.3% (MIMIC-CXR) and achieves 0.65 accuracy in zero-shot classification on the CheXpert5×200 dataset, surpassing the established vision-language models. Full article
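The feature-space mixing step described in the abstract can be sketched in a few lines; the toy batch and the pairing-by-shuffle details below are illustrative assumptions, with λ drawn uniformly from [0.85, 0.99] as stated:

```python
import random

def interpolate_batch(embeddings, lam_range=(0.85, 0.99), seed=0):
    """Sketch of feature-space interpolation: mix each embedding with a
    shuffled partner from the same batch. Because lambda stays close to 1,
    the synthetic sample remains near the original embedding, which is how
    the method aims to preserve diagnostic integrity."""
    rng = random.Random(seed)
    idx = list(range(len(embeddings)))
    rng.shuffle(idx)                      # shuffled pairing within the batch
    mixed = []
    for emb, j in zip(embeddings, idx):
        lam = rng.uniform(*lam_range)     # lambda ~ U(0.85, 0.99)
        partner = embeddings[j]
        mixed.append([lam * a + (1 - lam) * b for a, b in zip(emb, partner)])
    return mixed

batch = [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]]   # toy 2-D embeddings
synthetic = interpolate_batch(batch)
```

Appending the `synthetic` vectors to the batch is what increases the number of contrastive pairs without touching the input images.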

23 pages, 4108 KB  
Article
Adaptive Normalization Enhances the Generalization of Deep Learning Model in Chest X-Ray Classification
by Jatsada Singthongchai and Tanachapong Wangkhamhan
J. Imaging 2026, 12(1), 14; https://doi.org/10.3390/jimaging12010014 - 28 Dec 2025
Abstract
This study presents a controlled benchmarking analysis of min–max scaling, Z-score normalization, and an adaptive preprocessing pipeline that combines percentile-based ROI cropping with histogram standardization. The evaluation was conducted across four public chest X-ray (CXR) datasets and three convolutional neural network architectures under controlled experimental settings. The adaptive pipeline generally improved accuracy, F1-score, and training stability on datasets with relatively stable contrast characteristics while yielding limited gains on MIMIC-CXR due to strong acquisition heterogeneity. Ablation experiments showed that histogram standardization provided the primary performance contribution, with ROI cropping offering complementary benefits, and the full pipeline achieving the best overall performance. The computational overhead of the adaptive preprocessing was minimal (+6.3% training-time cost; 5.2 ms per batch). Friedman–Nemenyi and Wilcoxon signed-rank tests confirmed that the observed improvements were statistically significant across most dataset–model configurations. Overall, adaptive normalization is positioned not as a novel algorithmic contribution, but as a practical preprocessing design choice that can enhance cross-dataset robustness and reliability in chest X-ray classification workflows. Full article
(This article belongs to the Special Issue Advances in Machine Learning for Medical Imaging Applications)
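A minimal sketch of the normalization idea, assuming a simple percentile-clip-and-rescale step; the paper's full pipeline also includes ROI cropping and histogram standardization, which are omitted here:

```python
def percentile(values, q):
    """Nearest-rank percentile (q in 0-100) of a flat list."""
    s = sorted(values)
    i = min(len(s) - 1, max(0, round(q / 100 * (len(s) - 1))))
    return s[i]

def adaptive_normalize(image, lo_q=2, hi_q=98):
    """Clip intensities to a percentile window, then min-max rescale to [0, 1].
    Percentile clipping makes the rescaling robust to a few extreme pixels
    (e.g. burned-out markers), unlike plain min-max scaling."""
    flat = [v for row in image for v in row]
    lo, hi = percentile(flat, lo_q), percentile(flat, hi_q)
    if hi == lo:                          # constant image: return all zeros
        return [[0.0 for _ in row] for row in image]
    clip = lambda v: min(max(v, lo), hi)
    return [[(clip(v) - lo) / (hi - lo) for v in row] for row in image]

img = [[0, 50, 255], [30, 200, 120], [10, 90, 60]]   # toy 3x3 "radiograph"
norm = adaptive_normalize(img)
```

The percentile cutoffs (2nd/98th) are illustrative defaults, not the paper's reported settings.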

16 pages, 2601 KB  
Article
Diagnostic Accuracy of an Offline CNN Framework Utilizing Multi-View Chest X-Rays for Screening 14 Co-Occurring Communicable and Non-Communicable Diseases
by Latika Giri, Pradeep Raj Regmi, Ghanshyam Gurung, Grusha Gurung, Shova Aryal, Sagar Mandal, Samyam Giri, Sahadev Chaulagain, Sandip Acharya and Muhammad Umair
Diagnostics 2026, 16(1), 66; https://doi.org/10.3390/diagnostics16010066 - 24 Dec 2025
Abstract
Background: Chest radiography is the most widely used diagnostic imaging modality globally, yet its interpretation is hindered by a critical shortage of radiologists, especially in low- and middle-income countries (LMICs). The interpretation is both time-consuming and error-prone in high-volume settings. Artificial Intelligence (AI) systems trained on public data may lack generalizability to multi-view, real-world, local images. Deep learning tools have the potential to augment radiologists by providing real-time decision support, overcoming these limitations. Objective: We evaluated the diagnostic accuracy of a deep learning-based convolutional neural network (CNN) trained on multi-view, hybrid (public and local) datasets for detecting thoracic abnormalities in chest radiographs of adults presenting to a tertiary hospital, operating in offline mode. Methodology: A CNN was pretrained on public datasets (Vin Big, NIH) and fine-tuned on a local dataset from a Nepalese tertiary hospital, comprising frontal (PA/AP) and lateral views from emergency, ICU, and outpatient settings. The dataset was annotated by three radiologists for 14 pathologies. Data augmentation simulated poor-quality images and artifacts. Performance was evaluated on a held-out test set (N = 522) against radiologists’ consensus, measuring AUC, sensitivity, specificity, mean average precision (mAP), and reporting time. Deployment feasibility was tested via PACS integration and standalone offline mode. Results: The CNN achieved an overall AUC of 0.86 across 14 abnormalities, with 68% sensitivity, 99% specificity, and 0.93 mAP. Colored bounding boxes improved clarity when multiple pathologies co-occurred (e.g., cardiomegaly with effusion). The system performed effectively on PA, AP, and lateral views, including poor-quality ER/ICU images. Deployment testing confirmed seamless PACS integration and offline functionality.
Conclusions: The CNN trained on adult CXRs performed reliably in detecting key thoracic findings across varied clinical settings. Its robustness to image quality, integration of multiple views and visualization capabilities suggest it could serve as a useful aid for triage and diagnosis. Full article
(This article belongs to the Special Issue 3rd Edition: AI/ML-Based Medical Image Processing and Analysis)

27 pages, 22957 KB  
Article
Lung Disease Classification Using Deep Learning and ROI-Based Chest X-Ray Images
by Antonio Nadal-Martínez, Lidia Talavera-Martínez, Marc Munar and Manuel González-Hidalgo
Technologies 2026, 14(1), 1; https://doi.org/10.3390/technologies14010001 - 19 Dec 2025
Abstract
Deep learning applied to chest X-ray (CXR) images has gained wide attention for its potential to improve diagnostic accuracy and accessibility in resource-limited healthcare settings. This study compares two deep learning strategies for lung disease classification: a Two-Stage approach that first detects abnormalities before classifying specific pathologies and a Direct multiclass classification approach. Using a curated database of CXR images covering diverse lung diseases, including COVID-19, pneumonia, pulmonary fibrosis, and tuberculosis, we evaluate the performance of various convolutional neural network architectures, the impact of lung segmentation, and explainability techniques. Our results show that the Two-Stage framework achieves higher diagnostic performance and fewer false positives than the Direct approach. Additionally, we highlight the limitations of segmentation and data augmentation techniques, emphasizing the need for further advancements in explainability and robust model design to support real-world diagnostic applications. Finally, we conduct a complementary evaluation of bone suppression techniques to assess their potential impact on disease classification performance. Full article

22 pages, 2503 KB  
Article
COPD Multi-Task Diagnosis on Chest X-Ray Using CNN-Based Slot Attention
by Wangsu Jeon, Hyeonung Jang, Hongchang Lee and Seongjun Choi
Appl. Sci. 2026, 16(1), 14; https://doi.org/10.3390/app16010014 - 19 Dec 2025
Abstract
This study proposes a unified deep-learning framework for the concurrent classification of Chronic Obstructive Pulmonary Disease (COPD) severity and regression of the FEV1/FVC ratio from chest X-ray (CXR) images. We integrated a ConvNeXt-Large backbone with a Slot Attention mechanism to effectively disentangle and refine disease-relevant features for multi-task learning. Evaluation on a clinical dataset demonstrated that the proposed model with a 5-slot configuration achieved superior performance compared to standard CNN and Vision Transformer baselines. On the independent test set, the model attained an Accuracy of 0.9107, Sensitivity of 0.8603, and Specificity of 0.9324 for three-class severity stratification. Simultaneously, it achieved a Mean Absolute Error (MAE) of 8.2649, a Mean Squared Error (MSE) of 151.4704, and an R² of 0.7591 for FEV1/FVC ratio estimation. Qualitative analysis using saliency maps also suggested that the slot-based approach contributes to attention patterns that are more constrained to clinically relevant pulmonary structures. These results suggest that our slot-attention-based multi-task model offers a robust solution for automated COPD assessment from standard radiographs. Full article
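The regression metrics reported for the FEV1/FVC head (MAE, MSE, R²) can be computed as follows; the toy ratios below are illustrative only:

```python
def regression_metrics(y_true, y_pred):
    """MAE, MSE, and R^2, the three metrics used to score the regression head."""
    n = len(y_true)
    errors = [t - p for t, p in zip(y_true, y_pred)]
    mae = sum(abs(e) for e in errors) / n
    mse = sum(e * e for e in errors) / n
    mean_t = sum(y_true) / n
    ss_tot = sum((t - mean_t) ** 2 for t in y_true)
    r2 = 1 - (mse * n) / ss_tot        # R^2 = 1 - SS_res / SS_tot
    return mae, mse, r2

# Toy FEV1/FVC ratios (percent), invented for the example.
mae, mse, r2 = regression_metrics([70.0, 55.0, 82.0, 64.0],
                                  [68.0, 60.0, 80.0, 66.0])
```

Note that MSE penalizes large errors quadratically, so the paper's MSE of 151.47 alongside an MAE of 8.26 indicates a tail of poorly predicted cases rather than uniform error.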

14 pages, 2342 KB  
Article
Integrating AI with PCR for Tuberculosis Diagnosis: Evaluating a Deep Learning Model for Chest X-Rays
by Wei-Cheng Chiu, Shan-Yueh Chang, Chin Lin, Teng-Wei Chen and Wen-Hui Fang
Bioengineering 2025, 12(12), 1377; https://doi.org/10.3390/bioengineering12121377 - 18 Dec 2025
Abstract
Tuberculosis (TB) remains a major global health challenge, and early, accurate diagnosis is essential for effective disease control. Chest radiography (CXR) is widely used for TB screening because of its accessibility, yet its limited specificity necessitates confirmatory molecular testing such as polymerase chain reaction (PCR) assays. This study aimed to evaluate the diagnostic performance of a deep learning model (DLM) for TB detection using CXR and to compare its predictive accuracy with PCR results, specifically in a low-burden region. A retrospective dataset of CXR images and corresponding PCR findings was obtained from two hospitals. The DLM, based on the CheXzero vision transformer, was trained on a large imaging dataset and evaluated using receiver operating characteristic (ROC) curves and area under the curve (AUC) metrics. Internal and external validation sets assessed sensitivity, specificity, and predictive values, with subgroup analyses according to imaging modality, demographics, and comorbidities. The model achieved an AUC of 0.915 internally and 0.850 externally, maintaining good sensitivity and specificity, though performance declined when limited to PCR-confirmed cases. Accuracy was lower for older adults and those with chronic kidney disease, chronic obstructive pulmonary disease, or heart failure. These findings suggest AI-assisted CXR screening may support TB detection in resource-limited settings, but PCR confirmation remains essential. Full article

17 pages, 741 KB  
Article
Optimization of Case Finding and Preventive Treatment Among Household Contacts of People with Tuberculosis in Zimbabwe
by Tawanda Mapuranga, Collins Timire, Ronald T. Ncube, Sithabiso Dube, Nqobile Mlilo, Cynthia Chiteve, Owen Mugurungi, Fungai Kavenga, Manners Ncube, Nicholas Siziba, Selma Dar Berger, Talent Maphosa, Macarthur Charles, Julia Ershova and Riitta A. Dlodlo
Trop. Med. Infect. Dis. 2025, 10(12), 347; https://doi.org/10.3390/tropicalmed10120347 - 10 Dec 2025
Abstract
Systematic screening of household contacts (HHCs) of people with tuberculosis (TB) and starting them on either TB treatment or tuberculosis preventive treatment (TPT) reduces TB incidence. This project supported HHC management in six health facilities in Zimbabwe through the provision of CXR services, reimbursement of transport costs for HHCs, and provision of fuel and refreshments for healthcare workers involved in contact tracing. We describe TB and TPT cascades among the HHCs of index patients with all forms of TB. We enrolled 251 index patients who listed 794 HHCs: 551 (69%) HHCs of 158 index patients were traced and 520 (94%) screened for TB. Of the 502 who were referred to clinics, 362 (72%) reached the clinic. Among 520 HHCs, 324 (62%) underwent CXR screening and 18 (5%) had CXRs suggestive of TB. The yield of TB was 2.3% (12/520), with CXR detecting eight people who had not reported TB symptoms. Of the 311 who were assessed for TPT eligibility, 126 (41%) started TPT and 119 were assessed for TPT outcomes. Of these, 111 (93%) had successful TPT outcomes. The median times to starting TB treatment and TPT were 7 days and 11 days, respectively. The intervention facilitated timely access to healthcare services and a high yield of TB detection. Full article
(This article belongs to the Special Issue New Perspectives in Tuberculosis Prevention and Control)