
Search Results (1,018)

Search Parameters:
Keywords = chest-X-rays

12 pages, 761 KB  
Article
Evaluation of the ‘qXR’ Software for the Detection of Pulmonary Nodules, Cardiomegaly and Pleural Effusion: A Comparative Analysis in a Latin American General Hospital
by Adriana Anchía-Alfaro, Sebastián Arguedas-Chacón, Georgia Hanley-Vargas, Sofía Suárez-Sánchez, Luis Andrés Aguilar-Castro, Sergio Daniel Seas-Azofeifa, Kal Che Wong Hsu, Diego Quesada-Loría, María Felicia Montero-Arias, Juliana Salas-Segura and Esteban Zavaleta-Monestel
BioMedInformatics 2026, 6(2), 15; https://doi.org/10.3390/biomedinformatics6020015 - 25 Mar 2026
Abstract
Background/Objectives: AI-based tools for chest radiograph interpretation are increasingly used as decision-support systems, yet their performance must be validated in local clinical environments before deployment. This study evaluated the diagnostic performance of qXR (Qure.ai, v3.2) for detecting pulmonary nodules, cardiomegaly, and pleural effusion in adult patients at Hospital Clínica Bíblica, San José, Costa Rica. Methods: Three radiologists independently interpreted 225 chest radiographs, providing the reference standard. qXR outputs were compared against radiologist assessments for each finding. The sensitivity, specificity, Cohen’s kappa, and area under the ROC curve (AUC) were calculated. Due to the convenience-stratified sampling design, predictive values were not used for clinical interpretation. Results: For pulmonary nodules, qXR achieved a sensitivity of 0.71, specificity of 0.90, Cohen’s kappa of 0.51, and AUC of 0.80. For pleural effusion, sensitivity and specificity were both 0.86, with a kappa of 0.63 and AUC of 0.86. Cardiomegaly showed the lowest agreement, with a sensitivity of 0.64, specificity of 0.91, kappa of 0.57, and AUC of 0.77. Conclusions: qXR demonstrated moderate diagnostic agreement with radiologist assessments for pulmonary nodules and pleural effusion, and lower agreement for cardiomegaly under local imaging conditions. These results reflect technical concordance between the AI system and individual radiologists and do not constitute evidence of clinical utility or real-world impact. Context-specific validation is essential prior to integrating AI tools into routine radiological workflows. Full article
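The per-finding metrics reported here (sensitivity, specificity, and Cohen's kappa) are all derived from a 2×2 confusion matrix. A minimal sketch of that arithmetic, using illustrative counts rather than the study's actual data:

```python
def binary_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, and Cohen's kappa from 2x2 confusion counts."""
    n = tp + fp + fn + tn
    sensitivity = tp / (tp + fn)          # true-positive rate
    specificity = tn / (tn + fp)          # true-negative rate
    p_observed = (tp + tn) / n            # raw agreement with the reference
    # Chance agreement from the row/column marginals
    p_chance = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n**2
    kappa = (p_observed - p_chance) / (1 - p_chance)
    return sensitivity, specificity, kappa

# Illustrative counts only (not the qXR confusion matrix)
sens, spec, kappa = binary_metrics(tp=40, fp=10, fn=16, tn=90)
print(round(sens, 2), round(spec, 2), round(kappa, 2))  # 0.71 0.9 0.63
```

AUC, by contrast, requires the model's continuous scores rather than binarized outputs, so it cannot be recovered from a single confusion matrix.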

19 pages, 932 KB  
Article
Stability-Enhanced Pseudo-Multiview Learning via Multiscale Grid Feature Extraction
by Dat Ngo
Mathematics 2026, 14(6), 1085; https://doi.org/10.3390/math14061085 - 23 Mar 2026
Abstract
Pseudo-multiview learning improves classification by integrating complementary feature representations, but its performance degrades as the number of pseudo-views increases due to model collapse and ineffective feature scaling. This paper introduces a multiscale grid architecture that extracts structured, scale-adaptive features to stabilize evidence aggregation in pseudo-multiview learning. The proposed design enables efficient handling of difficult classification scenarios by enforcing balanced multiscale representation and reducing redundancy across pseudo-views. Extensive experiments on challenging real-world datasets, including BreakHis (40×, 100×, 200×, 400×), Oxford-IIIT Pet, and Chest X-ray, demonstrate consistent gains in accuracy and stability over the original pseudo-multiview framework and other baseline models. The results confirm that grid-based multiscale feature extraction provides a reliable means to enhance pseudo-multiview learning, particularly in settings where prior methods struggled to generalize. Full article
(This article belongs to the Special Issue Machine Learning Applications in Image Processing and Computer Vision)

19 pages, 3121 KB  
Systematic Review
Comparative Diagnostic Performance of TST and IGRAs in the Diagnosis of Latent Tuberculosis Infection: A Systematic Review and Diagnostic Meta-Analysis
by Shyamkumar Sriram, Tareq Abualfaraj, Manal Ali Alsharif, Marwa Zalat, Saad Madani Alawfi, Hammad Ali Fadlalmola and Muayad Albadrani
Diagnostics 2026, 16(6), 951; https://doi.org/10.3390/diagnostics16060951 - 23 Mar 2026
Abstract
Background: Patients with latent tuberculosis infection are mainly asymptomatic, but they still carry a notable risk of developing active TB, particularly when the host becomes immunosuppressed. Hence, appropriate diagnosis and management of LTBI are essential. The tuberculin skin test (TST) and interferon-gamma release assays (IGRAs) are among the most commonly utilized methods for detecting LTBI. No consensus has yet been established on which test, TST or IGRA, is more effective, so our study aims to evaluate the diagnostic utility of TST versus IGRA in detecting LTBI. Methods: An extensive literature search was executed in several databases from inception to June 2024. We included all available studies that compared TST versus IGRA concurrently applied to the same study participants, utilizing one of the following proxy reference standards: previous contact with a tuberculosis patient, tuberculosis history, chest X-ray suggestive of tuberculosis, or a combination of them. The sensitivity (SN) and specificity (SP) were estimated with their 95% confidence intervals (CIs). A bivariate random-effects model within the OpenMeta-Analyst software was utilized for data analysis. Results: We included 39 studies, and our primary analysis regarding LTBI revealed that TST has an SN of 0.320 (95% CI [0.254–0.393]) and an SP of 0.808 (95% CI [0.752–0.854]). The IGRA exhibited a higher SN estimated at 0.362 (95% CI [0.295–0.434]) and a lower SP estimated at 0.758 (95% CI [0.700–0.808]). In the adult population, TST consistently showed a lower SN and a higher SP relative to IGRA. Within the pediatric population, however, TST showed higher SN and lower SP when compared to IGRA. TST also showed a lower SN and a higher SP than IGRA in hemodialysis and organ transplant patients. Conclusions: Our diagnostic test meta-analysis revealed that TST was associated with a lower SN and a higher SP than IGRA. Clinicians should interpret these findings with caution, considering the substantial heterogeneity observed across the included studies, the reliance on proxy reference standards, the potential influence of BCG vaccination status, and the considerable overlap in confidence intervals between TST and IGRA estimates across most analyses. Full article
(This article belongs to the Section Diagnostic Microbiology and Infectious Disease)

25 pages, 2531 KB  
Article
FedIHRAS: A Privacy-Preserving Federated Learning Framework for Multi-Institutional Collaborative Radiological Analysis with Integrated Explainability and Automated Clinical Reporting
by André Luiz Marques Serrano, Gabriel Arquelau Pimenta Rodrigues, Guilherme Dantas Bispo, Vinícius Pereira Gonçalves, Geraldo Pereira Rocha Filho, Maria Gabriela Mendonça Peixoto, Rodrigo Bonacin and Rodolfo Ipolito Meneguette
Biomedicines 2026, 14(3), 713; https://doi.org/10.3390/biomedicines14030713 - 19 Mar 2026
Abstract
Background/Objectives: Federated learning has emerged as a promising paradigm for enabling collaborative artificial intelligence in healthcare while preserving data privacy. However, most existing frameworks focus on isolated tasks and lack integrated pipelines that combine classification, segmentation, explainability, and automated clinical reporting. Methods: This study proposes FedIHRAS, a privacy-preserving federated learning framework designed for multi-institutional radiological analysis. The system integrates multi-task deep learning modules, including pathology classification using a modified ResNet-50 backbone, anatomical segmentation, explainability through Grad-CAM, and automated report generation supported by semantic aggregation using SNOMED CT. The framework employs confidence-weighted aggregation, differential privacy mechanisms, and secure aggregation protocols to ensure privacy and robustness across heterogeneous institutional datasets. Results: Experimental evaluation was conducted across four large-scale chest X-ray datasets representing simulated institutional nodes, totaling approximately 874,000 images. FedIHRAS achieved high diagnostic performance with strong cross-institutional generalization and demonstrated improved robustness under non-IID data distributions. Additional experiments showed favorable communication efficiency, effective privacy–utility trade-offs, and strong agreement with expert radiologist assessments. Conclusion: The proposed FedIHRAS framework demonstrates that federated learning can support scalable, privacy-preserving, and clinically meaningful radiological AI systems. By integrating multi-task learning, explainability, and automated reporting within a unified federated architecture, the framework addresses key limitations of existing approaches and contributes to the development of collaborative AI in healthcare. Full article
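The confidence-weighted aggregation described above amounts to a weighted FedAvg step over client parameters. The function below is a minimal illustration with hypothetical institutional clients, not the FedIHRAS implementation:

```python
def aggregate(client_params, confidences):
    """Confidence-weighted average of client parameter vectors (FedAvg-style).

    client_params: list of equal-length parameter vectors, one per institution
    confidences:   one non-negative confidence score per client
    """
    total = sum(confidences)
    weights = [c / total for c in confidences]          # normalize to sum to 1
    dim = len(client_params[0])
    return [sum(w * p[i] for w, p in zip(weights, client_params))
            for i in range(dim)]

# Three hypothetical institutional nodes; the third is trusted twice as much
global_params = aggregate(
    [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]],
    confidences=[1.0, 1.0, 2.0],
)
print(global_params)  # [3.5, 4.5]
```

Plain FedAvg weights clients by their sample counts; confidence weighting substitutes per-client confidence scores, which the paper combines with differential privacy and secure aggregation on top of this averaging step.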
(This article belongs to the Special Issue Imaging Technology for Human Diseases)

13 pages, 1350 KB  
Article
Imaging Pathways in Pediatric Thoracic Trauma: FAST-First Triage and Selective CT Escalation in Clinical Practice
by Emil Radu Iacob, Emil Robert Stoicescu, Valentina Adriana Marcu, Roxana Stoicescu, Vlad Predescu, Narcis Flavius Tepeneu, Maria Corina Stanciulescu, Mihai Cristian Neagu, Adrian Georgescu and Calin Marius Popoiu
Diagnostics 2026, 16(6), 889; https://doi.org/10.3390/diagnostics16060889 - 17 Mar 2026
Abstract
Background/Objectives: Pediatric thoracic trauma requires prompt stabilization and timely imaging; however, actual sequencing and escalation triggers are infrequently delineated at the pathway level. The aim of this study was to analyze imaging pathways observed in routine clinical practice at our institution and to outline a preliminary escalation framework integrating injury mechanism, clinical severity, and initial ultrasound findings. Methods: A retrospective cohort study was conducted at the “Louis Țurcanu” Clinical Emergency Hospital for Children, Timișoara, Romania, including 66 children admitted with primary thoracic trauma between January 2022 and December 2024. Clinical trajectory markers (transfer-in, ICU admission, length of stay) and imaging utilization/sequencing (FAST, CXR, CT, MRI/CTA) were extracted. We divided injuries into two groups: bony (such as fractures of the clavicle or scapula) and non-bony. CT escalation was defined as a chest CT performed on admission. Fisher’s exact and Mann–Whitney U tests were used for comparative analyses. Results: FAST was performed in all patients but was infrequently positive. Imaging followed heterogeneous but structured patterns, most commonly FAST with CXR, with or without CT. A substantial subgroup underwent CT without prior radiography. CT escalation was associated with fracture-pattern injuries and higher-acuity trajectories (transfer-in and ICU admission), as well as prolonged hospital stays. Pathway-level assessment demonstrated that CT escalation effectively captured bony injury patterns, whereas FAST effectively triaged ICU-level trajectories. Conclusions: Pediatric thoracic trauma imaging functioned as a selective escalation system: FAST served as a universal bedside entry step, and CT operated as an injury pattern- and acuity-linked severity gate. Making this escalation logic explicit may support standardization while still limiting radiation exposure. Full article
(This article belongs to the Special Issue Recent Developments and Future Trends in Thoracic Imaging)

13 pages, 10127 KB  
Article
Fine-Tuned Segment Anything Model with Low-Rank Adaptation for Chest X-Ray Images
by Saeed S. Alahmari, Michael R. Gardner, Fawaz Alqahtani and Tawfiq Salem
Diagnostics 2026, 16(6), 847; https://doi.org/10.3390/diagnostics16060847 - 12 Mar 2026
Abstract
Background: This paper investigates the use of the Segment Anything Model (SAM) for chest X-ray (CXR) image segmentation, with a focus on improving its performance using low-rank adaptation (LoRA). Methods: We evaluate three versions of SAM: two zero-shot methods (using coordinate and bounding box prompts) and a fine-tuned SAM using LoRA. To support these approaches, we also trained two standard convolutional neural networks (CNNs), U-Net and DeepLabv3+, to generate draft lung segmentations that serve as input prompts for the SAM methods. Our fine-tuning approach uses LoRA to add lightweight trainable adapters within the Transformer blocks of the SAM, allowing only a small subset of parameters to be updated. The rest of the SAM remains frozen, helping preserve its pre-trained knowledge while reducing memory and computational needs. We tested all models on a dataset of CXR images labeled for COVID-19, viral pneumonia, and normal cases. Results: Results show that fine-tuned SAM with LoRA outperforms zero-shot SAM methods and CNN baselines in terms of segmentation accuracy and efficiency. Conclusions: This demonstrates the potential of combining LoRA with SAM for practical and effective medical image segmentation. Full article
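The adapter mechanism described (a frozen base weight plus a trainable low-rank update) reduces to a few lines of linear algebra. A numpy sketch with hypothetical dimensions, not the paper's configuration:

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, r = 64, 64, 4               # hypothetical sizes; rank r << d

W0 = rng.normal(size=(d_out, d_in))      # frozen pre-trained weight
A = rng.normal(size=(r, d_in)) * 0.01    # trainable down-projection
B = np.zeros((d_out, r))                 # trainable up-projection, zero-init
alpha = 8.0                              # LoRA scaling hyperparameter

def lora_forward(x):
    """Frozen base path plus scaled low-rank update: W0 x + (alpha/r) B A x."""
    return W0 @ x + (alpha / r) * (B @ (A @ x))

x = rng.normal(size=d_in)
# With B zero-initialized, the adapted layer matches the frozen layer exactly,
# so fine-tuning starts from the pre-trained behaviour.
assert np.allclose(lora_forward(x), W0 @ x)
```

Only A and B are updated during fine-tuning, i.e. r·(d_in + d_out) parameters per adapted layer instead of d_in·d_out, which is what keeps memory and compute low while the rest of SAM stays frozen.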
(This article belongs to the Special Issue Artificial Intelligence in Biomedical Image Analysis 2026)

23 pages, 5448 KB  
Article
Evidence-Guided Diagnostic Reasoning for Pediatric Chest Radiology Based on Multimodal Large Language Models
by Yuze Zhao, Qing Wang, Yingwen Wang, Ruiwei Zhao, Rui Feng and Xiaobo Zhang
J. Imaging 2026, 12(3), 111; https://doi.org/10.3390/jimaging12030111 - 6 Mar 2026
Abstract
Pediatric respiratory diseases are a leading cause of hospital admissions and childhood mortality worldwide, highlighting the critical need for accurate and timely diagnosis to support effective treatment and long-term care. Chest radiography remains the most widely used imaging modality for pediatric pulmonary assessment. Consequently, reliable AI-assisted diagnostic methods are essential for alleviating the workload of clinical radiologists. However, most existing deep learning-based approaches are data-driven and formulate diagnosis as a black-box image classification task, resulting in limited interpretability and reduced clinical trustworthiness. To address these challenges, we propose a trustworthy two-stage diagnostic paradigm for pediatric chest X-ray diagnosis that closely aligns with the radiological workflow in clinical practice, in which the diagnosis procedure is constrained by evidence. In the first stage, a vision–language model fine-tuned on pediatric data identifies radiological findings from chest radiographs, producing structured and interpretable diagnostic evidence. In the second stage, a multimodal large language model integrates the radiograph, extracted findings, patient demographic information, and external medical domain knowledge via a retrieval-augmented generation (RAG) mechanism to generate the final diagnosis. Experiments conducted on the VinDr-PCXR dataset demonstrate that our method achieves 90.1% diagnostic accuracy, 70.9% F1-score, and 82.5% AUC, representing up to a 13.1% increase in diagnostic accuracy over state-of-the-art baselines. These results validate the effectiveness of combining multimodal reasoning with explicit medical evidence and domain knowledge, and indicate the strong potential of the proposed approach for trustworthy pediatric radiology diagnosis. Full article
(This article belongs to the Section AI in Imaging)

16 pages, 731 KB  
Systematic Review
Misdiagnosis and Coinfection of Localized Pulmonary Histoplasmosis with Pulmonary Tuberculosis: A Systematic Review of Published Cases
by Sem Samuel Surja, Donnatella Valentina, Anita Devi Krishnan Thantry, Jonathan Christianto Subagya, Edho Yuwono, Darmadi Darmadi, Nisa Fauziah, Robiatul Adawiyah and Retno Wahyuningsih
J. Fungi 2026, 12(3), 190; https://doi.org/10.3390/jof12030190 - 6 Mar 2026
Abstract
Pulmonary histoplasmosis is often misdiagnosed as or coinfected with pulmonary tuberculosis (TB). This study aims to analyze the misdiagnosis or co-occurrence of published cases of pulmonary TB and pulmonary histoplasmosis. Cases of histoplasmosis with dissemination were excluded, as disseminated disease affects organs beyond the lungs. A systematic search was conducted in the PubMed, EBSCOhost, ProQuest, BioRxiv, and MedRxiv databases. Twenty-seven articles were included, covering a total of 51 cases. Males were predominantly affected, with a median age of 54 years. Exposure to caves and farming occupations were identified as the primary sources of infection (61.9%). The most common clinical symptoms were fever (80%) and cough (82.5%). Laboratory tests revealed culture positivity in 77.1% of cases, with sputum being the most frequently used specimen. In proven pulmonary histoplasmosis, antibody tests were positive in 18 out of 24 cases. Chest X-rays commonly showed cavities, infiltrates, and nodules, with an increase in nodular pattern in recent cases. The number of pulmonary nodules detected was higher on chest computed tomography (CT). Radiologic abnormality could occur in any lung region. This review suggests the potential for misdiagnosis and/or coinfection of pulmonary histoplasmosis and pulmonary TB. The combination of clinical suspicion, radiological findings, and antibody and/or antigen testing could improve the diagnosis of pulmonary histoplasmosis. Full article
(This article belongs to the Section Fungal Pathogenesis and Disease Control)

17 pages, 4773 KB  
Article
Optimizing Radiographic Diagnosis Through Signal-Balanced Convolutional Models
by Sakina Juzar Neemuchwala, Raja Hashim Ali, Qamar Abbas, Talha Ali Khan, Ambreen Shahnaz and Iftikhar Ahmed
J. Imaging 2026, 12(3), 108; https://doi.org/10.3390/jimaging12030108 - 4 Mar 2026
Abstract
Accurate interpretation of chest radiographs is central to the early diagnosis and management of pulmonary disorders. This study introduces an explainable deep learning framework that integrates biomedical signal fidelity analysis with transfer learning to enhance diagnostic reliability and transparency. Using the publicly available COVID-19 Radiography Dataset (21,165 chest X-ray images across four classes: COVID-19, Viral Pneumonia, Lung Opacity, and Normal), three architectures, namely baseline Convolutional Neural Network (CNN), ResNet-50, and EfficientNetB3, were trained and evaluated under varied class-balancing and hyperparameter configurations. Signal preservation was quantitatively verified using the Structural Similarity Index Measure (SSIM = 0.93 ± 0.02), ensuring that preprocessing retained key diagnostic features. Among all models, ResNet-50 achieved the highest classification accuracy (93.7%) and macro-AUC = 0.97 (class-balanced), whereas EfficientNetB3 demonstrated superior generalization with reduced parameter overhead. Gradient-weighted Class Activation Mapping (Grad-CAM) visualizations confirmed anatomically coherent activations aligned with pathological lung regions, substantiating clinical interpretability. The integration of signal fidelity metrics with explainable deep learning presents a reproducible and computationally efficient framework for medical image analysis. These findings highlight the potential of signal-aware transfer learning to support reliable, transparent, and resource-efficient diagnostic decision-making in radiology and other imaging-based medical domains. Full article
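SSIM, used above to verify that preprocessing preserved diagnostic detail, is normally computed over local windows and averaged. The single-window global form below is a simplified sketch of the underlying formula, not the study's exact computation:

```python
import numpy as np

def global_ssim(x, y, data_range=1.0):
    """Single-window SSIM over whole images.

    The standard metric averages this quantity over small local windows;
    computing it once globally is a simplified illustration only.
    """
    c1 = (0.01 * data_range) ** 2        # stabilizing constants from the
    c2 = (0.03 * data_range) ** 2        # original SSIM formulation
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    num = (2 * mx * my + c1) * (2 * cov + c2)
    den = (mx**2 + my**2 + c1) * (vx + vy + c2)
    return num / den

img = np.random.default_rng(1).random((64, 64))
# Identical images score exactly 1; any distortion lowers the score.
assert np.isclose(global_ssim(img, img), 1.0)
```

A reported SSIM of 0.93 ± 0.02 thus says the preprocessed radiographs stayed structurally very close to the originals on this 0-to-1 scale.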
(This article belongs to the Section AI in Imaging)

34 pages, 1394 KB  
Systematic Review
A Systematic Review of Cross-Population Shifts in Medical Imaging Analysis with Deep Learning
by Aminu Musa, Rajesh Prasad, Peter Onwualu and Monica Hernandez
Big Data Cogn. Comput. 2026, 10(3), 76; https://doi.org/10.3390/bdcc10030076 - 4 Mar 2026
Abstract
Deep learning has achieved expert-level performance in medical imaging analysis. However, models often fail to generalize across patient populations due to cross-population domain shifts: distributional differences arising from demographic variability, variations in imaging protocols, scanner hardware, and differences in disease prevalence. This challenge limits real-world deployment and can widen health inequities. This review systematically examines the nature, causes, and impact of cross-population domain shift in deep learning-based medical imaging analysis. We analyzed 50 peer-reviewed studies from 2020 to 2025, evaluating the proposed methodologies for handling population shifts, the datasets employed, and the metrics used to assess performance. Our findings demonstrate that performance degradation ranged from 10% to 25% when models were tested on unseen populations, emphasizing the substantial impact of domain shifts on model generalizability. The literature reveals that mitigation strategies broadly fall into two categories: data-centric approaches, such as augmentation and harmonization, and model-centric approaches, including domain adaptation, transfer learning, adversarial learning, multi-task learning, and continual learning. While domain adaptation and transfer learning are the most widely used, their performance gains across populations remain modest, ranging from 5% to 15%, and are not supported by external validation. Our synthesis reveals a significant reliance on large, publicly available datasets from limited regions, with an underrepresentation of data from low- and middle-income countries. Evaluation practices are inconsistent, with few studies employing standardized external test sets. This review provides a structured taxonomy of mitigation techniques, a refined analysis of domain shift characteristics, and an in-depth critique of methodological challenges. We highlight the urgent need for more geographically and demographically inclusive datasets, adaptable modeling techniques, and standardized evaluation protocols to enable accurate and equitable AI-driven diagnostics across diverse populations. Finally, we outline future research directions to guide the development of robust, generalizable, and fair models for medical imaging analysis. Full article

29 pages, 3428 KB  
Article
Scalable Unimodal and Multimodal Deep Learning for Multi-Label Chest Disease Detection: A Comparative Analysis
by Diğdem Orhan, Murat Ucan, Reda Alhajj and Mehmet Kaya
Diagnostics 2026, 16(5), 734; https://doi.org/10.3390/diagnostics16050734 - 1 Mar 2026
Abstract
Background/Objectives: Early and accurate diagnosis of chest diseases is a critical challenge in clinical practice, particularly in scenarios where multiple pathologies may coexist. While deep learning-based medical image analysis has shown promising results, most existing studies rely on unimodal data and fixed-scale datasets, limiting their generalizability and clinical relevance. In this study, we present a comprehensive comparative analysis of unimodal and multimodal deep learning models for multi-label chest disease classification using chest X-ray images and associated clinical metadata. Methods: A total of twelve models were developed based on three widely used convolutional neural network architectures—ResNet50, EfficientNetB3, and DenseNet121—under both unimodal (image-only) and multimodal (image + clinical data) configurations. To systematically investigate the impact of data scale, experiments were conducted on two distinct versions: the Random Sample of NIH Chest X-ray Dataset and the NIH Chest X-ray Dataset, containing 5606 and 121,120 samples, respectively. Model performance was evaluated using label-based Area Under the Receiver Operating Characteristic Curve (AUROC) metrics. Results: Experimental results demonstrate that multimodal fusion consistently outperforms unimodal approaches across all architectures and data scales, with more pronounced improvements observed in large-scale settings. Furthermore, increasing data volume leads to improved generalization and reduced performance variance, particularly for rare pathologies. Conclusions: These findings highlight the effectiveness of multimodal, multi-label learning in enhancing diagnostic accuracy and support the development of robust clinical decision support systems for chest disease assessment. Full article
(This article belongs to the Special Issue Artificial Intelligence and Big Data in Digestive Healthcare)

25 pages, 1678 KB  
Review
Artificial Intelligence for Pulmonary Abnormality Detection in Chest X-Ray Imaging: A Detailed Review of Methods, Datasets and Future Directions
by G. Parra-Cabrera, J. J. Jiménez-Delgado and F. D. Pérez-Cano
Technologies 2026, 14(3), 147; https://doi.org/10.3390/technologies14030147 - 28 Feb 2026
Abstract
Chest X-ray (CXR) imaging remains the most widely used radiological modality for assessing pulmonary and cardiothoracic disease, yet its interpretation is inherently constrained by tissue superposition, subtle radiographic findings and marked inter-observer variability. Recent advances in artificial intelligence (AI) have driven significant progress in automated CXR analysis, supported by large public datasets, evolving annotation strategies and increasingly expressive deep learning architectures. This review presents a comprehensive synthesis of approaches for pulmonary abnormality detection, encompassing convolutional neural networks, transformers, multimodal and vision–language models and self-supervised representation learning. We critically discuss their strengths, limitations and vulnerability to label noise, domain shift and shortcut learning. In parallel, we examine dataset properties, annotation practices, robustness challenges, explainability methods and the heterogeneity of evaluation protocols that hinder fair comparison and clinical translation. Building on these observations, the review identifies key future directions, including foundation models, multimodal integration, federated and domain-generalized training, longitudinal modeling, synthetic data generation and standardized clinical evaluation frameworks. By integrating methodological and clinical perspectives, this work offers an up-to-date reference for researchers and clinicians and outlines a roadmap toward reliable, interpretable and clinically deployable AI systems for chest radiography. Full article
(This article belongs to the Section Information and Communication Technologies)

29 pages, 5858 KB  
Article
MRID: Modeling Radiological Image Differences for Disease Progression Reasoning via Multi-Task Self-Supervision
by Yongtao Hao, Pandong Wang, Yanming Chen and Haifeng Zhao
Electronics 2026, 15(5), 997; https://doi.org/10.3390/electronics15050997 - 27 Feb 2026
Viewed by 237
Abstract
Automated radiology report generation has become a prominent research topic in medical multimodal learning. However, most existing approaches primarily focus on single-image interpretation and rarely address the task of tracking disease progression across longitudinal chest X-rays. This task presents two major challenges: accurately localizing pathological changes between temporally paired images, and effectively translating visual difference representations into clinically meaningful textual descriptions. To address these challenges, we propose MRID (Modeling Radiological Image Differences for Disease Progression Reasoning), a multi-task self-supervised framework that follows a pretraining–finetuning paradigm. MRID leverages multiple complementary self-supervised objectives to jointly achieve (1) intra-modal spatial alignment of organs and pathological regions across image pairs, and (2) cross-modal semantic alignment between visual difference representations and radiology report embeddings. Furthermore, we introduce a simple yet effective data augmentation strategy to alleviate the imbalance of disease progression categories. Extensive experiments conducted on the Longitudinal-MIMIC and MS-CXR-T datasets demonstrate that MRID effectively captures fine-grained disease progression patterns. In addition, the proposed framework achieves competitive performance on single-image radiology report generation, further highlighting its strong capability in modeling chest X-ray semantics. Full article
(This article belongs to the Special Issue AI-Driven Medical Image/Video Processing)
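The cross-modal semantic alignment objective described in the MRID abstract (matching visual difference representations with report embeddings) is commonly implemented as a contrastive, InfoNCE-style loss. The sketch below is an illustrative NumPy reconstruction under that assumption, not MRID's actual training code; the embedding dimensions and temperature value are arbitrary placeholders.

```python
import numpy as np

def info_nce_loss(diff_emb, report_emb, temperature=0.1):
    """Contrastive alignment: row i of diff_emb should match row i of report_emb."""
    # L2-normalize so dot products become cosine similarities.
    d = diff_emb / np.linalg.norm(diff_emb, axis=1, keepdims=True)
    r = report_emb / np.linalg.norm(report_emb, axis=1, keepdims=True)
    logits = d @ r.T / temperature                  # (N, N) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)     # numerical stability
    # Softmax over each row; the matching report sits on the diagonal.
    probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
    return float(-np.log(np.diag(probs)).mean())

# Perfectly aligned image-difference/report pairs give a near-zero loss,
# while shuffled (mismatched) pairs are heavily penalized.
emb = np.eye(4)
aligned = info_nce_loss(emb, emb)
shuffled = info_nce_loss(emb, np.roll(emb, 1, axis=0))
```

Minimizing such a loss pulls each image-difference embedding toward its own report embedding and away from the other reports in the batch, which is the usual mechanism behind cross-modal alignment pretraining.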

28 pages, 2771 KB  
Article
Improving Tree-Based Lung Disease Classification from Chest X-Ray Images Using Deep Feature Representations
by Abdulaziz A. Alsulami, Qasem Abu Al-Haija, Rayed Alakhtar, Huda Alsobhi, Rayan A. Alsemmeari, Badraddin Alturki and Ahmad J. Tayeb
Bioengineering 2026, 13(3), 267; https://doi.org/10.3390/bioengineering13030267 - 25 Feb 2026
Viewed by 420
Abstract
Healthcare systems worldwide face increasing pressure to deliver accurate, affordable, and scalable diagnostic services while maintaining long-term sustainability. Chest X-ray screening is considered one of the most cost-effective methods for detecting lung disease. However, many deep learning approaches are computationally intensive and difficult to interpret, which limits their adoption in high-throughput, resource-constrained clinical settings. This study proposes a hybrid CNN–tree framework for automated lung disease classification from chest X-ray images, targeting COVID-19, pneumonia, tuberculosis, lung cancer, and normal cases. To ensure robustness and generalization, four publicly available chest X-ray datasets from different sources are merged into a unified five-class dataset, which introduces realistic variations in imaging conditions and patient populations. A ResNet-18 model is fine-tuned to extract domain-specific deep feature representations. Feature dimensionality and redundancy are reduced using Principal Component Analysis, while class imbalance is addressed through the Synthetic Minority Over-sampling Technique. The resulting compact feature vectors are used to train interpretable tree-based classifiers, including Decision Tree, Random Forest, and XGBoost. Experiments conducted using five-fold stratified cross-validation demonstrate substantial and consistent performance gains. When trained on fine-tuned and preprocessed deep features, all evaluated tree-based classifiers achieve weighted F1-scores between 0.977 and 0.982, with a significant reduction in inter-class confusion. In addition, the proposed framework maintains low per-sample inference latency, which supports energy-efficient and scalable deployment. These results indicate that combining deep feature learning with interpretable tree-based models provides a practical and reliable solution for sustainable chest X-ray screening in real-world clinical environments. Full article
(This article belongs to the Section Biosignal Processing)
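The final stage of the pipeline this abstract describes (deep features, then PCA, then an interpretable tree-based classifier under five-fold stratified cross-validation) can be sketched with scikit-learn. In the sketch below, random synthetic vectors stand in for the fine-tuned ResNet-18 features, a plain Random Forest stands in for the evaluated tree ensembles, and the SMOTE resampling step is omitted; it illustrates the workflow, not the authors' implementation, and all sizes and hyperparameters are arbitrary.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

# Synthetic stand-in for 512-dim deep features over five classes
# (COVID-19, pneumonia, tuberculosis, lung cancer, normal).
X, y = make_classification(n_samples=500, n_features=512, n_informative=40,
                           n_classes=5, random_state=0)

# PCA compresses the deep features before the interpretable tree model.
clf = make_pipeline(PCA(n_components=50, random_state=0),
                    RandomForestClassifier(n_estimators=200, random_state=0))

# Five-fold cross-validation; scikit-learn stratifies folds automatically
# for classifiers, matching the study's evaluation protocol.
scores = cross_val_score(clf, X, y, cv=5)
```

Because the tree ensemble only ever sees the 50 PCA components rather than the full feature map, per-sample inference stays cheap, which is the efficiency argument the abstract makes.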

18 pages, 1221 KB  
Review
Contemporary Review of Clinical Features, Multi-Modality Imaging, and Management of Pericardial Cysts
by Ankit Agrawal, Mohab Elnashar, Keshav Garg, Ahmad Mustafa, Akiva Rosenzveig, Aro Daniela Arockiam, Elio Haroun, Rishabh Khurana, Allan L. Klein and Tom Kai Ming Wang
J. Clin. Med. 2026, 15(4), 1585; https://doi.org/10.3390/jcm15041585 - 18 Feb 2026
Viewed by 605
Abstract
Pericardial cysts (PCs) are rare, benign congenital abnormalities that are encountered as mediastinal lesions. Despite their rarity, they remain clinically important due to their potential to mimic other mediastinal or cardiac pathologies and their capacity, in select cases, to cause significant complications. PCs are typically identified incidentally on imaging studies such as chest X-ray or transthoracic echocardiography, as most patients remain asymptomatic throughout their lives. When symptoms do occur, they are often nonspecific and related to compression of adjacent structures. Serious complications—including infection, rupture, and, rarely, cardiac tamponade—have been reported, underscoring the importance of accurate diagnosis and appropriate follow-up. Definitive characterization of PCs is best achieved using advanced imaging modalities such as cardiac computed tomography or cardiac magnetic resonance imaging, which help differentiate PCs from other mediastinal masses. While many PCs remain stable or even regress spontaneously, intervention may be warranted for symptomatic patients, enlarging cysts, or when the diagnosis remains uncertain. Therapeutic options include percutaneous aspiration, which carries a risk of recurrence, and surgical resection, which offers definitive treatment with excellent outcomes. This review provides a comprehensive overview of the etiology, clinical manifestations, diagnostic evaluation, differential diagnosis, complications, and management strategies for PCs. Full article