Search Results (1,030)

Search Parameters:
Keywords = ChestX-ray14

17 pages, 2710 KB  
Article
DPA-HiVQA: Enhancing Structured Radiology Reporting with Dual-Path Cross-Attention
by Ngoc Tuyen Do, Minh Nguyen Quang and Hai Van Pham
Mach. Learn. Knowl. Extr. 2026, 8(5), 113; https://doi.org/10.3390/make8050113 (registering DOI) - 24 Apr 2026
Abstract
Structured radiology reporting can improve clinical decision support by standardizing clinical findings into hierarchical formats. However, answering the thousands of questions in structured report templates about clinical findings is prohibitively time-consuming, which can limit clinical adoption. Furthermore, early medical VQA datasets primarily focused on free-text, independent question–answer pairs; a recent dataset, Rad-ReStruct, introduced hierarchical VQA, but the accompanying model still relies heavily on flattened embedding representations and single-path text–image fusion mechanisms that inadequately handle complex hierarchical dependencies in responses. In this paper, we propose DPA-HiVQA (Dual-Path Cross-Attention for Hierarchical VQA), addressing these limitations through two key contributions: (1) a multi-scale image embedding that combines global semantic embeddings with patch-level spatial features from a domain-specific BioViL encoder; (2) a dual-path cross-attention mechanism enabling simultaneous holistic semantic understanding and fine-grained spatial reasoning. Evaluated on the Rad-ReStruct benchmark, the model substantially outperforms the established baseline, improving overall F1-score and Level 3 F1-score by 21.2% and 31.9%, respectively. The proposed model demonstrates that dual-path cross-attention architectures can effectively connect holistic semantic understanding and fine-grained spatial detail, paving the way for practical AI-assisted structured reporting systems that reduce radiologist burden while maintaining diagnostic accuracy. Full article
23 pages, 4572 KB  
Article
LLaMA-XR: A Novel Framework for Radiology Report Generation Using LLaMA and QLoRA Fine Tuning
by Md. Zihad Bin Jahangir, Muhammad Ashad Kabir, Sumaiya Akter, Israt Jahan and Minh Chau
Bioengineering 2026, 13(5), 493; https://doi.org/10.3390/bioengineering13050493 - 23 Apr 2026
Abstract
Background: The goal of automated radiology report generation is to help radiologists in their task of creating descriptive reports from chest radiographs. However, the process of creating coherent and contextually accurate reports has been challenging, mainly due to the intricacies of medical language and the need to correlate visual data with textual descriptions. Methods: This study presents LLaMA-XR, a novel framework that integrates Meta LLaMA 3.1 Large Language Model with DenseNet-121-based image embeddings and Quantized Low-Rank Adaptation (QLoRA) fine-tuning. Results: The experiment conducted on the IU X-ray dataset demonstrates that LLaMA-XR outperforms a range of state-of-the-art methods. It achieves an ROUGE-L score of 0.433 and a METEOR score of 0.336, establishing new performance benchmarks in the domain. Conclusions: These results underscore LLaMA-XR’s potential as an effective artificial intelligence system for automated radiology reporting, offering enhanced performance. Full article
(This article belongs to the Special Issue AI-Driven Imaging and Analysis for Biomedical Applications)
50 pages, 1737 KB  
Article
Quantum Image Representation with Enhanced Intensity Preservation and Fidelity (IP-QIR)
by Vrushali Nikam, Shirish Sane and Manish Motghare
Quantum Rep. 2026, 8(2), 37; https://doi.org/10.3390/quantum8020037 - 22 Apr 2026
Abstract
Quantum image representation (QIR) is the basic idea behind quantum image processing: it defines how a classical image is converted into quantum states so that it can be processed on quantum computers. The commonly used QIR models are the Flexible Representation of Quantum Images (FRQI) and the Novel Enhanced Quantum Representation (NEQR). Although these approaches highlight the potential of quantum-based image encoding, their practical applicability on Noisy Intermediate-Scale Quantum (NISQ) devices remains limited. In this paper, we propose an intensity-preserving quantum image representation (IP-QIR) scheme that aims to maintain accurate grayscale intensity information while significantly reducing quantum resource usage. The proposed method employs a controlled rotation-based encoding strategy, where pixel intensities are embedded into the measurement probability of a single intensity qubit, and spatial information is represented using position qubits. To further enhance feasibility on near-term quantum hardware, the framework operates on small image patches instead of full-resolution images, thereby reducing circuit depth and overall complexity. The performance of the proposed IP-QIR approach is evaluated through IBM Qiskit simulations on three types of grayscale images: synthetic image patches, synthetic aperture radar (SAR) images, and medical tuberculosis (TB) chest X-ray images. Experimental results demonstrate that IP-QIR achieves better intensity preservation than FRQI and NEQR, with fidelity values reaching up to 84.12% for both SAR and medical datasets. In addition, IP-QIR represents a 4×4 image patch using only five qubits, which significantly reduces the qubit requirement compared to NEQR, while still preserving high reconstruction accuracy. Full article
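The qubit budget quoted in this abstract can be sanity-checked with a short sketch. Assuming the controlled-rotation encoding stores each pixel intensity I as an RY angle chosen so that the intensity qubit measures |1⟩ with probability I/255 (a standard construction; the paper's exact circuit is not given here), a 4×4 patch needs log₂(16) = 4 position qubits plus 1 intensity qubit:

```python
import numpy as np

def ip_qir_angles(patch):
    """Map each pixel of a grayscale patch to an RY rotation angle so that
    the probability of measuring the intensity qubit in |1> equals I/255."""
    norm = patch.astype(float) / 255.0          # normalized intensities in [0, 1]
    return 2.0 * np.arcsin(np.sqrt(norm))       # P(|1>) = sin^2(theta/2) = I/255

patch = np.array([[0, 64, 128, 255]] * 4)       # toy 4x4 grayscale patch
angles = ip_qir_angles(patch)

# Qubit count from the abstract: log2(16) = 4 position qubits + 1 intensity qubit.
n_position = int(np.log2(patch.size))
print(n_position + 1)                           # 5 qubits for a 4x4 patch

# Reconstruction check: recover intensities from the encoded probabilities.
recovered = np.round(np.sin(angles / 2.0) ** 2 * 255).astype(int)
print(np.array_equal(recovered, patch))         # True
```

This is only a classical model of the encoding; circuit depth, patching strategy, and noise behavior on real NISQ hardware are what the paper actually evaluates.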
19 pages, 378 KB  
Article
Mislabel Detection in Multi-Label Chest X-Rays via Prototype-Weighted Neighborhood Consistency in CoAtNet Embedding Space
by Ariel Gamboa, Mauricio Araya and Camilo Sotomayor
Appl. Sci. 2026, 16(9), 4067; https://doi.org/10.3390/app16094067 - 22 Apr 2026
Abstract
Large-scale chest X-ray (CXR) datasets often rely on report-derived or weak labels, introducing missing and incorrect annotations that can degrade downstream models and limit trust. We study training-free mislabel detection in multi-label CXRs by scoring neighborhood label consistency in a fixed embedding space. Using the NIH Chest X-ray Kaggle sample (5606 CXRs), we extract intermediate CoAtNet features and obtain 64-dimensional embeddings with a frozen CoAtNet backbone and a lightweight refinement head. On top of these embeddings, we compare kNN consistency baselines with distance weighting and label-set similarity against LPV-DW-CS, clustered prototype voting weighted by distance and cluster support. We evaluate three synthetic label-noise regimes with review budgets matched to the corruption rate: random single-label (5% and 20%), boundary-noise (20% corruption within the lowest-density 20% subset), and disjoint-label replacement (20% within that subset). LPV-DW-CS yields the highest downstream macro-AUROC after filtering top-ranked samples (up to 0.8860), while kNN variants achieve higher Recall@budget at the same review rates (up to 99.44%). An image-only expert Likert review of top-ranked real samples finds substantial label-set inconsistencies (54.1% for LPV-DW-CS-280-A; 60.5% for KNN-DW-LSS), supporting neighborhood-consistency ranking as a practical, training-free tool for targeted dataset auditing. Full article
(This article belongs to the Special Issue Computer-Vision-Based Biomedical Image Processing)
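The neighborhood-consistency idea behind the kNN baselines can be sketched in a few lines. This is an illustration, not the authors' implementation: the function name, the choice of k, the inverse-distance weighting, and the Jaccard label-set similarity are assumptions consistent with the abstract's description of distance weighting and label-set similarity.

```python
import numpy as np

def knn_label_consistency(emb, labels, k=3):
    """Distance-weighted neighborhood label-set consistency (a sketch of the
    KNN-DW-LSS idea). Lower scores flag samples whose label sets disagree
    with their nearest neighbors in embedding space."""
    n = len(emb)
    scores = np.zeros(n)
    for i in range(n):
        d = np.linalg.norm(emb - emb[i], axis=1)
        d[i] = np.inf                                  # exclude the sample itself
        nbrs = np.argsort(d)[:k]
        w = 1.0 / (d[nbrs] + 1e-8)                     # closer neighbors count more
        jac = [len(labels[i] & labels[j]) / max(len(labels[i] | labels[j]), 1)
               for j in nbrs]                          # label-set (Jaccard) similarity
        scores[i] = np.average(jac, weights=w)
    return scores

emb = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1], [5.0, 5.0]])
labels = [{"Effusion"}, {"Effusion"}, {"Effusion"}, {"Nodule"}]  # last one disagrees
scores = knn_label_consistency(emb, labels)
print(int(np.argmin(scores)))                          # 3: the suspect annotation
```

In the paper's setting the embeddings come from a frozen CoAtNet backbone and the lowest-scoring samples are forwarded to a human review budget; this sketch only shows the scoring step.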
16 pages, 8390 KB  
Article
An Adaptive Deep Learning Framework for Multi-Label Chest X-Ray Diagnosis Using a Hybrid CNN–Transformer Architecture and Class-Wise Ensemble Fusion
by Chi-Feng Hsieh, Hsu-Hsia Peng, Yu-Hsiang Tsai, Chia-Ching Chang, Cheng-Hsuan Juan, Hsian-He Hsu and Chun-Jung Juan
Diagnostics 2026, 16(8), 1227; https://doi.org/10.3390/diagnostics16081227 - 20 Apr 2026
Abstract
Background/Objectives: To develop and externally evaluate a deep learning framework for multi-label thoracic disease classification on chest radiographs using hybrid convolutional neural network (CNN)–transformer architectures, hierarchical scalar-weighted fusion, and ensemble strategies. Methods: This retrospective, multi-center study utilized publicly available datasets: NIH ChestX-ray14 (112,120 images; 30,805 patients) for model development and internal testing, and CheXpert (223,415 images) plus ChestX-Det10 (3578 images) for external validation. Nine CNN–transformer hybrids were systematically benchmarked, and the proposed model incorporated multi-scale DenseNet121 features, scalar-weighted fusion, positional encodings, and cross-attention. Four post hoc ensemble methods were explored, including a class-wise Top-3 Grid Search. Performance was evaluated using AUROC as the primary metric, along with precision, recall, F1-score, accuracy, specificity, positive predictive value, and negative predictive value. Statistical comparisons were performed using bootstrapped resampling and appropriate parametric or non-parametric tests. Results: On the NIH internal test set, the proposed hybrid model achieved a mean AUROC of 0.8495, which was significantly higher than that of the DenseNet121 baseline (0.8441, p = 0.032). The Top-3 Grid Search ensemble further improved internal performance, achieving a mean AUROC of 0.8577 (p < 0.00001). On external validation, the ensemble consistently outperformed DenseNet121, achieving mean AUROCs of 0.6500 on CheXpert (p < 0.001) and 0.6592 on ChestX-Det10 (p < 0.001). Per-class analysis revealed significant improvements for clinically important conditions such as cardiomegaly, mass, and pneumothorax. Grad-CAM visualizations demonstrated the strong alignment of predicted abnormalities with radiologically relevant regions.
Conclusions: This CNN–transformer framework, particularly when combined with class-wise ensemble strategies, provided modest but statistically significant improvements in multi-label chest X-ray classification. External validation suggested partial generalizability across datasets, although performance remained moderate under domain shift. Full article
(This article belongs to the Special Issue Artificial Intelligence in Diagnostic Imaging)
21 pages, 1958 KB  
Article
Adapter-Based Vision Transformer for Cross Domain Few-Shot Classification Using Prototypical Networks
by Sahar Gull and Juntae Kim
Appl. Sci. 2026, 16(8), 3994; https://doi.org/10.3390/app16083994 - 20 Apr 2026
Abstract
Cross-domain few-shot learning (CD-FSL) remains challenging in medical imaging, where labeled data are scarce and source–target domain gaps are often large due to modality differences. In particular, existing few-shot learning methods rely on source–target domain similarity, which limits their effectiveness in cross-modality settings such as MRI-to-CT transfer. To address this problem, this paper proposes an adapter-based Vision Transformer framework for cross-domain few-shot brain tumor classification. Lightweight adapter modules are inserted into a pretrained Vision Transformer to enable parameter-efficient domain adaptation without fine-tuning the entire backbone. In addition, a Prototypical Network is employed to construct class prototypes from limited labeled samples, while a prototype-level Maximum Mean Discrepancy (MMD) loss is introduced to align feature distributions across domains. Unlike prior approaches, the proposed framework introduces a unified prototype-level alignment strategy within an episodic learning paradigm, enabling direct class-wise cross-modal alignment. This design improves generalization under large modality gaps and limited labeled data by jointly optimizing representation learning and domain adaptation. The proposed framework is evaluated on MRI-to-CT brain tumor classification as well as several heterogeneous cross-domain benchmarks, including Chest X-ray, ISIC, CropDisease, and EuroSAT. Experimental results demonstrate that the proposed method achieves competitive performance compared to existing few-shot learning baselines, showing strong robustness under significant domain shifts. Full article
(This article belongs to the Special Issue Artificial Intelligence Techniques for Medical Data Analytics)
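The two components this abstract combines, class prototypes and a prototype-level MMD alignment loss, can be sketched numerically. The Gaussian kernel, its bandwidth, and the way prototypes are compared below are illustrative assumptions; the paper's exact kernel and weighting are not specified here.

```python
import numpy as np

def prototypes(embeddings, labels, n_classes):
    """Class prototypes: the mean embedding of each class's support samples,
    as in Prototypical Networks."""
    return np.stack([embeddings[labels == c].mean(axis=0) for c in range(n_classes)])

def rbf(a, b, gamma=1.0):
    """Gaussian (RBF) kernel matrix between two sets of vectors."""
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def prototype_mmd(src_proto, tgt_proto, gamma=1.0):
    """Squared MMD between source- and target-domain prototypes: zero when the
    two prototype sets coincide, positive when the domains are misaligned."""
    return (rbf(src_proto, src_proto, gamma).mean()
            + rbf(tgt_proto, tgt_proto, gamma).mean()
            - 2 * rbf(src_proto, tgt_proto, gamma).mean())

rng = np.random.default_rng(0)
src = rng.normal(size=(10, 8))                         # 2 classes x 5 support samples
y = np.repeat(np.arange(2), 5)
p_src = prototypes(src, y, 2)
print(prototype_mmd(p_src, p_src))                     # 0.0: identical domains
print(prototype_mmd(p_src, prototypes(src + 3.0, y, 2)) > 0)  # True: shifted domain
```

In the paper this loss is minimized jointly with the episodic classification objective, so the adapters learn features whose class prototypes agree across modalities (e.g., MRI and CT).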
19 pages, 1466 KB  
Article
D2MNet: Difference-Aware Decoupling and Multi-Prompt Learning for Medical Difference Visual Question Answering
by Lingge Lai, Weihua Ou, Jianping Gou and Zhonghua Liu
J. Imaging 2026, 12(4), 162; https://doi.org/10.3390/jimaging12040162 - 9 Apr 2026
Abstract
Difference visual question answering (Diff-VQA) aims to answer questions by identifying and reasoning about differences between medical images. Existing methods often rely on simple feature subtraction or fusion to model image differences, while overlooking the asymmetric descriptive requirements of changed and unchanged cases and providing limited task-specific guidance to pretrained language decoders. To address these limitations, we propose D2MNet (Difference-aware Decoupling and Multi-prompt Network), a framework for medical Diff-VQA that combines change-aware reasoning with prompt-guided answer generation. Specifically, a Change Analysis Module (CAM) predicts whether a change is present and produces a binary change-aware prompt; a Difference-Aware Module (DAM) uses dual attention to capture fine-grained difference features; and a multi-prompt learning mechanism (MLM) injects question-aware, change-aware, and learnable prompts into the language decoder to improve contextual alignment and response generation. Experiments on the MIMIC-DiffVQA benchmark show that D2MNet achieves a CIDEr score of 2.907 ± 0.040, outperforming the strongest baseline, ReAl (2.409), under the same evaluation setting. These results demonstrate the effectiveness of the proposed design on benchmark medical Diff-VQA and suggest its potential for assisting difference-aware medical answer generation. Full article
(This article belongs to the Section Medical Imaging)
11 pages, 784 KB  
Article
Chest Radiography Use in Hospitalized Children with Acute Respiratory Tract Infections: A Baseline Analysis for Imaging Optimization
by Roxana Axinte, Sorin Axinte, Elena Tătăranu, Laura Ion, Adina Mihaela Frenți, Florin Filip, Gabriela Burțilă, Liliana Anchidin-Norocel and Smaranda Diaconescu
Children 2026, 13(4), 505; https://doi.org/10.3390/children13040505 - 3 Apr 2026
Abstract
Background: Pediatric respiratory infections represent a leading cause of emergency department (ED) visits and hospitalizations. Chest X-rays are frequently used in their diagnostic evaluation, despite guideline recommendations advocating restrictive imaging strategies, particularly in young children with uncomplicated disease. Excessive imaging raises concerns regarding cumulative radiation exposure and inefficient resource utilization. Objectives: To quantify potentially unnecessary chest radiography use in hospitalized pediatric patients with respiratory infections and to identify age-related and diagnostic patterns suitable for targeted imaging optimization interventions. Methods: We conducted a retrospective observational study analyzing pediatric patients presenting to the ED of a tertiary county hospital in Romania over a period of 12 months. Data regarding respiratory diagnoses, hospitalization status, patient age, and chest radiography utilization were extracted from electronic medical records. Results: Among more than 26,000 pediatric emergency presentations, 4139 children required hospitalization, of whom 1212 were diagnosed with respiratory infections. A total of 3414 chest radiographs were performed, with the highest imaging burden observed in children aged 0–4 years. Repeated imaging was common in interstitial pneumonia, bronchiolitis, and bronchial hyperreactivity. A strong negative correlation was identified between patient age and imaging frequency (r = −0.70, p < 0.001). Conclusions: Thoracic radiographs are disproportionately used in young children with respiratory infections, particularly in conditions with limited imaging indications. These findings provide an essential baseline for the development of targeted quality improvement interventions aimed at reducing unnecessary pediatric imaging. Full article
(This article belongs to the Special Issue Improving Respiratory Care for Children)
11 pages, 925 KB  
Article
Cardiac Implantable Electronic Device Lead Perforation: A 25-Year Single-Center Experience
by Sameer Al-Maisary, Matthias Karck, Mario Jesus Guzman-Ruvalcaba, Rawa Arif and Gabriele Romano
J. Clin. Med. 2026, 15(7), 2705; https://doi.org/10.3390/jcm15072705 - 2 Apr 2026
Abstract
Background: Cardiac implantable electronic device (CIED) lead perforation is a rare but potentially catastrophic complication. As global device implantations increase, understanding the clinical spectrum and optimal management of this complication is essential. This study characterizes the clinical presentation, diagnostic strategies, and outcomes of lead perforation over a 25-year period. Methods: A retrospective analysis was conducted on 32 patients diagnosed with CIED lead perforation between 2000 and 2025 at a high-volume center. Perforations were classified by timing: acute (<24 h), subacute (1–30 days), and chronic (>30 days). Data included demographics, comorbidities, imaging modalities, and procedural interventions. Results: The mean patient age was 76.0 ± 11.7 years, with a mean body mass index (BMI) of 25.5 ± 3.4 kg/m2. Subacute presentation was the most frequent (59.3%, n = 19), followed by acute (28.1%, n = 9) and chronic (12.5%, n = 4) cases. The right ventricle was the primary site of perforation (90.6%). While chest X-rays served as an initial screening tool in 62.5% of cases, diagnosis relied on multimodal imaging, with Computed Tomography (CT) providing definitive confirmation in 31.3% of the cohort, particularly when lead parameters remained stable. Management was risk-stratified based on hemodynamic status. The majority of patients (71.9%, n = 23) underwent successful transvenous lead removal via simple traction. However, 25% (n = 8) presented with hemodynamic instability, and 21.9% (n = 7) suffered from cardiac tamponade. These high-risk cases required surgical intervention, including sternotomy (n = 4), thoracotomy (n = 2), or pericardiotomy (n = 3). Notably, 62.5% of hemodynamically unstable patients were on oral anticoagulants. All patients survived to discharge, with no in-hospital mortality. The median length of hospital stay was 3 days. Conclusions: CIED lead perforation often presents subacutely with subtle clinical signs. 
CT imaging has emerged as the gold standard for definitive diagnosis. While percutaneous transvenous removal is safe and effective for stable patients, immediate surgical backup is vital, as patients—particularly those on anticoagulation—can deteriorate rapidly. Full article
14 pages, 241 KB  
Article
Patterns of Radiation Therapy During the COVID-19 Pandemic: Results from the Multicenter, Cross-Sectoral Registry of the German National Pandemic Cohort Network (NAPKON)
by Jörg Andreas Müller, Ramsia Geisler, Janne Vehreschild, Shimita Raquib, Katharina Appel, Charlotte Flasshove, Steffi Ulrike Pigorsch, Sina Pütz, Christian Rafael Torres Reyes, Christoph Römmele, Margarete Scherer, Christoph Stellbrink and Daniel Medenwald
Radiation 2026, 6(2), 13; https://doi.org/10.3390/radiation6020013 - 1 Apr 2026
Abstract
Background: Cancer patients receiving or having received radiotherapy (RT) represent a clinically vulnerable group during the COVID-19 pandemic. However, systematic data on their clinical course, comorbidities, and vaccination status are limited. The German National Pandemic Cohort Network (NAPKON), established to systematically collect comprehensive clinical data on COVID-19 patients nationwide, provides a unique opportunity to address this gap. This study aimed to describe radiation therapy patterns and COVID-19-related clinical characteristics among patients documented within the NAPKON Cross-Sectoral Platform (SUEP). Methods: This multicenter, descriptive analysis was conducted within the framework of the German National Pandemic Cohort Network (NAPKON). All patients with documented RT and confirmed SARS-CoV-2 infection were identified in the SUEP database. RT was classified relative to the documented infection date as occurring before, during, or after infection. Demographic, clinical, laboratory, imaging, and vaccination data were extracted and analyzed descriptively. Due to the small sample size, no correlation or multivariable analyses were performed. Results: A total of n = 90 patients were included in the analysis. The median age was 65 years (range 22–90), and 56% were male. Most patients (93%) received one course of RT, most frequently targeting specific organ systems (54%), while total body irradiation was performed in 4%. The median radiation dose was 45 Gy (IQR 30–60). Among 68 patients with evaluable timing information, RT had been administered before infection in 53 patients (77.9%), during infection in 3 patients (4.4%), and after infection in 12 patients (17.6%). At the time of SARS-CoV-2 detection, 76% of patients experienced a phase without complications, 19% a phase with complications, and 2% a critical phase. The majority of vaccinated individuals had received Comirnaty (BioNTech/Pfizer; 80%). 
COVID-19-typical findings were identified in 18% of chest X-rays and 27% of CT scans. Clinical and laboratory characteristics showed no substantial differences by hospital length of stay. Conclusions: Patients with documented RT and SARS-CoV-2 infection in the NAPKON registry predominantly experienced mild or moderate COVID-19 courses and showed a relatively high vaccination uptake. However, due to the descriptive study design and the absence of a control group, these findings should not be interpreted as being attributable to RT itself but rather as a characterization of this registry cohort. Importantly, the cohort mainly comprised patients with a history of RT before SARS-CoV-2 infection, whereas only a small minority received RT during infection. Although the analysis was descriptive and limited by missing data, it demonstrates the feasibility and scientific value of integrating oncologic subcohorts within national pandemic research networks. Continued longitudinal analyses will be essential to further characterize outcomes of patients with cancer and RT in future pandemics. Full article
29 pages, 3941 KB  
Article
Explainable Deep Learning for Thoracic Radiographic Diagnosis: A COVID-19 Case Study Toward Clinically Meaningful Evaluation
by Divine Nicholas-Omoregbe, Olamilekan Shobayo, Obinna Okoyeigbo, Mansi Khurana and Reza Saatchi
Electronics 2026, 15(7), 1443; https://doi.org/10.3390/electronics15071443 - 30 Mar 2026
Abstract
COVID-19 still poses a global public health challenge, exerting pressure on radiology services. Chest X-ray (CXR) imaging is widely used for respiratory assessment due to its accessibility and cost-effectiveness. However, its interpretation is often challenging because of subtle radiographic features and inter-observer variability. Although recent deep learning (DL) approaches have shown strong performance in automated CXR classification, their black-box nature limits interpretability. This study proposes an explainable deep learning framework for COVID-19 detection from chest X-ray images. The framework incorporates anatomically guided preprocessing, including lung-region isolation, contrast-limited adaptive histogram equalization (CLAHE), bone suppression, and feature enhancement. A novel four-channel input representation was constructed by combining lung-isolated soft-tissue images with frequency-domain opacity maps, vessel enhancement maps, and texture-based features. Classification was performed using a modified Xception-based convolutional neural network, while Gradient-weighted Class Activation Mapping (Grad-CAM) was employed to provide visual explanations and enhance interpretability. The framework was evaluated on the publicly available COVID-19 Radiography Database, achieving an accuracy of 95.3%, an AUC of 0.983, and a Matthews Correlation Coefficient of approximately 0.83. Threshold optimisation improved sensitivity, reducing missed COVID-19 cases while maintaining high overall performance. Explainability analysis showed that model attention was primarily focused on clinically relevant lung regions. Full article
(This article belongs to the Special Issue Image Processing Based on Convolution Neural Network: 2nd Edition)
28 pages, 2379 KB  
Article
Decision-Aware Vision Mamba with Context-Guided Slot Mixing for Chest X-Ray Screening and Culture-Based Hierarchical Tuberculosis Classification
by Wangsu Jeon, Hyeonung Jang, Hongchang Lee, Chanho Park, Jiwon Lyu and Seongjun Choi
Sensors 2026, 26(7), 2100; https://doi.org/10.3390/s26072100 - 27 Mar 2026
Abstract
Distinguishing Active from Inactive Tuberculosis (TB) on Chest X-rays presents a clinical challenge due to overlapping radiological signs. This study introduces Vision Mamba CGSM, a deep learning framework integrating a State Space Model (SSM) backbone with a Context-Guided Slot Mixing (CGSM) module. The SSM captures global anatomical context, while the CGSM module isolates subtle pathological features by applying localized spatial attention. We validated the model using a hierarchical diagnostic scheme covering Normal, Pneumonia, Active TB, and Inactive TB. Experimental evaluations demonstrate an accuracy of 92.96% and a Youden Index of 79.55% on the independent test set. In the specific binary classification of Active vs. Inactive TB, the model recorded a specificity of 97.04%, outperforming standard baseline architectures including ResNet152 and ViT-B. Additional validations on external datasets confirm the consistent generalization of the proposed feature extraction mechanism. Full article
(This article belongs to the Section Sensing and Imaging)
12 pages, 761 KB  
Article
Evaluation of the ‘qXR’ Software for the Detection of Pulmonary Nodules, Cardiomegaly and Pleural Effusion: A Comparative Analysis in a Latin American General Hospital
by Adriana Anchía-Alfaro, Sebastián Arguedas-Chacón, Georgia Hanley-Vargas, Sofía Suárez-Sánchez, Luis Andrés Aguilar-Castro, Sergio Daniel Seas-Azofeifa, Kal Che Wong Hsu, Diego Quesada-Loría, María Felicia Montero-Arias, Juliana Salas-Segura and Esteban Zavaleta-Monestel
BioMedInformatics 2026, 6(2), 15; https://doi.org/10.3390/biomedinformatics6020015 - 25 Mar 2026
Abstract
Background/Objectives: AI-based tools for chest radiograph interpretation are increasingly used as decision-support systems, yet their performance must be validated in local clinical environments before deployment. This study evaluated the diagnostic performance of qXR (Qure.ai, v3.2) for detecting pulmonary nodules, cardiomegaly, and pleural effusion in adult patients at Hospital Clínica Bíblica, San José, Costa Rica. Methods: Three radiologists independently interpreted 225 chest radiographs, providing the reference standard. qXR outputs were compared against radiologist assessments for each finding. The sensitivity, specificity, Cohen’s kappa, and area under the ROC curve (AUC) were calculated. Due to the convenience-stratified sampling design, predictive values were not used for clinical interpretation. Results: For pulmonary nodules, qXR achieved a sensitivity of 0.71, specificity of 0.90, Cohen’s kappa of 0.51, and AUC of 0.80. For pleural effusion, sensitivity and specificity were both 0.86, with a kappa of 0.63 and AUC of 0.86. Cardiomegaly showed the lowest agreement, with a sensitivity of 0.64, specificity of 0.91, kappa of 0.57, and AUC of 0.77. Conclusions: qXR demonstrated moderate diagnostic agreement with radiologist assessments for pulmonary nodules and pleural effusion, and lower agreement for cardiomegaly under local imaging conditions. These results reflect technical concordance between the AI system and individual radiologists and do not constitute evidence of clinical utility or real-world impact. Context-specific validation is essential prior to integrating AI tools into routine radiological workflows. Full article
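The agreement statistic used in this study, Cohen's kappa, corrects observed agreement between two binary raters (here, qXR versus a radiologist) for agreement expected by chance. A minimal sketch, using hypothetical counts rather than the study's data:

```python
def cohens_kappa(both_pos, ai_only, rad_only, both_neg):
    """Cohen's kappa for two binary raters (e.g. AI system vs. radiologist)."""
    n = both_pos + ai_only + rad_only + both_neg
    p_observed = (both_pos + both_neg) / n
    # Chance agreement from each rater's marginal positive rate
    ai_pos = (both_pos + ai_only) / n
    rad_pos = (both_pos + rad_only) / n
    p_chance = ai_pos * rad_pos + (1 - ai_pos) * (1 - rad_pos)
    return (p_observed - p_chance) / (1 - p_chance)

# Hypothetical 2x2 agreement table over 225 radiographs
k = cohens_kappa(both_pos=40, ai_only=10, rad_only=15, both_neg=160)  # ≈ 0.69
```

Kappa values around 0.5–0.6, as reported for qXR, are conventionally read as moderate agreement, which matches the paper's framing of "technical concordance" rather than clinical validation.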

19 pages, 932 KB  
Article
Stability-Enhanced Pseudo-Multiview Learning via Multiscale Grid Feature Extraction
by Dat Ngo
Mathematics 2026, 14(6), 1085; https://doi.org/10.3390/math14061085 - 23 Mar 2026
Abstract
Pseudo-multiview learning improves classification by integrating complementary feature representations, but its performance degrades as the number of pseudo-views increases due to model collapse and ineffective feature scaling. This paper introduces a multiscale grid architecture that extracts structured, scale-adaptive features to stabilize evidence aggregation in pseudo-multiview learning. The proposed design enables efficient handling of difficult classification scenarios by enforcing balanced multiscale representation and reducing redundancy across pseudo-views. Extensive experiments on challenging real-world datasets, including BreakHis (40×, 100×, 200×, 400×), Oxford-IIIT Pet, and Chest X-ray, demonstrate consistent gains in accuracy and stability over the original pseudo-multiview framework and other baseline models. The results confirm that grid-based multiscale feature extraction provides a reliable means to enhance pseudo-multiview learning, particularly in settings where prior methods struggled to generalize. Full article
(This article belongs to the Special Issue Machine Learning Applications in Image Processing and Computer Vision)

19 pages, 3121 KB  
Systematic Review
Comparative Diagnostic Performance of TST and IGRAs in the Diagnosis of Latent Tuberculosis Infection: A Systematic Review and Diagnostic Meta-Analysis
by Shyamkumar Sriram, Tareq Abualfaraj, Manal Ali Alsharif, Marwa Zalat, Saad Madani Alawfi, Hammad Ali Fadlalmola and Muayad Albadrani
Diagnostics 2026, 16(6), 951; https://doi.org/10.3390/diagnostics16060951 - 23 Mar 2026
Abstract
Background: Patients with latent tuberculosis infection are mainly asymptomatic, but they still carry a notable risk of developing active TB, particularly when the host becomes immunosuppressed. Hence, appropriate diagnosis and management for LTBI are essential. The tuberculin skin test (TST) and interferon-gamma release assays (IGRAs) are among the most commonly utilized methods for detecting LTBI. Until now, no agreement has been established regarding the most effective diagnostic test, either TST or IGRA, so our study aims to evaluate the diagnostic utility of TST versus IGRA in detecting LTBI. Methods: An extensive literature search was executed in several databases from inception until June 2024. We included all the available studies that compared TST versus IGRA concurrently applied to the same study participants, utilizing one of the following proxy reference standards: previous contact with a tuberculosis patient, tuberculosis history, chest X-ray suggestive of tuberculosis, or a combination of them. The sensitivity (SN) and specificity (SP) were imputed with their 95% confidence interval (CI). A bivariate random-effects model within the OpenMeta-Analyst software was utilized for data analysis. Results: We included 39 studies, and our primary analysis regarding LTBI revealed that TST has an SN of 0.320 (95% CI [0.254–0.393]) and an SP of 0.808 (95% CI [0.752–0.854]). Nevertheless, the IGRA exhibited a higher SN estimated at 0.362 (95% CI [0.295–0.434]) and a lower SP estimated at 0.758 (95% CI [0.700–0.808]). Regarding the adult population, TST consistently showed a lower SN and a higher SP relative to IGRA. However, within the pediatric population, TST showed higher SN and lower SP when compared to IGRA. Furthermore, TST also showed a lower SN and a higher SP within hemodialysis and organ transplant patients than IGRA. Conclusions: Our diagnostic test meta-analysis revealed that TST was associated with a lower SN and a higher SP than IGRA. Clinicians should interpret these findings with caution, considering the substantial heterogeneity observed across the included studies, the reliance on proxy reference standards, the potential influence of BCG vaccination status, and the considerable overlap in confidence intervals between TST and IGRA estimates across most analyses. Full article
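The pooled SN and SP estimates above come from a bivariate random-effects model, which this sketch does not reproduce; as a rough illustration of how a single study's sensitivity and its 95% confidence interval arise, a normal-approximation (Wald) interval for a proportion can be computed as follows (counts are hypothetical):

```python
import math

def proportion_ci(successes, total, z=1.96):
    """Point estimate and Wald 95% CI for a proportion, e.g. SN = TP / (TP + FN)."""
    p = successes / total
    se = math.sqrt(p * (1 - p) / total)
    return p, max(0.0, p - z * se), min(1.0, p + z * se)

# Hypothetical single study: 32 of 100 LTBI cases were TST-positive
p, lo, hi = proportion_ci(32, 100)  # ≈ 0.32 (0.229–0.411)
```

The wide interval even at 100 cases illustrates why the review stresses the overlap between TST and IGRA estimates.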
(This article belongs to the Section Diagnostic Microbiology and Infectious Disease)
