Biomedicines
  • Review
  • Open Access

11 December 2025

Artificial Intelligence Applications in Chronic Obstructive Pulmonary Disease: A Global Scoping Review of Diagnostic, Symptom-Based, and Outcome Prediction Approaches

1 Department of Design in Engineering, University of Vigo, 36208 Vigo, Spain
2 NeumoVigo I+i Research Group, Galicia Sur Health Research Institute (IIS Galicia Sur), SERGAS-UVIGO, 36312 Vigo, Spain
3 Department of Computer Engineering, Superior Institute of Engineering of Porto, 4249-015 Porto, Portugal
4 Centro de Investigación Biomédica en Red, CIBERES ISCIII, 28029 Madrid, Spain
This article belongs to the Section Molecular and Translational Medicine

Abstract

Background: Chronic Obstructive Pulmonary Disease (COPD) represents a significant global health burden, characterized by complex diagnostic and management challenges. Artificial Intelligence (AI) presents a powerful opportunity to enhance clinical decision-making and improve patient outcomes by leveraging complex health data. Objectives: This scoping review aims to systematically map the existing literature on AI applications in COPD. The primary objective is to identify, categorize, and summarize research into three key domains: (1) Diagnosis, (2) Clinical Symptoms, and (3) Clinical Outcomes. Methods: A scoping review was conducted following the Arksey and O’Malley framework. A comprehensive search of major scientific databases, including PubMed, Scopus, IEEE Xplore, and Google Scholar, was performed. The Population–Concept–Context (PCC) criteria included patients with COPD (Population), the use of AI (Concept), and applications in healthcare settings (Context). A global search strategy was employed with no geographic restrictions. Studies were included if they were original research articles published in English. The extracted data were charted and classified into the three predefined categories. Results: A total of 120 studies representing global distribution were included. Most datasets originated from Asia (predominantly China and India) and Europe (notably Spain and the UK), followed by North America (USA and Canada). There was a notable scarcity of data from South America and Africa. The findings indicate a strong trend towards the use of deep learning (DL), particularly Convolutional Neural Networks (CNNs) for medical imaging, and tree-based machine learning (ML) models like CatBoost for clinical data. The most common data types were electronic health records, chest CT scans, and audio recordings. While diagnostic applications are well-established and report high accuracy, research into symptom analysis and phenotype identification is an emerging area. Key gaps were identified in the lack of prospective validation and clinical implementation studies. Conclusions: Current evidence shows that AI offers promising applications for COPD diagnosis, outcome prediction, and symptom analysis, but most reported models remain at an early stage of maturity due to methodological limitations and limited external validation. Future research should prioritize rigorous clinical evaluation, the development of explainable and trustworthy AI systems, and the creation of standardized, multi-modal datasets to support reliable and safe translation of these technologies into routine practice.

1. Introduction

According to the most recent Global Burden of Disease study, Chronic Obstructive Pulmonary Disease (COPD) remained the third leading cause of death globally in 2023, following ischemic heart disease and stroke [1]. COPD is a respiratory condition characterized by persistent symptoms such as dyspnea, sputum production, and cough. In advanced stages of the disease, some patients may experience syncope, typically triggered by severe coughing episodes or associated cardiovascular comorbidities such as cor pulmonale [2]. These symptoms result from structural changes in the airways and alveoli that cause chronic airflow obstruction [3] and can lead to increased anxiety and depression, reduced physical activity, and impaired sleep, all of which add to the overall symptom burden [4].
Advancements in Artificial Intelligence (AI) provide innovative solutions to longstanding problems in respiratory medicine. By using machine learning (ML) and deep learning (DL), these intelligent systems can analyze complex multimodal data including imaging, spirometry, electronic health records (EHRs), wearable sensors, and patient-reported outcomes [5,6,7,8]. In COPD, AI applications have rapidly expanded, from automated interpretation of imaging and spirometry for early diagnosis [5,6], to digital health solutions for real-time symptom monitoring [7], and predictive models for mortality, hospitalization, and exacerbation risk [8,9]. These approaches have shown the potential to improve clinical decision-making, personalize care, and reduce healthcare utilization.
Beyond COPD, the utility of AI and Clinical Decision Support Systems (CDSSs) has been substantiated across a spectrum of respiratory and systemic conditions [10], highlighting the transferability of these methodological advances. For instance, novel intelligent systems have been proposed to predict long-term sequelae such as dyspnea in post-COVID-19 patients [11], while unsupervised machine learning tools have proven effective in the clinical characterization of complex presentations like syncope of unclear cause [12] and Alpha-1 antitrypsin deficiency [13,14]. In parallel, advancements in oncology have led to the development of trustworthy, multi-agent AI systems that integrate imaging and tabular clinical data for the early detection of breast cancer [15], while in cardiology, Bayesian-optimized gradient boosting models are being successfully deployed to diagnose coronary heart disease in high-risk diabetic populations [16], and deep learning techniques are being applied to smartwatch-derived electrocardiograms for efficient arrhythmia detection [17]. In neurology and oncology, respectively, multimodal learning models are refining the diagnosis of Alzheimer’s disease by fusing MRI and PET data [18], while scoping reviews validate the potential of AI-supported screening to reduce radiologist workload in breast cancer programs [19]. Moreover, the integration of Large Language Models (LLMs) emerges as a critical tool for next-generation clinical decision support and healthcare administration [20]. In the domain of sleep medicine, AI-driven approaches are increasingly used for the early stratification and diagnosis of Obstructive Sleep Apnea (OSA) using clinical and demographic data, often preceding expensive polysomnography [21,22]. Furthermore, fundamental advancements in CDSS architecture, including the integration of fuzzy expert systems [23,24] and innovative data handling techniques that convert tabular clinical data into image formats for Convolutional Neural Network (CNN) analysis [25], demonstrate the growing sophistication of AI in handling heterogeneous medical data. These developments collectively reinforce the potential of AI to enhance diagnostic precision and prognostic modeling in chronic respiratory diseases.
While individual studies and reviews have explored AI applications in respiratory diseases, there is a lack of comprehensive synthesis focusing specifically on COPD across its diagnostic, symptom-based, and prognostic domains. Unlike systematic reviews, which typically aim to answer precise effectiveness questions through rigorous quality appraisal and meta-analysis, a scoping review is distinct in its objective to chart the volume, nature, and characteristics of research in a broad subject area. A scoping review is therefore timely and appropriate to map the breadth of existing evidence, clarify complex concepts regarding AI modalities, highlight methodological strengths and limitations, and identify research priorities for future clinical integration.
The application of AI in COPD is a field of exceptionally rapid growth, with a high volume of publications necessitating continuous review to map its progress [26]. While recent and valuable systematic reviews have provided critical, deep analyses of specific domains, such as meta-analyses on CT-based diagnosis [27] or long-term prognostic models [28], a comprehensive, high-level map of the entire landscape is crucial for identifying cross-domain trends, methodological gaps, and the overall state of the evidence. Conducting this review is therefore important to synthesize this heterogeneous body of work, ranging from sensor data to genomic analysis, into a coherent framework. This scoping review fulfills that need by providing a global, structured overview of AI applications across three distinct pillars: diagnosis, outcome prediction, and symptom/phenotype analysis. By offering a granular analysis of the specific AI models and data modalities being employed, this work provides a recent and holistic benchmark for researchers, clinicians, and policymakers.

2. Materials and Methods

2.1. Study Design

This scoping review was conducted following the methodological framework proposed by Arksey and O’Malley [29] and subsequently refined by Levac et al. [30], incorporating methodological guidance from the Joanna Briggs Institute (JBI) Manual for Evidence Synthesis [31]. Reporting adhered to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews (PRISMA-ScR) checklist.
These frameworks were selected for their suitability for systematically mapping heterogeneous and emerging fields of research, such as the application of AI in COPD. The Population–Concept–Context (PCC) framework [31] guided the development of the inclusion criteria:
  • Population: Patients diagnosed with COPD.
  • Concept: Application of AI, including but not limited to ML and DL algorithms, for diagnosis, monitoring, symptom evaluation, or prediction of clinical outcomes.
  • Context: Clinical or research settings in any geographic location.
Eligible studies included peer-reviewed journal articles and conference proceedings published in English between 2017 and 2025.
Exclusion criteria were:
  • studies focused primarily on diseases other than COPD without providing COPD-specific results,
  • studies that did not employ AI as a core methodological component.

2.2. Identification and Selection of Studies

A comprehensive literature search was conducted to identify studies investigating AI applications in COPD. Four electronic databases were searched: PubMed, Scopus, IEEE Xplore, and Google Scholar. The search strategy combined COPD-related keywords (“COPD,” “chronic obstructive pulmonary disease”) with AI-related terms (“artificial intelligence”, “machine learning”, “deep learning”). Boolean operators and database-specific subject headings were used where appropriate to ensure both sensitivity and precision.
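For illustration, the following minimal sketch shows how such a Boolean query could be assembled. The exact database-specific syntax and subject headings used in the actual searches are not reproduced here, so the strings below are assumptions rather than the verbatim queries run by the reviewers.

```python
# Illustrative reconstruction of the Boolean search strategy described above;
# the exact database-specific syntax and subject headings are not reproduced.
copd_terms = ['"COPD"', '"chronic obstructive pulmonary disease"']
ai_terms = ['"artificial intelligence"', '"machine learning"', '"deep learning"']

query = f"({' OR '.join(copd_terms)}) AND ({' OR '.join(ai_terms)})"
print(query)
# ("COPD" OR "chronic obstructive pulmonary disease") AND
# ("artificial intelligence" OR "machine learning" OR "deep learning")
```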
All retrieved records were imported into the web-based version of Rayyan.ai [32], accessed between June and September of 2025, for deduplication and screening. To ensure a reliable and controlled selection process, three reviewers (A.P., M.C.-G., and A.C.-C.), with domain expertise in AI methodologies, and another three reviewers (C.R.-R., M.T.-D., and A.F.-V.), with domain expertise in respiratory medicine, independently screened all titles and abstracts. This independent screening was guided by the predefined inclusion and exclusion criteria detailed in Section 2.1. Studies meeting these eligibility criteria were retrieved for full-text review. Any discrepancies between the reviewers regarding inclusion were resolved through discussion until consensus was reached, ensuring a consistent and rigorous application of the selection criteria.
The final set of publications included in the review consisted of studies that met all predefined inclusion criteria after full-text assessment.

3. Results

3.1. Study Selection

The initial database search identified 707 records. After removing 140 duplicates, 567 unique records were screened by title and abstract. A total of 395 were excluded as non-relevant, leaving 172 articles for full-text evaluation. Following detailed review, 53 studies were excluded, and 120 studies met all inclusion criteria and were incorporated into the final synthesis.
We excluded non-empirical works, such as literature reviews, editorials, and opinion pieces, as well as studies focusing on non-AI technologies or those outside the scope of COPD. At full-text review, we excluded 35 articles that were outside the scope of COPD and 18 articles in which AI was not a core methodological component.
A summary of the selection process is illustrated in Figure 1 (PRISMA-ScR flow diagram).
Figure 1. PRISMA-ScR flowchart.

3.2. Diagnostic Applications of AI in COPD

A total of 80 studies focused on AI-based diagnostic applications in COPD, as shown in Table 1. The objectives of these studies included detecting COPD, identifying high-risk populations, staging the disease, and differentiating COPD from other respiratory conditions such as asthma, pneumonia, or adenocarcinoma [33,34,35,36].
A wide array of data modalities was utilized to train and test these models. Common data sources included chest imaging, such as CT scans and X-rays, as well as respiratory sounds, with many studies using audio recordings of lung, cough, or breath sounds. Another significant category was clinical and EHR data, including EHRs, pulmonary function tests, and general patient records. Finally, studies also used a variety of other data types, such as genetic data, non-invasive sensor data, photoplethysmography (PPG) signals, electrocardiogram (ECG) signals, and exhaled breath samples collected with E-Nose sensors or other methods [36,37,38,39,40].
Geographically, the choice of data modality often reflects regional healthcare infrastructure. Studies originating from Asia contributed the highest volume, with China predominantly utilizing hospital-based EHRs and CT imaging (e.g., Lin et al. [41], Guan et al. [42], Fang et al. [43]), whereas research from India frequently prioritized accessible diagnostic tools, particularly respiratory sound analysis and chest X-rays (e.g., Raju et al. [44], Sahu et al. [45], Jayadharshini et al. [37]). In contrast, North American and European research was characterized by the use of large-scale longitudinal cohorts, such as COPDGene and DLCST (utilized by González et al. [46], Tang et al. [47], and Cheplygina et al. [48]), to support comprehensive phenotyping.
The implemented AI technologies spanned several categories. DL models were prevalent, including CNNs and their variants, as well as Long Short-Term Memory (LSTM) networks, which were often applied to sequential data like respiratory sounds. Common ML algorithms were also used, such as Support Vector Machines (SVM), Random Forest (RF), and ensemble/boosting models (e.g., XGBoost, CatBoost). Additionally, LLMs like GPT-4 were tested [36,49].
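As a hedged illustration of the tree-based modelling repeatedly reported in this category, the sketch below fits a CatBoost classifier to synthetic EHR-style tabular data; all feature names, labels, and hyperparameters are invented for the example and do not correspond to any included study.

```python
# Minimal sketch of a tree-based classifier on EHR-style tabular data,
# loosely analogous to the CatBoost/XGBoost pipelines cited above.
# All features, labels, and settings are synthetic and illustrative.
import numpy as np
import pandas as pd
from catboost import CatBoostClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
X = pd.DataFrame({
    "age": rng.integers(40, 90, n),
    "pack_years": rng.integers(0, 80, n),
    "fev1_pct_pred": rng.uniform(20, 110, n),
    "smoking_status": rng.choice(["never", "former", "current"], n),
})
# Synthetic label: more pack-years and lower FEV1 -> higher COPD probability.
logit = 0.04 * X["pack_years"] - 0.05 * X["fev1_pct_pred"] + 2.0
y = (rng.uniform(size=n) < 1 / (1 + np.exp(-logit))).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

model = CatBoostClassifier(iterations=300, depth=6, learning_rate=0.1, verbose=0)
model.fit(X_train, y_train, cat_features=["smoking_status"])
print("held-out accuracy (synthetic data):", model.score(X_test, y_test))
```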
Reported diagnostic performance varied substantially across studies. Some models reported accuracy in the 80–90% range, such as 80.77% for an LS-SVM [50], 86.59% for a Decision Tree (DT) model [39], 88.7% for a custom CNN [51], and 89.47% for a 1D CNN [52]. Many studies reported performance metrics exceeding 95%. For example, Gökçen [38] achieved 95.28% accuracy with AdaBoost, Melekoglu et al. [53] reported 96.3% accuracy with a hybrid model on PPG signals, and Kousalya et al. [54] reported 97% accuracy with an XGBoost model on genetic data. Several studies reported exceptionally high performance, including accuracies of 99.62% (VGG16 + LSTM) [55], 99.7% (Quadratic Discriminant Classifier) [34], and 99.9% (Xception model) [40]. Two studies, one by El-Magd et al. [56] using GoogleNet and another by Mahmood et al. [57] using an RF + MobileNetV2 model, reported 100% accuracy in their respective classification tasks.
Performance was documented using a variety of metrics, including accuracy, AUROC curve, F1-score, precision, and recall. Some studies reported performance on distinct validation sets. For instance, Sun et al. [58] reported an AUC of 0.934 on an internal test set and 0.866 on an external validation dataset, while Cheplygina et al. [48] noted varying AUCs (from 0.79 to 0.956) when transferring models across different datasets.
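The metrics listed above can be computed in a uniform way; the following sketch (using scikit-learn with placeholder arrays) shows one possible implementation, including how the same summary could be produced separately for an internal test set and an external validation cohort, as reported by Sun et al. [58]. The threshold and arrays are assumptions for illustration only.

```python
# Illustrative computation of the evaluation metrics reported across studies
# (accuracy, AUROC, precision, recall, F1) for binary labels and predicted
# probabilities from some previously fitted model.
import numpy as np
from sklearn.metrics import (accuracy_score, roc_auc_score,
                             precision_score, recall_score, f1_score)

def summarize(y_true, y_prob, threshold=0.5):
    y_pred = (y_prob >= threshold).astype(int)
    return {
        "accuracy": accuracy_score(y_true, y_pred),
        "auroc": roc_auc_score(y_true, y_prob),
        "precision": precision_score(y_true, y_pred),
        "recall": recall_score(y_true, y_pred),
        "f1": f1_score(y_true, y_pred),
    }

# Placeholder arrays standing in for real model outputs.
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
y_prob = np.array([0.9, 0.2, 0.7, 0.6, 0.4, 0.1, 0.8, 0.3])
print(summarize(y_true, y_prob))

# The same summary can be produced separately for an internal test set and an
# external validation cohort, mirroring the dual reporting of Sun et al. [58]:
# internal = summarize(y_internal, model.predict_proba(X_internal)[:, 1])
# external = summarize(y_external, model.predict_proba(X_external)[:, 1])
```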

3.3. Outcome Prediction and Prognostic Modeling

A total of 34 studies, shown in Table 2, focused on AI-based prediction of clinical outcomes. The objectives were diverse and included the prediction of exacerbations [59,60,61,62], hospital admission or readmission risk [63], disease progression or severity [64,65], and mortality [46,66,67].
Geographically, outcome prediction research was heavily represented by North America and Europe, likely due to the availability of long-term longitudinal data required for prognostic modeling. Studies from the USA frequently leveraged extensive established cohorts; for instance, González et al. [46] and Young et al. [68] utilized data from the COPDGene study to develop phenotype clustering and progression models. Similarly, European research, particularly from Germany and Spain, utilized multi-center clinical registries. Almeida et al. [65] utilized the German COSYCONET cohort for severity analysis, while Spanish teams, such as Casal-Guisande et al. [69], focused heavily on developing Intelligent Clinical Decision Support Systems (iCDSSs) for mortality prediction using hospital administrative data.
Frequently applied approaches included Artificial Neural Networks (ANNs) [70], including CNNs [46], as well as RF [67], gradient-boosting models such as GBM [59], and other ensemble models [71]. Most studies relied on EHRs/clinical variables [59,60,67,71,72], with fewer incorporating medical imaging (CT) [46,65,66] or respiratory sounds [61]. Collectively, models were developed for both short- and long-term outcomes.
Predictive performance varied substantially. For exacerbation prediction, Wang et al. [60] reported an AUC of 0.90 with an SVM, and Kor et al. [59] achieved an AUC of 0.832 with a GBM. For mortality, Enríquez-Rodríguez et al. [67] reported 99% accuracy using RF, while González et al. [46] achieved an AUROC of 0.72 with a CNN. For hospital readmission, Huang et al. [63] reported 77.2% accuracy.
Direct comparisons with conventional indices were rare. One study by Nam et al. [66] evaluated their DL model against the BODE index for survival prediction, finding similar rather than superior results (Time-Dependent AUC 0.87 vs. 0.80). Reporting of key methodological aspects was limited. Nam et al. [66] was one of the few studies to explicitly mention external validation. Interpretability was addressed in just a few cases, such as by Shaikat et al. [72].

3.4. Symptom-Based and Monitoring Applications

Six studies focused on AI-driven symptom monitoring and patient management in COPD, as shown in Table 3. Geographically, this domain was characterized by exploratory studies from diverse regions rather than large, centralized cohorts. Research from Japan was prominent, with Yamane et al. [73] utilizing tri-axial accelerometer data to recognize activities causing dyspnea and Hirai et al. [74] applying clustering techniques to patient data. In Europe, Amado-Caballero et al. [149] focused on audio cough analysis, while contributions from China and South Korea (e.g., Peng et al. [76] and Pant et al. [75]) leveraged IoT platforms and national survey data, respectively.
These studies primarily explored wearable sensor data, cough and respiratory sound analysis, and mobile health platforms to assess symptom burden, daily activity, and early exacerbation detection. ML algorithms such as RF and GB, and DL algorithms such as RNNs, were used to analyze time-series data including oxygen saturation, step counts, and respiratory rate. A few studies incorporated NLP or voice analysis to identify early signs of deterioration.
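As a rough illustration of this kind of pipeline, the sketch below derives simple window-level statistics from synthetic wearable signals and fits a Random Forest. The signals, labels, window sizes, and features are entirely synthetic and are not drawn from any included study.

```python
# Sketch: windowed summary features from synthetic wearable signals (SpO2,
# step counts, respiratory rate) fed to a Random Forest, in the spirit of the
# monitoring studies above. All data and labels are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
n_windows, window_len = 400, 60          # e.g. 60 one-minute samples per window
spo2 = rng.normal(95, 2, (n_windows, window_len))
steps = rng.poisson(5, (n_windows, window_len))
resp_rate = rng.normal(18, 3, (n_windows, window_len))

def window_features(sig):
    # Simple per-window summary statistics; real studies use richer features.
    return np.column_stack([sig.mean(axis=1), sig.std(axis=1),
                            sig.min(axis=1), sig.max(axis=1)])

X = np.hstack([window_features(s) for s in (spo2, steps, resp_rate)])
y = rng.integers(0, 2, n_windows)        # placeholder "high symptom burden" label

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
print("training accuracy (illustrative only):", clf.score(X, y))
```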
Despite encouraging results, most studies were exploratory, conducted on small cohorts, and lacked standardized validation frameworks. Broader, multi-center studies integrating multimodal data could strengthen clinical applicability.
Table 1. Descriptive table of included studies in the diagnosis category.
Author | Purpose | AI Model | Data Source | Main Result
Lin et al. [41]DiagnosisCatBoost, XGBoost, LightGBM, Gradient Boosting Classifier Electronic Health Records and outpatient medical recordsCatBoost was highlighted as the most effective model in terms of accuracy and sensitivity for detecting high-risk populations for COPD.
Heyman et al. [77]Diagnosis (Early detection, differentiation)CatBoost, CareNetDyspnea patientsCareNet model performed better than CatBoost. Sensitivity: 0.919 (CareNet) vs. 0.871 (CatBoost).
Saad et al. [39]DiagnosisDT, SVM, KNN, Naive Bayes, Neural NetworksPulmonary function testsDT provided the best results. Accuracy: 0.8659.
Rivas-Navarrete et al. [52]Diagnosis1D-CNN, SVMCough and breath sounds1D-CNN provided the best results. Accuracy: 0.8947, Precision: 0.80, Recall: 1.00, F1-score: 0.8889.
Zhang et al. [78]DiagnosisGPT-4, Rule-based Classifier, Traditional ML Classifier, ChatGPT, LLaMA3 (8B)Electronic Health RecordsRule-based and Traditional ML Classifiers performed best. F1-score (Rule-based): 0.9600, F1-score (ML): 0.9600, F1-score (GPT-4): ~0.9444.
Maldonado-Franco et al. [79]DiagnosisNeural NetworksPatient recordsAccuracy: 0.929, Sensitivity: 0.882, Specificity: 0.943.
Guan et al. [42]DiagnosisGradient Boosting Decision Tree (GBDT), CNNCT imaging features, lung density parameters, and clinical characteristicsGBDT model (using radiomic, lung density, and clinical data) provided best results. AUC: 0.73, Accuracy: 0.81, Sensitivity: 0.84.
Almeida et al. [80]DiagnosisAnomaly Detection, PCACT scansAnomaly scores improved predictive power. Adjusted R2: 0.56 (from 0.22), Correlation (Emphysema): 0.66, Correlation (Small Airway Disease): 0.61.
Davies et al. [81]DiagnosisCNNSurrogate data, Photoplethysmography (PPG) Data, Real-World COPD DataAUC (Surrogate data): 0.75, AUC (Real-world data): 0.63, Accuracy Range: 0.40 to 0.88, AUC (2 cycles): 0.75.
Zhang et al. [49]DiagnosisCNN, LSTM, CNN-LSTM, CNN-BLSTMAudio dataLSTM provided the best results. Accuracy: 0.9882, F1-score: 0.97.
Albiges et al. [82]Disease classificationRF, SVM, Gaussian Mixture Model (GMM), DTAudio dataRF provided best results. COPD vs. Healthy: Accuracy: 0.80, F1-score: 0.785. COPD vs. Healthy vs. Pneumonia: Accuracy: 0.70, F1-score: 0.597.
Melekoglu et al. [53]DiagnosisSVM, KNN, Ensemble Trees, hybrid modelsPhotoplethysmography (PPG) signalHybrid model (40% features): Sensitivity: 0.942, Accuracy: 0.963. Hybrid model (45% features): AUC: 0.952.
Vollmer et al. [51]Diagnosis (Case vs. control)LR, Random Forest Classifier, SGD Classifier, KNN, Decision Tree Classifier, GaussianNB, SVM, Custom CNN, MLPPatient dataCustom CNN provided the best results. Accuracy: 0.887, AUC: 0.953.
Bracht et al. [83]Disease differentiationRandom Forest, SVM, Linear Discriminant AnalysisMass spectrometry analysis of plasma samplesRF presented best results. AUC (Adenocarcinoma vs. COPD): 0.935, AUC (COPD w/ Adenocarcinoma vs. COPD): 0.916.
Joumaa et al. [33]Disease differentiationMultinomial Regression, Gradient Boosting, RNNPatient data from medico-administrative databasesBoosting model results: Recall (Asthma): 0.83, Recall (COPD): 0.64, Precision (Asthma): 0.71, Precision (COPD): 0.66.
Zafari et al. [84]Diagnosis (Case vs. control)Multilayer Neural Networks (MLNN), XGBoostElectronic Health RecordsXGBoost provided the best results. Overall Accuracy: 0.86, AUC (Structured data): 0.919, AUC (Text data): 0.882, AUC (Mixed data): 0.932.
Zheng et al. [50]Diagnosis (Case vs. control)LS-SVM (linear and polynomial kernels)Patient dataBoth kernels provided optimal results. Linear kernel: Accuracy: 0.8077, AUC: 0.87. Polynomial kernel: Accuracy: 0.8462, AUC: 0.90.
Tang et al. [47]DiagnosisResNetChest CTAUC: 0.86, PPV: 0.847, NPV: 0.755.
González et al. [46]Diagnosis, staging, and prediction (ARD, mortality)CNNChest Computed Tomography from COPDGene participantsAUC (Mortality): 0.72, AUC (Diagnosis): 0.856, AUC (Exacerbation): 0.64.
El-Magd et al. [56]Diagnosis (Early detection)GoogleNetSensor data and patient dataModel achieved perfect classification. Accuracy: 1.00, Precision: 1.00, Recall: 1.00, F1-score: 1.00.
Mahmood et al. [57]DiagnosisRandom Forest, MobileNetV2Audio dataAccuracy: 1.00, Sensitivity: 1.00, Precision: 1.00, F1-score: 1.00.
Choi et al. [85]DiagnosisModified VGGish, LACM, Grad-CAMRespiratory soundsAccuracy: 0.9256, Precision: 0.9281, Sensitivity: 0.9222, Specificity: 0.9850, F1-score: 0.9229, Balanced Accuracy: 0.954.
Brunese et al. [86]Diagnosisk-Nearest Neighbors (kNN), SVM, Neural Networks, Logistic RegressionRespiratory soundsNeural network provided the best results. F1-score: 0.960, Sensitivity: 0.95, Specificity: 0.970.
McDowell et al. [87]DiagnosisGeneralized Linear Model (GLM), Gradient Boosting Model, ANN, Ensemble Method (ANN/GBM)Patient serum samplesThe ensemble model (ANN/GBM) provided the best results. Mean AUC: 0.93.
Yahyaoui et al. [88]DiagnosisSVM, ASVM (Adaptive SVM)Patient dataAdaptive SVM provided better results than SVM. Accuracy (ASVM): 0.9263, Accuracy (SVM): 0.9059.
Saleh et al. [89]DiagnosisDT, NB, Bayesian Networks, Wrapper Methods, Discretization AlgorithmsPatient dataBayesian network with the TAN algorithm performed best. AUC: 0.815.
Rukumani Khandhan et al. [90]DiagnosisInception, ResNet, VGGNetRespiratory soundsInceptionNet performed the best. F1-Score: 0.99, Precision: 1.00, Recall: 0.98, Accuracy: 0.99.
Gökçen [38]DiagnosisSVM, AdaBoost, RF, J48 Decision TreeLung soundsAdaBoost achieved the highest performance. Accuracy: 0.9528, Sensitivity: 0.9032, Specificity: 0.9987.
Hung et al. [91]DiagnosisCNNCough soundsAccuracy: 0.91.
Chawla et al. [92]DiagnosisSVM, ResNet50Chest X-RaySVM + ResNet50 model results: Accuracy: 0.93, Precision: 0.94, Recall: 0.928, F1-score: 0.933.
Kousalya et al. [54]DiagnosisSVM, LR, DT, KNN, MLP, XGBoostPatient data including genetic dataXGBoost provided the best results. Accuracy: 0.97, Mean AUC: 0.94, Sensitivity: 0.99.
Wu et al. [36]DiagnosisSVM, Random Forest, Deep Neural NetworksCT scansRF provided the best results. Sensitivity: 0.925, Specificity: 0.902, Accuracy: 0.9149.
Archana et al. [55]Diagnosis and disease differentiationVGG16 Model, LSTMLung soundsOverall Accuracy: 0.9962.
Zhu et al. [93]Disease classificationBidirectional Gated Recurrent Units (BiGRU), CNNRespiratory soundsCOPD model performance: Precision: 1.00, Recall: 0.87, F1-score: 0.93. Overall Accuracy: 0.919.
Jayadharshini et al. [37]Diagnosis and severity assessmentInceptionV3, VGG16, ResNet, DenseNet, XGBoostChest X-raysSeverity (XGBoost): Precision: 0.95, Recall: 0.92, F1-score: 0.96. Diagnosis (InceptionV3): Accuracy: 0.9285.
Raju et al. [44]Disease classificationCNNRespiratory soundsCOPD Diagnosis results: AUC: 0.98, F1-score: 0.90, Recall: 0.89, Precision: 0.95. Overall Accuracy: 0.93.
Türkçetin et al. [94]DiagnosisDenseNet201, VGG16, CNNCT scansAccuracy: 0.99, Recall: 0.98, Precision: 1.00, F1-Score: 0.99.
Wang et al. [95]DiagnosisTransfer learningPatient data and Electronic Health RecordsAUC: 0.952, Accuracy: 0.905, F1-score: 0.887.
Choudhary et al. [96]Diagnosis and disease differentiationEnsemble learning (CNN, XGBoost, RF, SVM, LR)X-ray imagesEnsemble model performed best. Accuracy: 0.948, Sensitivity: 0.936, Specificity: 0.959, AUC: 0.97. (Outperformed standalone CNN Accuracy: 0.902).
Ooko et al. [97]DiagnosisTinyML, Synthetic Data modelSynthetic data generated from exhaled breath samplesTinyML model provided the best results. Accuracy: 0.9778.
Rohit et al. [98]Diagnosis and disease differentiationBiLSTMRespiratory soundsAccuracy: 0.96, F1-score: 0.96.
Islam et al. [99]Diagnosis and disease differentiationLR, Random Forest, GB, SVM, Naive Bayes, ANN, CNN, 1D-CNN, LSTMRespiratory sounds + clinical dataRespiratory dataset (ANN): Accuracy: 0.485, Precision: 0.50, F1-score: 0.465, Recall: 0.485. ICHBI dataset (1D-CNN): Accuracy: 0.92, Precision: 0.89, Recall: 0.91, F1-score: 0.93.
Ikechukwu et al. [100]DiagnosisResNet50, Xception, Transfer LearningChest X-RaysHighest performance in Lung Nodule detection. Accuracy: 0.930, Precision: 0.97, Recall: 0.965, F1-score: 0.967.
Jenefa et al. [101]DiagnosisCNN-LSTMLung function measurements, clinical history, and image dataAccuracy: 0.963, Precision: 0.948, Recall: 0.972, F1-score: 0.959.
Moran et al. [40]DiagnosisXception, VGG-19, InceptionResNetV2, DenseNet-121ECG SignalsXception Model provided the best results. Accuracy: 0.999, Sensitivity: 0.996.
Anupama et al. [102]DiagnosisCNNLung soundsAccuracy: 0.833.
Sahu et al. [45]Diagnosis and disease differentiation1D-CNN (Adam and RMSprop optimizers)Respiratory sounds1D-CNN with Adam optimizer performed best. Accuracy: 0.94, Precision: 0.90, Recall: 0.86, F1-score: 0.88.
Ooko et al. [103]DiagnosisTinyML (NN, K-means)Exhaled breath dataValidation Accuracy: 0.953.
Sanjana et al. [104]DiagnosisConvolutional Recurrent Neural Networks (CRNN)Lung soundsCRNN-BiLSTM provided the best results. Accuracy: 0.98601, F1-score: 0.99, Recall: 0.98.
Jha et al. [105]Diagnosis (Early detection)1D-CNN (Adam and RMSprop optimizers)Respiratory sounds1D-CNN with Adam optimizer performed best. Accuracy: 0.94, Precision: 0.90, Recall: 0.86, F1-score: 0.88.
Mridha et al. [106]DiagnosisCNNRespiratory soundsAccuracy: 0.95, AUC: 1.00.
Ikechukwu et al. [107]DiagnosisResNet50Chest X-RayPneumothorax case provided best results. Accuracy: 0.986, Precision: 0.994, Recall: 0.986, F1-score: 0.973.
T. Ha et al. [108]DiagnosisRandom Forest, CNNRespiratory soundsAccuracy: 0.9604, Recall: 0.9847, Precision: 0.9905, F1-Score: 0.9775, Specificity: 0.9846.
Dhar [109]DiagnosisXGBoost, Extra Trees, Random Forest, GB, LR, SVC, KNN, NuSVCDielectric and demographic dataEnsemble learning model results: Accuracy: 0.9820, Precision: 0.98, Recall: 0.96, F1-score: 0.9667, AUC: 0.9912.
Khade [110]Diagnosis and disease differentiationDeep CNNBreathing patterns and chest X-ray picturesAccuracy: 0.98, Precision: 0.99, Recall: 0.98, F1-score: 0.98.
Fang et al. [43]DiagnosisDSA-SVMElectronic Health RecordsAccuracy: 0.951, Recall: 0.9793, F1-score: 0.9771.
Li et al. [111]DiagnosisCNN, Fuzzy decision treesRespiratory soundsCNN provided high classification accuracy. Fuzzy decision tree provided interpretable predictions. Confidence Level: 0.84.
Bulucu et al. [112]DiagnosisRecurrent Trend Predictive Neural NetworkE-Nose sensor dataOverall Accuracy: 0.97, Recall: 0.9896, Specificity: 0.9455, F1-score: 0.9726, MCC: 0.9416.
Aulia et al. [113] DiagnosisGraph Convolutional Network, PCAExhaled breath dataAccuracy: 0.975, Precision: 0.972, Recall: 0.974, F1-score: 0.975.
Amudala Puchakayala et al. [114]DiagnosisCatBoostCT scansStandard-Dose CT: AUC: 0.90, PPV: 0.83, NPV: 0.83. Low-Dose CT: AUC: 0.87, PPV: 0.79, NPV: 0.80. Combined CT + Clinical: AUC: 0.88, PPV: 0.79, NPV: 0.80.
Zhang et al. [115]DiagnosisLASSO regression model, SVM-RFEGene expression dataSLC27A3 and STAU1 achieved highest AUCs. AUC (SLC27A3): 0.900, AUC (STAU1): 0.971.
Sun et al. [58] DiagnosisResNet18Chest CT scans and clinical dataAUC (Internal test set): 0.934, AUC (External validation): 0.866.
Zhang et al. [116] DiagnosisBagged DTRespiratory signalsAccuracy: 0.933.
Wu et al. [117]DiagnosisRandom Forest, DT, KNN, Linear Discriminant Analysis, AdaBoost, DNNWearable device dataDNN performed the best. Accuracy: 0.914, F2-score: 0.914, AUC: 0.9886, Sensitivity: 0.877, Specificity: 0.955, Precision: 0.955.
Srivastava et al. [118]DiagnosisCNNRespiratory soundsMFCC data (post-augmentation): Sensitivity: 0.92, Specificity: 0.92, ICBHI Score: 0.92. Mel-Spectrogram data: Sensitivity: 0.73, Specificity: 0.91, ICBHI Score: 0.82.
Zakaria et al. [119]Diagnosis (differentiation, case vs. control)ResNet50, ResNet101, ResNet152Respiratory soundsResNet50 provided best accuracy/time trade-off. Accuracy: 0.9037.
Bodduluri et al. [120]Diagnosis and classificationFully Convolutional Network (FCN), Random Forest Classifier (RFC)Spirometry dataBoth models were promising. FCN: AUC: 0.80, F1-score: 0.79. RFC: AUC: 0.90, F1-score: 0.76.
Ma et al. [121]DiagnosisLR, KNN, SVM, DT, MLP, XGboostClinical and genetic dataXGBoost provided the best results. AUC: 0.94, Accuracy: 0.91, Precision: 0.94, Sensitivity: 0.94, F1-score: 0.94, MCC: 0.77, Specificity: 0.84.
Naqvi et al. [34]Diagnosis and disease differentiationSVM, Quadratic Discriminant Classifier, KNN, RF, Rule-based SystemsLung soundsQuadratic Discriminant Classifier performed best. Accuracy: 0.997, TPR (Recall): >0.99.
Basu et al. [122]Diagnosis and disease differentiationDeep Neural NetworkRespiratory soundsOverall Accuracy: 0.9567, Precision: 0.9589, Recall: 0.9565, F1-score: 0.9566. COPD-specific: Precision: 1.0, Recall: 0.91, F1-score: 0.95.
Altan et al. [123]Diagnosis (Early detection)Deep Belief NetworkLung soundsAccuracy: 0.9367, Sensitivity: 0.91, Specificity: 0.9633.
Spathis et al. [35]Diagnosis and disease differentiationRandom Forest, NB, LR, NN, SVM, KNN, DTPatient dataRF model provided the best results. Precision: 0.977.
Xu et al. [124]Diagnosis (Symptom detection)ANNElectronic Health RecordsAccuracy: 0.8645, F1-score: 0.8293.
Haider et al. [125]Diagnosis (Case vs. control)SVM, KNN, LR, DT, Discriminant AnalysisRespiratory soundsLR and SVM (linear and quadratic) performed best. Accuracy: 1.00, Sensitivity: 1.00, Specificity: 1.00, AUC: 1.00.
Gupta et al. [126] Diagnosis and disease differentiationKNN, SVM (Linear), Random Forest, Decision TreeChest CT scansIGWA with KNN classifier achieved highest accuracy. Accuracy: 0.994.
Badnjevic et al. [127]Diagnosis and disease differentiationANN, Fuzzy LogicPatient data and clinical dataSensitivity: 0.9622, Specificity: 0.9871.
Windmon et al. [128]Diagnosis and disease differentiationRandom ForestCough recordingsLvl 1 (Disease vs. Control): AUC: 0.83, Accuracy: 0.8067, Sensitivity: 0.80, Specificity: 0.82. Lvl 2 (COPD vs. CHF): AUC: 0.80, Accuracy: 0.7805, Sensitivity: 0.82, Specificity: 0.75.
Pizzini et al. [129]Diagnosis (Case vs. control)Random ForestBreath samplesAUC: 0.97, Sensitivity: 0.78, Specificity: 0.91, PPV: 0.86, NPV: 0.86.
Cheng et al. [130]DiagnosisSPADEClinical dataModel demonstrated high sensitivity and specificity. (No numerical values provided).
Cheplygina et al. [48]Diagnosis (Case vs. control)Transfer learningCT scansBest performance on Frederikshavn dataset. AUC: 0.938–0.953. AUC (COPDGene2): 0.956, AUC (COPDGene1): 0.917, AUC (DLCST): 0.79.
Table 2. Descriptive table of included studies in the outcome category.
Author | Purpose | AI Model | Data Source | Main Result
Almeida et al. [65]SeveritySelf-supervised DL anomaly detectionPaired inspiratory/expiratory CT and clinical data (COPDGene, COSYCONET)AUC (COPDGene): 0.843, AUC (COSYCONET): 0.763.
Wang et al. [71]Risk PredictionCatBoost, NGBoost, XGBoost, LightGBM, RF, SVM, LRClinical dataCatBoost model performed best. AUC: 0.727, F1-score: 0.425, Accuracy: 0.736.
Dogu et al. [70]Length of hospital staySBFCM, ANNClinical findings, socio-demographic information, comorbidities, medical recordsAccuracy: 0.7995 (outperformed other models).
González et al. [46]Detect/stage COPD, predict ARD events & mortalityCNNChest CT (COPDGene)AUROC (Mortality): 0.72, AUROC (Diagnosis): 0.856, AUROC (Exacerbation): 0.64.
Huang et al. [63]Hospital readmissionNLP, DTPatient discharge reportsAccuracy: 0.772 (in terms of readmission risk).
Zheng et al. [64]SeverityHFL-COPRASPatient dataSensitivity analysis showed rankings can vary but the method remains robust.
Baechle et al. [131]Hospital readmissionNB, RF, SVM, KNN, C4.5, Bagging, BoostingPatient discharge reportsRF achieved highest AUC: 0.657. Naïve Bayes had lowest mean misclassification cost.
Wang et al. [132]TreatmentAssociation Rules, Cluster Analysis, Complex Network AnalysisPrescription dataIdentified key traditional Chinese medicines and associations for holistic treatment.
Jayadharshini et al. [37]Severity, DiagnosisInceptionV3, VGG16, ResNet, DenseNet, XGBoostChest X-raysSeverity (XGBoost): Precision: 0.95, Recall: 0.92, F1-score: 0.96. Diagnosis (InceptionV3): Accuracy: 0.9285.
Shaikat et al. [72]Severity, Quality of lifeXGBoost, RF, XAIPatient dataSeverity (XGBoost): Accuracy: 0.9955. Quality of Life (RF): MSE: 94.95, MAE: 7.06.
Nam et al. [66]SurvivalCNNPost-bronchodilator spirometry and chest radiographyTD AUC (DLSP CXR): 0.73, TD AUC (DLSP integ): 0.87. (Outperformed FEV1).
Hasenstab et al. [133]Mortality/SeverityCNNCT imagesAUC (%EM): >0.82 (GOLD 1-3), AUC (%EM): >0.92 (GOLD 4).
Hussain et al. [134]SeverityRF, SVM, GBM, XGBoost, KNN, SVEPatient dataSVE performed best. Accuracy: 0.9108, Precision: 0.9077, Recall: 0.9136, F-measure: 0.9107, AUC: 0.9687.
Peng et al. [135]SeverityDTRadiology reportsSensitivity: 0.869. Model with %LAV-950 and AWT3-8 was superior to %LAV-950 alone (AUC: 0.92 vs. 0.79).
Altan et al. [136]SeverityDELMLung soundsCOPD0: Accuracy: 0.9333. COPD1: Accuracy: 0.9003. COPD2: Accuracy: 0.9523. COPD3: Accuracy: 0.8571. COPD4: Accuracy: 0.9902.
Young et al. [68]ProgressionClusteringCOPDGeneIdentified two distinct COPD subtypes: Tissue → Airway (70%) and Airway → Tissue (30%).
Goto et al. [137]Hospital readmissionLR, Lasso, DNNPatient dataSensitivity (LR): 0.75 vs. Sensitivity (DNN): 0.67. Specificity (Lasso): 0.51 vs. Specificity (LR): 0.37.
Orchard et al. [138]Hospital admission riskMT-NN, SVM, RFTrial dataMulti-task neural nets performed best for 24-hour admission prediction. AUC: 0.74.
Swaminathan et al. [139]TriageSVM, RF, NB, LR, KNN, GBRF, ETPatient dataLogistic Regression and Gradient Boosted Random Forest provided the best accuracy.
Casal-Guisande et al. [69]MortalitySqueezeNetPatient dataAUROC: 0.85.
Casal-Guisande et al. [140]Exacerbation characterizationk-prototypes, RFPatient dataIdentified four unique clusters. AUROC: 0.91.
López-Canay et al. [141]Hospital readmissionRF, NB, MLP, Fuzzy LogicPatient dataAUC: ~0.80, Sensitivity: 0.67, Specificity: 0.75.
Jeon et al. [142]Severity1D-Transformer, MLP, LRClinical data with spirometry imagesTransformer + MLP outperformed LR. AUROC (Mod-Sev): 0.755 vs. 0.730. AUROC (Severe): 0.713 vs. 0.675.
Pegoraro et al. [143]Exacerbation predictionHMMRemote monitoring device dataHMM improved detection of pre-exacerbation periods. Sensitivity: up to 0.768.
Atzeni et al. [144]Exacerbation predictionk-means, LR, RF, XGBoostAir quality, health records, lifestyle infoRF performed best. Cluster 1: AUC: 0.90, AUPRC: 0.70. Cluster 2: AUC: 0.82, AUPRC: 0.56.
Wu et al. [145]Exacerbation predictionRF, DT, LDA, AdaBoost, DNNLifestyle, environmental, clinical, wearable, interview dataRF model performed best. Accuracy: 0.914, Precision: 0.686, F1-score: 0.680.
Bhowmik et al. [146]Exacerbation predictionSTAIN, CNN, RNN, CRNNAudio filesSTAIN model performed best. Accuracy: 0.9340, Sensitivity: 0.9270, Specificity: 0.9420, MCC: 0.8691.
Vishalatchi et al. [147]Exacerbation predictionNLP, RF, LR, NB, SVM, DT, DNNElectronic Health Records, Clinical NotesRF model performed best. Accuracy: 0.80, F1-Score: 0.73.
Kor et al. [59]Exacerbation predictionSVM, RF, GBM, XGBClinical DataGBM model performed best. AUC: 0.832, Sensitivity: 0.7941, Specificity: 0.7794, PPV: 0.6429.
Wang et al. [60]Exacerbation predictionRF, SVM, LR, KNN, NBElectronic Health RecordsSVM model performed best. AUROC: 0.90, Sensitivity: 0.80, Specificity: 0.83, PPV: 0.81, NPV: 0.85.
Fernandez-Granero et al. [61]Exacerbation predictionRandom ForestRespiratory soundsAccuracy: 0.878, Sensitivity: 0.781, Specificity: 0.959, PPV: 0.941, NPV: 0.839, F1-score: 0.80, MCC: 0.80.
Shah et al. [62]Exacerbation predictionLR, SVM, DT, KNNVital signs, symptoms, medication dataLR (using vital signs) performed best. Mean AUC: 0.682.
Enríquez-Rodríguez et al. [67]MortalityRF, PLS, KNNClinical data, biological samples, follow-up data, comorbidityRF performed best. Accuracy: 0.99.
Pinheira et al. [148]Length of hospital stayCNNPatient DataAUC (6-day threshold): 0.77, AUC (10-day threshold): 0.75.
Table 3. Descriptive table of included studies in the symptoms category.
Authors | Purpose | AI Model | Data Source | Main Result
Pant et al. [75] | Predict smoking status | RF, DT, Gaussian Naive Bayes, KNN, AdaBoost, MLP, TabNet, ResNN | Demographic, behavioral, and clinical data | ResNN outperformed other models. (Metrics: AUROC, Sensitivity, Specificity, F1-score).
Amado-Caballero et al. [149] | Analyze cough patterns | CNN | Audio data | Distinctions in cough patterns were observed between COPD and other respiratory pathologies.
Yamane et al. [73] | Recognize activities causing dyspnea | RF | Tri-axial accelerometer | The wrist + hip classifier successfully recognized most daily activities that caused shortness of breath.
Weikert et al. [150] | Analyze/quantify airway wall thickness | 3D U-Net | Chest CT | Airway centerline detection: Sensitivity: 0.869. Airway wall segmentation: Dice score: 0.86.
Peng et al. [76] | Remote monitoring | LLM | Sensor data | Average model Accuracy: 0.74. The LLM generated short, interpretable rules despite data variations.
Hirai et al. [74] | Differentiate ACOS from asthma/COPD | k-means | Patient data | Identified 4 biological clusters, including a distinct cluster for Asthma-COPD Overlap (ACOS) patients.

4. Discussion

4.1. Summary of Findings by Category

Our review mapped a substantial body of literature, confirming that AI in COPD is an exceptionally active field with a high volume of publications requiring continuous review [26]. A key contribution of this scoping review is the provision of a global, structured overview of this broad landscape, whereas other recent valuable reviews have necessarily focused on specific domains, such as meta-analyses of CT-based diagnosis [27] or prognostic models [28]. Our findings reveal a clear concentration of research in three primary domains, which provides a holistic framework: (1) diagnostic applications, (2) outcome prediction and prognostic modeling, and (3) symptoms and phenotype analysis. The state of the evidence within each of these categories will be summarized in the sections that follow.

4.1.1. AI in COPD Diagnosis

The most prominent application of AI identified in this review was for the diagnosis and detection of COPD. This area is characterized by a wide variety of advanced models and data sources, often reporting high-performance metrics. ML models, particularly tree-based ensembles, have proven effective when applied to large clinical datasets. For instance, Lin et al. [41] demonstrated that the CatBoost model was highly effective for identifying high-risk populations from EHRs.
DL algorithms, more specifically CNNs, have been extensively used for analyzing medical imagery. Studies conducted by Guan et al. and Tang et al. [42,47] showcased the power of CNNs in accurately diagnosing COPD from chest CT scans. The application of AI extends beyond traditional data types; researchers have successfully used audio data, with Rivas-Navarrete et al. [52] employing a 1D-CNN to detect COPD from cough and breath sounds with 89.5% accuracy.
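A minimal sketch of this audio-based approach is shown below, pairing MFCC features with a small 1D CNN. The architecture, feature settings, and the librosa/TensorFlow tooling are illustrative assumptions rather than the configuration of Rivas-Navarrete et al. or any other cited study.

```python
# Sketch: MFCC features from a cough/breath recording classified by a small
# 1D CNN. Architecture and hyperparameters are illustrative, not those of any
# cited study. Assumes librosa and TensorFlow are installed.
import numpy as np
import librosa
import tensorflow as tf

def mfcc_features(path, sr=16000, n_mfcc=13, max_frames=200):
    signal, sr = librosa.load(path, sr=sr)
    mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=n_mfcc).T  # (frames, n_mfcc)
    mfcc = mfcc[:max_frames]
    pad = max_frames - mfcc.shape[0]
    return np.pad(mfcc, ((0, pad), (0, 0)))                        # fixed-length input

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(200, 13)),
    tf.keras.layers.Conv1D(32, 5, activation="relu"),
    tf.keras.layers.MaxPooling1D(2),
    tf.keras.layers.Conv1D(64, 5, activation="relu"),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),                # COPD vs. non-COPD
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
# Hypothetical training call, assuming wav_paths and labels are available:
# model.fit(np.stack([mfcc_features(p) for p in wav_paths]), labels, epochs=20)
```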
Several studies reported exceptionally high, near-perfect diagnostic accuracy. El-Magd et al. [56], for example, achieved an accuracy of 100% using GoogleNet architecture with sensor data, while Mahmood et al. [57] reported similar success with a hybrid RF + MobileNetV2 model on audio recordings. The field also shows rapid innovation through the adoption of emerging technologies, such as the use of LLMs like GPT-4, which Zhang et al. [78] found to achieve an F1-score of 94.4% on EHRs.
Collectively, these findings illustrate a strong trend: researchers are successfully applying sophisticated algorithms to diverse data, from structured EHRs and images to unstructured audio and sensor data, to achieve rapid, accurate, and non-invasive COPD detection.

4.1.2. AI in COPD Outcome Prediction

Beyond diagnosis, a significant body of research focuses on leveraging AI to predict critical clinical outcomes, which is essential for proactive disease management and resource allocation. A primary target for prediction is AE-COPD, a major cause of hospitalization and disease progression. Researchers have employed a variety of models to tackle this, with Wang et al. [60] evaluating five different ML algorithms on electronic medical records. Their findings showed that an SVM achieved the best overall performance, with a high AUC of 0.90 and strong sensitivity (0.80) and specificity (0.83). Similarly, Kor et al. [59] tested four models on clinical data and found that a GBM provided the most robust results in predicting AECOPD, achieving an AUC of 0.832.
The use of non-traditional data for outcome prediction is a notable trend. Fernandez-Granero et al. [61] pioneered an approach using a decision tree forest classifier on daily home-recorded respiratory sounds. Their model was able to predict exacerbations with an average of 4.4 days’ notice, demonstrating an impressive accuracy of 87.8% and a high specificity of 95.9%. This highlights the potential for remote monitoring systems to provide early warnings for deteriorating patient conditions.
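A simplified sketch of how such daily risk estimates might be turned into an early-warning alert is given below; the probabilities and the two-consecutive-day rule are invented for illustration and are not the decision rule used by Fernandez-Granero et al.

```python
# Sketch: converting daily risk probabilities from a monitoring model into an
# early-warning alert, in the spirit of the remote-monitoring work above.
# The probabilities, threshold, and alert rule are illustrative assumptions.
import numpy as np

daily_risk = np.array([0.10, 0.12, 0.15, 0.35, 0.62, 0.71, 0.80])  # model output
THRESHOLD = 0.6

above = daily_risk >= THRESHOLD
# Alert when risk exceeds the threshold on two consecutive days.
alert_days = np.where(above[1:] & above[:-1])[0] + 1
print("first alert on day:", alert_days[0] if alert_days.size else None)
```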
AI has also been applied to forecast other crucial long-term outcomes. For instance, González et al. [46] utilized a CNN on chest CT data from the large COPDGene cohort to predict mortality, achieving a respectable AUROC of 0.72. Predicting hospital readmission is another key application, with Huang et al. [63] using a novel combination of text mining on patient discharge reports and a decision tree algorithm to identify patients at high risk of 30-day readmission, achieving an accuracy of 77.2%.
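The following sketch illustrates the general text-mining-plus-tree idea with TF-IDF features and a shallow decision tree; the example notes, labels, and pipeline are placeholders and do not reflect the actual data or model of Huang et al. [63].

```python
# Sketch: TF-IDF features from discharge summaries feeding a decision tree,
# in the spirit of Huang et al. [63]. The example texts and labels are
# placeholders, not the original discharge reports.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.tree import DecisionTreeClassifier

discharge_notes = [
    "frequent exacerbations, home oxygen, poor inhaler adherence",
    "mild symptoms, stable on bronchodilators, good follow-up",
    "recent ICU stay for hypercapnic respiratory failure",
    "no exacerbations this year, pulmonary rehabilitation completed",
]
readmitted_30d = [1, 0, 1, 0]   # illustrative labels

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                      DecisionTreeClassifier(max_depth=3, random_state=0))
model.fit(discharge_notes, readmitted_30d)
print(model.predict(["two exacerbations last month, oxygen dependent"]))
```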
Furthermore, AI models have been developed to assess disease severity, moving beyond simple prediction to a more nuanced understanding of a patient’s condition. Almeida et al. [65] developed a self-supervised DL model that could detect anomalies in paired inspiratory/expiratory CT scans, demonstrating its effectiveness with an AUC of 84.3% in the COPDGene dataset. Other studies, such as one by Dogu et al. [70], have focused on operational outcomes, using an integrated approach of statistical-based fuzzy cognitive maps and ANNs to predict the length of hospital stay with nearly 80% accuracy.
Collectively, these studies show that AI can effectively leverage diverse data sources, from structured clinical data and imaging to unstructured text and audio signals, to build powerful predictive tools for nearly every stage of the COPD management pathway.

4.1.3. AI in COPD Symptom Analysis

A more nascent but equally important application of AI is the analysis of specific COPD symptoms and the identification of distinct patient subgroups. This research moves beyond simple diagnosis towards a more nuanced, personalized understanding of the disease. AI has been effectively used for remote monitoring of symptoms, a critical component of modern chronic disease management. For instance, Yamane et al. [73] employed an RF model with data from a tri-axial accelerometer to successfully recognize daily activities that cause shortness of breath in patients. In a similar vein, Amado-Caballero et al. [149] utilized a CNN to analyze audio recordings, finding distinct cough patterns that could differentiate COPD patients from those with other respiratory conditions.
Beyond symptom tracking, AI is being used to quantify physiological markers and stratify patients. Weikert et al. [150] developed a 3D U-Net model to automatically quantify airway wall thickness from chest CT scans, achieving a high Dice score of 0.86, which is crucial for assessing disease progression. Furthermore, researchers are using unsupervised learning to uncover hidden patient phenotypes. Hirai et al. [74] applied the k-means clustering algorithm to patient data and successfully identified four distinct biological clusters, including a clear asthma-COPD overlap phenotype. This approach is vital for tailoring treatments to specific patient subgroups.
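A minimal sketch of this unsupervised phenotyping strategy, using k-means on standardized synthetic features, is shown below; the variables and the choice of four clusters are illustrative assumptions, not the configuration reported by Hirai et al.

```python
# Sketch: unsupervised phenotyping with k-means after feature standardization,
# analogous in spirit to Hirai et al. [74]. Variables and cluster count are
# illustrative; the data are synthetic.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)
# Synthetic patient features (e.g., FEV1 %pred plus three biomarker-like values).
X = np.column_stack([rng.normal(60, 15, 300),
                     rng.lognormal(5, 0.6, 300),
                     rng.lognormal(3, 0.5, 300),
                     rng.lognormal(4, 1.0, 300)])

X_scaled = StandardScaler().fit_transform(X)
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X_scaled)
print(np.bincount(labels))   # cluster sizes, to be characterized clinically
```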
Finally, researchers are exploring behavioral and predictive analytics, such as the work by Pant et al. [75], who found that a ResNN outperformed other models in predicting smoking status based on demographic and clinical variables. These studies collectively exemplify AI’s growing role in moving beyond diagnostics to enable continuous remote monitoring and advance personalized medicine by stratifying patients based on their unique clinical and behavioral characteristics.
Despite these promising results across diagnostic, prognostic, and symptom-analysis applications, the consistently high performance metrics reported in many studies may reflect methodological limitations rather than true model robustness, underscoring the need for careful critical appraisal of the literature.

4.1.4. Cross-Study Methodological Concerns

Although the studies included in this review frequently report high or even near-perfect performance metrics, these results must be interpreted with caution. A substantial proportion of the identified models were trained on small or single-center datasets, a scenario that increases the risk of overfitting and limits the reliability of reported accuracy. In several cases, methodological descriptions were insufficient to rule out forms of data leakage—such as improper train-test splits, patient overlap across folds, or preprocessing steps applied before dataset partitioning—which can artificially inflate performance well beyond what would be achievable in real-world practice.
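One common safeguard against the patient-overlap leakage described above is patient-level cross-validation, in which all samples from a given patient are kept in the same fold. The sketch below illustrates this with scikit-learn's GroupKFold on synthetic data; it is a generic example rather than a procedure drawn from any included study.

```python
# Sketch: patient-level cross-validation to avoid patient-overlap leakage.
# Each patient's recordings stay in a single fold. Data are synthetic.
import numpy as np
from sklearn.model_selection import GroupKFold
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(3)
X = rng.normal(size=(500, 10))            # e.g. features from repeated recordings
y = rng.integers(0, 2, 500)
patient_id = rng.integers(0, 80, 500)     # several samples per patient

aucs = []
for train_idx, test_idx in GroupKFold(n_splits=5).split(X, y, groups=patient_id):
    clf = LogisticRegression(max_iter=1000).fit(X[train_idx], y[train_idx])
    aucs.append(roc_auc_score(y[test_idx], clf.predict_proba(X[test_idx])[:, 1]))
print("patient-level CV AUROC:", np.round(np.mean(aucs), 3))
```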
In addition, many studies relied heavily on accuracy as the primary metric, despite its well-recognized limitations in imbalanced clinical datasets. Only a minority of papers reported calibration, class-wise performance, or clinically relevant measures such as decision-curve analysis. Very few studies compared AI models against established clinical baselines—such as spirometric thresholds, radiologist assessment, or GOLD categorization—making it difficult to contextualize the practical value of the proposed models.
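For illustration, the sketch below shows how reporting could move beyond raw accuracy on an imbalanced dataset by adding balanced accuracy, per-class metrics, and a simple calibration curve; the prevalence and probabilities are synthetic.

```python
# Sketch: reporting beyond raw accuracy on an imbalanced dataset:
# balanced accuracy, per-class metrics, and a calibration curve. Synthetic data.
import numpy as np
from sklearn.metrics import balanced_accuracy_score, classification_report
from sklearn.calibration import calibration_curve

y_true = np.array([0] * 90 + [1] * 10)               # 10% prevalence
y_prob = np.clip(np.random.default_rng(4).normal(0.2 + 0.5 * y_true, 0.15), 0, 1)
y_pred = (y_prob >= 0.5).astype(int)

print("balanced accuracy:", balanced_accuracy_score(y_true, y_pred))
print(classification_report(y_true, y_pred, zero_division=0))
frac_pos, mean_pred = calibration_curve(y_true, y_prob, n_bins=5)
print("calibration bins (mean predicted vs. observed):",
      list(zip(np.round(mean_pred, 2), np.round(frac_pos, 2))))
```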
The scarcity of prospective validation, limited reporting of demographic representativeness, and near-absence of fairness or explainability analyses further highlight that the field remains in an early stage of maturity. While the literature demonstrates remarkable innovation and technical sophistication, these methodological weaknesses collectively undermine the robustness and trustworthiness of the evidence base and caution against assuming that current AI systems are ready for clinical deployment.

4.2. Limitations of This Review

This review has several limitations that should be acknowledged. Although the number of eligible studies was substantial, the marked methodological heterogeneity across publications precluded quantitative synthesis or meta-analysis. Additionally, despite the use of comprehensive and systematic search strategies, some relevant studies may have been inadvertently omitted, particularly those with insufficient methodological detail in titles or abstracts. Review papers and preprints were excluded a priori to ensure methodological rigor; however, exploratory searches suggested that existing abstracts mentioning both COPD and artificial intelligence were scarce and of limited clinical relevance. Importantly, the methodological heterogeneity across studies—including inconsistent reporting of preprocessing pipelines, training procedures, and validation strategies—limits the reliability of cross-study comparisons and may partially explain the implausibly high accuracies observed in several publications.
A further and critical limitation, evident from the results of this review, is the pervasive lack of external validation for the AI models identified. Most included studies reported performance metrics derived from internal validation, such as train-test splits on a single institutional dataset. As noted in the results, very few studies [48,58] explicitly stated that their models were tested on independent, external cohorts from different institutions or patient populations. This absence of external validation severely limits the generalizability of the reported findings. Models may demonstrate high accuracy due to overfitting to a specific dataset’s characteristics (e.g., patient demographics, CT scanner protocols), and their performance in a broader, real-world clinical setting remains unproven and likely lower than reported.
The search strategy was restricted to English-language publications, which introduces a potential language bias and may limit the generalizability of findings. Furthermore, given the exploratory and heterogeneous nature of the included literature, a formal risk-of-bias assessment tool (e.g., PROBAST or QUADAS-2) was not applied. Similarly, no formal inter-rater reliability statistics were calculated for study screening or data extraction. Although discrepancies between reviewers were infrequent and resolved by consensus, this remains a methodological limitation.
Despite extending the search across multiple major databases, including PubMed, Scopus, IEEE Xplore, and Google Scholar, the number of eligible studies reporting robust, clinically validated AI applications remained low. This highlights both the limited availability of such evidence in COPD and the urgent need for larger, multi-center, and genotype-diverse prospective studies to strengthen the evidence base and support clinical translation.
Additionally, demographic variables such as age, sex distribution, disease severity, and geographic origin were reported inconsistently across studies, limiting meaningful comparison and hindering assessment of model fairness and generalizability. Studies focusing primarily on other illnesses were excluded unless COPD-specific results were provided, a necessary restriction to maintain relevance but one that may have overlooked potentially transferable AI methodological insights. These limitations underscore opportunities to improve AI learning: future models would benefit from training on diverse, well-annotated populations, integrating multimorbidity profiles common in COPD, and adopting explainable AI approaches that enhance transparency and clinical trustworthiness.

4.3. Future Research Directions

Moving forward, the field would benefit from systematic efforts to address the methodological weaknesses identified in the current literature, including improved reporting standards, explicit safeguards against data leakage, rigorous calibration and fairness analyses, and direct comparisons against clinical benchmarks.
To translate the promise of AI into tangible clinical benefits for COPD patients, future research must prioritize the transition from theoretical models to real-world application. The current reliance on retrospective, single-center data necessitates a shift towards prospective, multi-center clinical trials, including Randomized Controlled Trials (RCTs), to validate the efficacy and generalizability of these AI tools. Concurrently, the “black box” nature of many complex algorithms, which hinders clinical trust, must be addressed by embracing Explainable AI (XAI). By integrating techniques like SHapley Additive exPlanations (SHAP) and Local Interpretable Model-agnostic Explanations (LIME), researchers can make model predictions transparent and interpretable, which is crucial for their adoption by healthcare providers who need to understand the reasoning behind an AI-driven recommendation.
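As an illustrative sketch only, the snippet below applies SHAP to a generic tree-based model on synthetic data; it assumes the shap package is installed and is not drawn from any of the included studies.

```python
# Sketch: SHAP explanations for a fitted tree-based model (assumes the `shap`
# package). Feature attributions can then be reviewed by clinicians.
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(5)
X = rng.normal(size=(200, 5))                 # placeholder clinical features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)        # per-feature contributions
# shap.summary_plot(shap_values, X)           # global importance (API details vary by version)
```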
Furthermore, the foundation of future research must be strengthened through improved data practices and methodological rigor. The complexity of COPD requires moving beyond siloed information to develop holistic, multi-modal AI models that integrate diverse data streams, including imaging, EHRs, genomics, and patient-generated data from wearables. This will enable a more precise and personalized approach to patient care. To accelerate progress and ensure reproducibility across the field, a collaborative effort is needed to establish standardized reporting guidelines (such as CONSORT-AI and STARD-AI), create public benchmark datasets, and promote the adoption of FAIR (Findable, Accessible, Interoperable, and Reusable) data principles. These foundational steps are essential for building a more robust, collaborative, and impactful research ecosystem.

5. Conclusions

This scoping review shows that AI is rapidly evolving and holds strong potential across the clinical management of COPD, from improving diagnostic accuracy to supporting outcome prediction and enabling remote symptom monitoring. These applications collectively point toward more proactive, personalized, and data-driven care. However, several challenges remain, including limited external validation of existing models, heterogeneous study methodologies, and inconsistent demographic reporting, all of which constrain the generalizability and clinical readiness of current AI approaches.
Looking ahead, progress will depend on conducting prospective multi-center studies, adopting explainable and multimodal AI approaches, and increasing access to high-quality and diverse benchmark datasets. Integrating multimorbidity profiles, improving demographic representation, and strengthening regulatory guidance will also be essential for fostering ethical and trustworthy implementation. Addressing these challenges will help ensure that advances in AI translate into meaningful improvements for patients and healthcare systems.

Author Contributions

Conceptualization, M.C.-G. and A.F.-V.; methodology, A.P., M.C.-G. and A.F.-V.; investigation, A.P., M.C.-G., C.R.-R., M.T.-D., A.C.-C. and A.F.-V.; writing—original draft preparation, A.P.; writing—review and editing, M.C.-G., A.C.-C. and A.F.-V. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Data Availability Statement

The original contributions presented in this study are included in the article.

Acknowledgments

This paper is part of the research conducted in fulfillment of the requirements for the Ph.D. degree of Alberto Pinheira.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
%EM: Percent Emphysema
%LAV-950: Percentage of Low Attenuation Volume at −950 Hounsfield units
ACOS: Asthma-COPD Overlap Syndrome
AE-COPD: Acute Exacerbation of COPD
AI: Artificial Intelligence
ANN: Artificial Neural Network
ARD: Acute Respiratory Disease
ASVM: Adaptive Support Vector Machine
AUC/AUROC: Area Under the (Receiver Operating Characteristic) Curve
AUPRC: Area Under the Precision-Recall Curve
BiGRU: Bidirectional Gated Recurrent Units
BiLSTM: Bidirectional Long Short-Term Memory
BODE: Body-mass index, airflow Obstruction, Dyspnea, and Exercise capacity
CDSS: Clinical Decision Support Systems
CHF: Congestive Heart Failure
CNN: Convolutional Neural Network
COPD: Chronic Obstructive Pulmonary Disease
CRNN: Convolutional Recurrent Neural Network
CT: Computed Tomography
DL: Deep Learning
DLSP: Deep Learning-based Survival Prediction
DNN: Deep Neural Network
DT: Decision Tree
EHR: Electronic Health Records
FCN: Fully Convolutional Network
FEV1: Forced Expiratory Volume in 1 s
GB/GBC: Gradient Boosting/Gradient Boosting Classifier
GBDT: Gradient Boosting Decision Tree
GBM: Gradient Boosting Machine
GLM: Generalized Linear Model
GMM: Gaussian Mixture Model
GOLD: Global Initiative for Chronic Obstructive Lung Disease
GPT-4: Generative Pre-trained Transformer 4
Grad-CAM: Gradient-weighted Class Activation Mapping
HFL-COPRAS: Hesitant Fuzzy Linguistic COmplex PRoportional ASsessment
ICBHI: International Conference on Biomedical and Health Informatics dataset
KNN/kNN: K-Nearest Neighbors
LACM: Light Attention Connected Module
LIME: Local Interpretable Model-agnostic Explanations
LLM: Large Language Model
LR: Logistic Regression
LS-SVM: Least-Squares Support Vector Machine
LSTM: Long Short-Term Memory
MAE: Mean Absolute Error
MCC: Matthews Correlation Coefficient
MFCC: Mel-Frequency Cepstral Coefficients
ML: Machine Learning
MLNN: Multilayer Neural Networks
MLP: Multilayer Perceptron
MLR: Multivariable Logistic Regression
MSE: Mean Squared Error
NB: Naive Bayes
NGBoost: Natural Gradient Boosting
NLP: Natural Language Processing
NPV: Negative Predictive Value
PCA: Principal Component Analysis
PPG: Photoplethysmography
PPV: Positive Predictive Value
PRM: Parametric Response Mapping
R2: R-squared
ResNN: Residual Neural Network
RF: Random Forest
RFC: Random Forest Classifier
RNN: Recurrent Neural Network
SBFCM: Subtractive-Based Fuzzy C-Means
SGD: Stochastic Gradient Descent
SHAP: SHapley Additive exPlanations
STAIN: Spatio-Temporal Artificial Intelligence Network
SVE: State Vector Estimation
SVM: Support Vector Machine
TD AUC: Time-Dependent Area Under the Curve
TinyML: Tiny Machine Learning
TPR: True Positive Rate
XAI: Explainable Artificial Intelligence
XGBoost: Extreme Gradient Boosting

