Artificial Intelligence Applications in Public Health: 2nd Edition

Special Issue Editors


Dr. Dmytro Chumachenko
Guest Editor
1. Mathematical Modeling and Artificial Intelligence, National Aerospace University “Kharkiv Aviation Institute”, 61101 Kharkiv, Ukraine
2. Ubiquitous Health Technologies Lab, University of Waterloo, Waterloo, ON N2L 3G5, Canada
3. Balsillie School of International Affairs, Waterloo, ON N2L 6C2, Canada
Interests: artificial intelligence; machine learning; epidemic modeling; infectious disease simulation

Prof. Dr. Sergiy Yakovlev
Guest Editor
Institute of Information Technology, Lodz University of Technology, 90-924 Lodz, Poland
Interests: mathematical modeling; optimization of complex systems; combinatorial optimization; packing and covering problems; computational intelligence

Special Issue Information

Dear Colleagues,

We are pleased to announce a Special Issue entitled “Artificial Intelligence Applications in Public Health: 2nd Edition”. This Special Issue aims to gather research studies across various disciplines to shed light on the cutting-edge uses of computational techniques and artificial intelligence (AI) in the field of public health.

This Special Issue emphasizes AI’s transformative potential in addressing critical challenges in public health, from disease surveillance, outbreak prediction, and health systems optimization to personalized health interventions. The rapidly expanding capabilities of AI and computation make them increasingly indispensable in public health decision-making, enhancing both efficiency and effectiveness.

The articles collected in this Special Issue will cover a broad spectrum of topics, including, but not limited to, AI-enhanced predictive modeling for disease spread, big data analytics for health trend forecasting, machine learning for patient stratification, and deep learning for image-based diagnostics in public health settings. With this Special Issue, we aim to provide a comprehensive overview of the current state of the art in this field and to inspire innovative future research.

This Special Issue is a call to all researchers, data scientists, public health experts, and policymakers to submit their original research, reviews, case studies, and thought-provoking perspectives that demonstrate the novel uses and potential of AI and computation in public health.

Dr. Dmytro Chumachenko
Prof. Dr. Sergiy Yakovlev
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Computation is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1800 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • artificial intelligence
  • public health
  • computation
  • disease surveillance
  • predictive modeling
  • health systems optimization
  • public health informatics
  • data-driven medicine

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad-scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • Reprint: MDPI Books provides the opportunity to republish successful Special Issues in book format, both online and in print.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (6 papers)


Research

21 pages, 1681 KiB  
Article
Scalable Clustering of Complex ECG Health Data: Big Data Clustering Analysis with UMAP and HDBSCAN
by Vladislav Kaverinskiy, Illya Chaikovsky, Anton Mnevets, Tatiana Ryzhenko, Mykhailo Bocharov and Kyrylo Malakhov
Computation 2025, 13(6), 144; https://doi.org/10.3390/computation13060144 - 10 Jun 2025
Abstract
This study explores the potential of unsupervised machine learning algorithms to identify latent cardiac risk profiles by analyzing ECG-derived parameters from two general groups: clinically healthy individuals (Norm dataset, n = 14,863) and patients hospitalized with heart failure (patients’ dataset, n = 8220). Each dataset includes 153 ECG and heart rate variability (HRV) features, including both conventional and novel diagnostic parameters obtained using a Universal Scoring System. The study aims to apply unsupervised clustering algorithms to ECG data to detect latent risk profiles related to heart failure, based on distinctive ECG features. The focus is on identifying patterns that correlate with cardiac health risks, potentially aiding in early detection and personalized care. We applied a combination of Uniform Manifold Approximation and Projection (UMAP) for dimensionality reduction and Hierarchical Density-Based Spatial Clustering (HDBSCAN) for unsupervised clustering. Models trained on one dataset were applied to the other to explore structural differences and detect latent predispositions to cardiac disorders. Both Euclidean and Manhattan distance metrics were evaluated. Features such as the QRS angle in the frontal plane, Detrended Fluctuation Analysis (DFA), High-Frequency power (HF), and others were analyzed for their ability to distinguish different patient clusters. In the Norm dataset, Euclidean distance clustering identified two main clusters, with Cluster 0 indicating a lower risk of heart failure. Key discriminative features included the “ALPHA QRS ANGLE IN THE FRONTAL PLANE” and DFA. In the patients’ dataset, three clusters emerged, with Cluster 1 identified as potentially high-risk. Manhattan distance clustering provided additional insights, highlighting features like “ST DISLOCATION” and “T AMP NORMALIZED” as significant for distinguishing between clusters. The analysis revealed distinct clusters that correspond to varying levels of heart failure risk. In the Norm dataset, two main clusters were identified, with one associated with a lower risk profile. In the patients’ dataset, a three-cluster structure emerged, with one subgroup displaying markedly elevated risk indicators such as high-frequency power (HF) and altered QRS angle values. Cross-dataset clustering confirmed consistent feature shifts between groups. These findings demonstrate the feasibility of ECG-based unsupervised clustering for early risk stratification. The results offer a non-invasive tool for personalized cardiac monitoring and merit further clinical validation. These findings emphasize the potential for clustering techniques to contribute to early heart failure detection and personalized monitoring. Future research should aim to validate these results in other populations and integrate these methods into clinical decision-making frameworks.
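
As an illustration of the general workflow described in this abstract, the minimal sketch below chains UMAP dimensionality reduction with HDBSCAN clustering on synthetic stand-in data; the feature matrix, parameter values, and cluster sizes are illustrative assumptions, not the authors' configuration.

```python
# Illustrative sketch: UMAP embedding followed by HDBSCAN clustering,
# as a stand-in for the ECG-feature pipeline described above.
# Requires: pip install umap-learn hdbscan scikit-learn
import numpy as np
from sklearn.preprocessing import StandardScaler
import umap
import hdbscan

rng = np.random.default_rng(42)
# Placeholder for the 153 ECG/HRV features per subject (synthetic data here).
X = rng.normal(size=(1000, 153))
X_scaled = StandardScaler().fit_transform(X)

# Non-linear embedding; metric can be "euclidean" or "manhattan" as in the study.
embedding = umap.UMAP(n_neighbors=30, min_dist=0.0, n_components=2,
                      metric="euclidean", random_state=42).fit_transform(X_scaled)

# Density-based clustering on the low-dimensional embedding.
labels = hdbscan.HDBSCAN(min_cluster_size=50).fit_predict(embedding)
print("clusters found:", set(labels) - {-1}, "| noise points:", int(np.sum(labels == -1)))
```
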
(This article belongs to the Special Issue Artificial Intelligence Applications in Public Health: 2nd Edition)

20 pages, 21534 KiB  
Article
Smoothing Techniques for Improving COVID-19 Time Series Forecasting Across Countries
by Uliana Zbezhkhovska and Dmytro Chumachenko
Computation 2025, 13(6), 136; https://doi.org/10.3390/computation13060136 - 3 Jun 2025
Abstract
Accurate forecasting of COVID-19 case numbers is critical for timely and effective public health interventions. However, epidemiological data’s irregular and noisy nature often undermines the predictive performance. This study examines the influence of four smoothing techniques—the rolling mean, the exponentially weighted moving average, a Kalman filter, and seasonal–trend decomposition using Loess (STL)—on the forecasting accuracy of four models: LSTM, the Temporal Fusion Transformer (TFT), XGBoost, and LightGBM. Weekly case data from Ukraine, Bulgaria, Slovenia, and Greece were used to assess the models’ performance over short- (3-month) and medium-term (6-month) horizons. The results demonstrate that smoothing enhanced the models’ stability, particularly for neural architectures, and the model selection emerged as the primary driver of predictive accuracy. The LSTM and TFT models, when paired with STL or the rolling mean, outperformed the others in their short-term forecasts, while XGBoost exhibited greater robustness over longer horizons in selected countries. An ANOVA confirmed the statistically significant influence of the model type on the MAPE (p = 0.008), whereas the smoothing method alone showed no significant effect. These findings offer practical guidance for designing context-specific forecasting pipelines adapted to epidemic dynamics and variations in data quality.
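
The smoothing step discussed above can be prototyped in a few lines. The hedged sketch below applies a rolling mean, an exponentially weighted moving average, and STL decomposition to a synthetic weekly case series; the window and period choices are assumptions rather than the paper's settings, and the Kalman filter variant is omitted.

```python
# Minimal sketch of three smoothing techniques on a synthetic weekly case series.
import numpy as np
import pandas as pd
from statsmodels.tsa.seasonal import STL

rng = np.random.default_rng(0)
weeks = pd.date_range("2021-01-03", periods=156, freq="W")
cases = pd.Series(1000 + 300 * np.sin(np.arange(156) / 8) + rng.normal(0, 80, 156),
                  index=weeks, name="weekly_cases")

rolling = cases.rolling(window=4, min_periods=1).mean()     # rolling mean
ewma = cases.ewm(span=4, adjust=False).mean()               # exponentially weighted moving average
stl_trend = STL(cases, period=52, robust=True).fit().trend  # STL trend component

smoothed = pd.DataFrame({"raw": cases, "rolling": rolling, "ewma": ewma, "stl": stl_trend})
print(smoothed.tail())
```
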
(This article belongs to the Special Issue Artificial Intelligence Applications in Public Health: 2nd Edition)

15 pages, 1375 KiB  
Article
How Re-Infections and Newborns Can Impact Visible and Hidden Epidemic Dynamics?
by Igor Nesteruk
Computation 2025, 13(5), 113; https://doi.org/10.3390/computation13050113 - 9 May 2025
Abstract
Mathematical modeling makes it possible to take both registered and hidden infections into account, to make accurate predictions of epidemic dynamics, and to develop recommendations that can reduce the negative impact on public health and the economy. A model for visible and hidden epidemic dynamics (published by the author in February 2025) has been generalized to account for the effects of re-infection and newborns. An analysis of the equilibrium points, examples of numerical solutions, and comparisons with the dynamics of real epidemics are provided. A stable quasi-equilibrium for the particular case of almost completely hidden epidemics was also revealed. Numerical results and comparisons with the COVID-19 epidemic dynamics in Austria and South Korea showed that re-infections, newborns, and hidden cases make epidemics endless. Newborns can cause repeated epidemic waves even without re-infections. In particular, the next epidemic peak of pertussis in England is expected to occur in 2031. With the use of effective algorithms for parameter identification, the proposed approach can ensure effective predictions of visible and hidden numbers of cases and infectious and removed patients.
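
The abstract's qualitative point, that newborns and re-infections can sustain transmission indefinitely, can be reproduced with a generic SIRS-type model. The sketch below is such a generic model solved with SciPy; it is not the author's visible/hidden formulation, and all rate constants are illustrative.

```python
# Generic SIRS model with births/deaths (mu) and waning immunity (omega):
# both mechanisms prevent the infectious fraction from vanishing.
from scipy.integrate import solve_ivp

beta, gamma, omega, mu = 0.3, 0.1, 1 / 365, 1 / (70 * 365)  # per-day rates (illustrative)

def sirs(t, y):
    s, i, r = y
    ds = mu - beta * s * i + omega * r - mu * s
    di = beta * s * i - gamma * i - mu * i
    dr = gamma * i - omega * r - mu * r
    return [ds, di, dr]

sol = solve_ivp(sirs, (0, 10 * 365), [0.99, 0.01, 0.0], dense_output=True)
print("endemic infectious fraction after 10 years:", sol.y[1, -1])
```
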
(This article belongs to the Special Issue Artificial Intelligence Applications in Public Health: 2nd Edition)

17 pages, 1513 KiB  
Article
Cascade-Based Input-Doubling Classifier for Predicting Survival in Allogeneic Bone Marrow Transplants: Small Data Case
by Ivan Izonin, Roman Tkachenko, Nazarii Hovdysh, Oleh Berezsky, Kyrylo Yemets and Ivan Tsmots
Computation 2025, 13(4), 80; https://doi.org/10.3390/computation13040080 - 21 Mar 2025
Abstract
In the field of transplantology, where medical decisions are heavily dependent on complex data analysis, the challenge of small data has become increasingly prominent. Transplantology, which focuses on the transplantation of organs and tissues, requires exceptional accuracy and precision in predicting outcomes, assessing risks, and tailoring treatment plans. However, the inherent limitations of small datasets present significant obstacles. This paper introduces an advanced input-doubling classifier designed to improve survival predictions for allogeneic bone marrow transplants. The approach utilizes two artificial intelligence tools: the first, a Probabilistic Neural Network, generates output signals that expand the independent attributes of an augmented dataset, while the second machine learning algorithm performs the final classification. This method, based on the cascading principle, facilitates the development of novel algorithms for preparing and applying the enhanced input-doubling technique to classification tasks. The proposed method was tested on a small dataset within transplantology, focusing on binary classification. Optimal parameters for the method were identified using the Dual Annealing algorithm. Comparative analysis of the improved method against several existing approaches revealed a substantial improvement in accuracy across various performance metrics, underscoring its practical benefits.
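
A simplified stand-in for the cascading idea is sketched below: a first-stage model's class probabilities are appended to the original features before a second-stage classifier makes the final decision. It substitutes off-the-shelf scikit-learn components (GaussianNB in place of a Probabilistic Neural Network, a random forest as the final classifier) and omits Dual Annealing tuning, so it should be read as an outline of the principle rather than the authors' algorithm.

```python
# Cascade sketch: stage-1 probabilities augment the features seen by stage 2.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_predict, train_test_split
from sklearn.naive_bayes import GaussianNB          # crude proxy for a PNN
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score

X, y = make_classification(n_samples=180, n_features=20, random_state=1)  # small-data regime
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1, stratify=y)

# Stage 1: out-of-fold probabilities on the training set avoid target leakage.
stage1 = GaussianNB()
p_tr = cross_val_predict(stage1, X_tr, y_tr, cv=5, method="predict_proba")[:, 1]
stage1.fit(X_tr, y_tr)
p_te = stage1.predict_proba(X_te)[:, 1]

# Stage 2: final classifier sees original features plus the stage-1 signal.
X_tr_aug = np.column_stack([X_tr, p_tr])
X_te_aug = np.column_stack([X_te, p_te])
stage2 = RandomForestClassifier(random_state=1).fit(X_tr_aug, y_tr)
print("cascade F1:", round(f1_score(y_te, stage2.predict(X_te_aug)), 3))
```
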
(This article belongs to the Special Issue Artificial Intelligence Applications in Public Health: 2nd Edition)

23 pages, 466 KiB  
Article
COVID-19 Data Analysis: The Impact of Missing Data Imputation on Supervised Learning Model Performance
by Jorge Daniel Mello-Román and Adrián Martínez-Amarilla
Computation 2025, 13(3), 70; https://doi.org/10.3390/computation13030070 - 8 Mar 2025
Abstract
The global COVID-19 pandemic has generated extensive datasets, providing opportunities to apply machine learning for diagnostic purposes. This study evaluates the performance of five supervised learning models—Random Forests (RFs), Artificial Neural Networks (ANNs), Support Vector Machines (SVMs), Logistic Regression (LR), and Decision Trees (DTs)—on a hospital-based dataset from the Concepción Department in Paraguay. To address missing data, four imputation methods (Predictive Mean Matching via MICE, RF-based imputation, K-Nearest Neighbor, and XGBoost-based imputation) were tested. Model performance was compared using metrics such as accuracy, AUC, F1-score, and MCC across five levels of missingness. Overall, RF consistently achieved high accuracy and AUC at the highest missingness level, underscoring its robustness. In contrast, SVM often exhibited a trade-off between specificity and sensitivity. ANN and DT showed moderate resilience, yet were more prone to performance shifts under certain imputation approaches. These findings highlight RF’s adaptability to different imputation strategies, as well as the importance of selecting methods that minimize sensitivity–specificity trade-offs. By comparing multiple imputation techniques and supervised models, this study provides practical insights for handling missing medical data in resource-constrained settings and underscores the value of robust ensemble methods for reliable COVID-19 diagnostics.
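
For readers who want to experiment with comparable imputation strategies, the sketch below applies a KNN imputer and a MICE-style iterative imputer with a random-forest estimator to synthetic data with roughly 20% missingness; the study's exact PMM and XGBoost-based imputers are not reproduced here, and the missingness level is an assumption.

```python
# Two imputation strategies analogous to those compared above, on synthetic data.
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import KNNImputer, IterativeImputer
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(7)
X = rng.normal(size=(500, 8))
mask = rng.random(X.shape) < 0.20          # ~20% missingness, illustrative
X_missing = X.copy()
X_missing[mask] = np.nan

X_knn = KNNImputer(n_neighbors=5).fit_transform(X_missing)
X_mice_rf = IterativeImputer(estimator=RandomForestRegressor(n_estimators=50, random_state=7),
                             max_iter=5, random_state=7).fit_transform(X_missing)

print("KNN imputation RMSE:        ", np.sqrt(np.mean((X_knn[mask] - X[mask]) ** 2)))
print("RF-iterative imputation RMSE:", np.sqrt(np.mean((X_mice_rf[mask] - X[mask]) ** 2)))
```
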
(This article belongs to the Special Issue Artificial Intelligence Applications in Public Health: 2nd Edition)

18 pages, 2813 KiB  
Article
Multimodal Data Fusion for Depression Detection Approach
by Mariia Nykoniuk, Oleh Basystiuk, Nataliya Shakhovska and Nataliia Melnykova
Computation 2025, 13(1), 9; https://doi.org/10.3390/computation13010009 - 2 Jan 2025
Abstract
Depression is one of the most common mental health disorders in the world, affecting millions of people. Early detection of depression is crucial for effective medical intervention. Multimodal networks can greatly assist in the detection of depression, especially in situations wherein patients are not always aware of or able to express their symptoms. By analyzing text and audio data, such networks are able to automatically identify patterns in speech and behavior that indicate a depressive state. In this study, we propose two multimodal information fusion networks: early and late fusion. These networks were developed using convolutional neural network (CNN) layers to learn local patterns, a bidirectional LSTM (Bi-LSTM) to process sequences, and a self-attention mechanism to improve focus on key parts of the data. The DAIC-WOZ and EDAIC-WOZ datasets were used for the experiments. The experiments compared the precision, recall, F1-score, and accuracy metrics for the cases of using early and late multimodal data fusion and found that the early information fusion multimodal network achieved higher classification accuracy results. On the test dataset, this network achieved an F1-score of 0.79 and an overall classification accuracy of 0.86, indicating its effectiveness in detecting depression.
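
An early-fusion architecture of the kind described here can be outlined in Keras as follows; the input shapes, layer sizes, and head counts are assumptions for illustration and do not reproduce the authors' DAIC-WOZ configuration.

```python
# Early-fusion sketch: audio and text feature sequences are concatenated on the
# feature axis, then passed through Conv1D, Bi-LSTM, and self-attention layers.
from tensorflow.keras import layers, Model

T, AUDIO_DIM, TEXT_DIM = 100, 40, 64   # illustrative sequence length / feature sizes

audio_in = layers.Input(shape=(T, AUDIO_DIM), name="audio_features")
text_in = layers.Input(shape=(T, TEXT_DIM), name="text_features")

x = layers.Concatenate(axis=-1)([audio_in, text_in])            # early fusion
x = layers.Conv1D(64, kernel_size=3, padding="same", activation="relu")(x)
x = layers.Bidirectional(layers.LSTM(64, return_sequences=True))(x)
x = layers.MultiHeadAttention(num_heads=4, key_dim=32)(x, x)    # self-attention
x = layers.GlobalAveragePooling1D()(x)
out = layers.Dense(1, activation="sigmoid", name="depression_prob")(x)

model = Model([audio_in, text_in], out)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```
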
(This article belongs to the Special Issue Artificial Intelligence Applications in Public Health: 2nd Edition)
