Artificial Intelligence Applications in Public Health: 2nd Edition

A special issue of Computation (ISSN 2079-3197).

Deadline for manuscript submissions: closed (31 October 2025) | Viewed by 23548

Special Issue Editors


Guest Editor: Dr. Dmytro Chumachenko
1. Mathematical Modeling and Artificial Intelligence, National Aerospace University “Kharkiv Aviation Institute”, 61101 Kharkiv, Ukraine
2. Ubiquitous Health Technologies Lab, University of Waterloo, Waterloo, ON N2L 3G5, Canada
Interests: artificial intelligence; machine learning; epidemic modeling; infectious disease simulation

Guest Editor: Prof. Dr. Sergiy Yakovlev
Institute of Information Technology, Lodz University of Technology, 90-924 Lodz, Poland
Interests: mathematical modeling; optimization of complex systems; combinatorial optimization; packing and covering problems; computational intelligence

Special Issue Information

Dear Colleagues,

We are pleased to announce a Special Issue entitled “Artificial Intelligence Applications in Public Health: 2nd Edition”. This Special Issue aims to gather research studies across various disciplines to shed light on the cutting-edge uses of computational techniques and artificial intelligence (AI) in the field of public health.

This Special Issue emphasizes AI’s transformative potential for managing critical challenges in public health, from disease surveillance, outbreak prediction, and health system optimization to personalized health interventions. The rapidly expanding capabilities of AI and computation make them increasingly indispensable in public health decision-making, enhancing both efficiency and effectiveness.

The articles collected in this Special Issue will cover a broad spectrum of topics, including, but not limited to, AI-enhanced predictive modeling for disease spread, big data analytics for health trend forecasting, machine learning for patient stratification, and deep learning for image-based diagnostics in public health settings. With this Special Issue, we aim to provide a comprehensive overview of the current state of the art in this field and to inspire innovative future research.

This Special Issue is a call to all researchers, data scientists, public health experts, and policymakers to submit original research, reviews, case studies, and thought-provoking perspectives that demonstrate the novel uses and potential of AI and computation in public health.

Dr. Dmytro Chumachenko
Prof. Dr. Sergiy Yakovlev
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website. Once registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers are published continuously in the journal (as soon as accepted) and listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 250 words) can be sent to the Editorial Office for assessment.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Computation is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1800 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • artificial intelligence
  • public health
  • computation
  • disease surveillance
  • predictive modeling
  • health systems optimization
  • public health informatics
  • data-driven medicine

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • Reprint: MDPI Books provides the opportunity to republish successful Special Issues in book format, both online and in print.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (9 papers)


Research

20 pages, 1272 KB  
Article
Impact of Scaling Classic Component on Performance of Hybrid Multi-Backbone Quantum–Classic Neural Networks for Medical Applications
by Arsenii Khmelnytskyi, Yuri Gordienko and Sergii Stirenko
Computation 2025, 13(12), 278; https://doi.org/10.3390/computation13120278 - 1 Dec 2025
Viewed by 221
Abstract
Purpose: While hybrid quantum–classical neural networks (HNNs) are a promising avenue for quantum advantage, the critical influence of the classical backbone architecture on their performance remains poorly understood. This study investigates the role of lightweight convolutional neural network architectures, focusing on LCNet, in determining the stability, generalization, and effectiveness of hybrid models augmented with quantum layers for medical applications. The objective is to clarify the architectural compatibility between quantum and classical components and provide guidelines for backbone selection in hybrid designs. Methods: We constructed HNNs by integrating a four-qubit quantum circuit (with trainable rotations) into scaled versions of LCNet (050, 075, 100, 150, 200). These models were rigorously evaluated on CIFAR-10 and MedMNIST using stratified 5-fold cross-validation. Performance was assessed with accuracy, macro- and micro-averaged area under the ROC curve (AUC), per-class accuracy, and out-of-fold (OoF) predictions to ensure unbiased generalization. In addition, training dynamics, confusion matrices, and performance stability across folds were analyzed to capture both predictive accuracy and robustness. Results: The experiments revealed a strong dependence of hybrid network performance on both backbone architecture and model scale. Across all tests, LCNet-based hybrids achieved the most consistent benefits, particularly at compact and medium configurations. From LCNet050 to LCNet100, hybrid models maintained high macro-AUC values exceeding 0.95 and delivered higher mean accuracies with lower variance across folds, confirming enhanced stability and generalization through quantum integration. On the DermaMNIST dataset, these hybrids achieved accuracy gains of up to seven percentage points and improved AUC by more than three points, demonstrating their robustness in imbalanced medical settings. However, as backbone complexity increased (LCNet150 and LCNet200), the classical architectures regained superiority, indicating that the advantages of quantum layers diminish with scale. The most consistent gains were observed at smaller and medium LCNet scales, where hybridization improved accuracy and stability across folds. This divergence indicates that hybrid networks do not necessarily follow the “bigger is better” paradigm of classical deep learning. Per-class analysis further showed that hybrids improved recognition in challenging categories, narrowing the gap between easy and difficult classes. Conclusions: The study demonstrates that the performance and stability of hybrid quantum–classical neural networks are fundamentally determined by the characteristics of their classical backbones. Across extensive experiments on CIFAR-10 and DermaMNIST, LCNet-based hybrids consistently outperformed or matched their classical counterparts at smaller and medium scales, achieving higher accuracy and AUC along with notably reduced variability across folds. These improvements highlight the role of quantum layers as implicit regularizers that enhance learning stability and generalization, particularly in data-limited or imbalanced medical settings. However, the observed benefits diminished with increasing backbone complexity, as larger classical models regained superiority in both accuracy and convergence reliability. This indicates that hybrid architectures do not follow the conventional “larger-is-better” paradigm of classical deep learning.
Overall, the results establish that architectural compatibility and model scale are decisive factors for effective quantum–classical integration. Lightweight backbones such as LCNet offer a robust foundation for realizing the advantages of hybridization in practical, resource-constrained medical applications, paving the way for future studies on scalable, hardware-efficient, and clinically reliable hybrid neural networks.
(This article belongs to the Special Issue Artificial Intelligence Applications in Public Health: 2nd Edition)
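
To make the hybrid design concrete, the sketch below shows a minimal quantum–classical model in the spirit of the paper, assuming PennyLane and PyTorch: a four-qubit circuit with trainable rotations is inserted between a classical feature extractor and the classification head. The small stand-in CNN, the layer sizes, and the DermaMNIST-like input shape are illustrative assumptions, not the authors' LCNet configuration.

```python
# Hedged sketch, not the authors' implementation: a four-qubit circuit with
# trainable rotations grafted onto a small stand-in CNN backbone.
import pennylane as qml
import torch
import torch.nn as nn

n_qubits = 4
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def circuit(inputs, weights):
    qml.AngleEmbedding(inputs, wires=range(n_qubits))         # encode features
    qml.BasicEntanglerLayers(weights, wires=range(n_qubits))  # trainable rotations
    return [qml.expval(qml.PauliZ(w)) for w in range(n_qubits)]

class HybridNet(nn.Module):
    def __init__(self, n_classes=7):  # e.g. the 7 DermaMNIST classes
        super().__init__()
        self.backbone = nn.Sequential(  # stand-in for a scaled LCNet
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(16, n_qubits))
        self.quantum = qml.qnn.TorchLayer(circuit, {"weights": (2, n_qubits)})
        self.head = nn.Linear(n_qubits, n_classes)

    def forward(self, x):
        return self.head(self.quantum(self.backbone(x)))

logits = HybridNet()(torch.randn(8, 3, 28, 28))  # a DermaMNIST-sized batch
print(logits.shape)  # torch.Size([8, 7])
```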

17 pages, 1112 KB  
Article
Management of Severe COVID-19 Diagnosis Using Machine Learning
by Larysa Sydorchuk, Maksym Sokolenko, Miroslav Škoda, Daniel Lajcin, Yaroslav Vyklyuk, Ruslan Sydorchuk, Alina Sokolenko and Dmytro Martjanov
Computation 2025, 13(10), 238; https://doi.org/10.3390/computation13100238 - 9 Oct 2025
Viewed by 567
Abstract
COVID-19 remains a global health challenge, with severe cases often leading to complications and fatalities. The objective of this study was to assess supervised machine learning algorithms for predicting severe COVID-19 based on demographic, clinical, biochemical, and genetic variables, with the aim of identifying the most informative prognostic markers. For machine learning (ML) analysis, we utilized a dataset comprising 226 observations with 68 clinical, biochemical, and genetic features collected from 226 patients with confirmed COVID-19 (30 with mild, 54 with moderate, and 142 with severe disease). The target variable was disease severity (mild, moderate, severe). The feature set included demographic variables (age, sex), genetic markers (single-nucleotide polymorphisms (SNPs) in FGB (rs1800790), NOS3 (rs2070744), and TMPRSS2 (rs12329760)), biochemical indicators (IL-6, endothelin-1, D-dimer, fibrinogen, among others), and clinical parameters (blood pressure, body mass index, comorbidities). To identify the most effective predictive models for COVID-19 severity, we systematically evaluated multiple supervised learning algorithms, including logistic regression, k-nearest neighbors, decision trees, random forest, gradient boosting, bagging, naïve Bayes, and support vector machines. Model performance was assessed using accuracy and the area under the receiver operating characteristic curve (AUC-ROC). Among the predictors, IL-6, presence of depression/pneumonia, LDL cholesterol, AST, platelet count, lymphocyte count, and ALT showed the strongest correlations with severity. The highest predictive accuracy, with negligible error rates, was achieved by ensemble-based models such as ExtraTreesClassifier, HistGradientBoostingClassifier, BaggingClassifier, and GradientBoostingClassifier. Notably, decision tree models demonstrated high classification precision at terminal nodes, many of which yielded a 100% probability for a specific severity class.
(This article belongs to the Special Issue Artificial Intelligence Applications in Public Health: 2nd Edition)
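
As a rough illustration of this kind of multi-model comparison, the sketch below benchmarks several of the named ensemble classifiers with scikit-learn, assuming synthetic data in place of the study's 226-patient clinical dataset.

```python
# Hedged sketch of the model comparison on synthetic stand-in data.
from sklearn.datasets import make_classification
from sklearn.ensemble import (BaggingClassifier, ExtraTreesClassifier,
                              GradientBoostingClassifier,
                              HistGradientBoostingClassifier)
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# 226 observations x 68 features, 3 severity classes, as in the abstract.
X, y = make_classification(n_samples=226, n_features=68, n_informative=12,
                           n_classes=3, n_clusters_per_class=1, random_state=0)

models = {
    "LogisticRegression": LogisticRegression(max_iter=2000),
    "ExtraTrees": ExtraTreesClassifier(random_state=0),
    "HistGradientBoosting": HistGradientBoostingClassifier(random_state=0),
    "Bagging": BaggingClassifier(random_state=0),
    "GradientBoosting": GradientBoostingClassifier(random_state=0),
}
for name, model in models.items():
    acc = cross_val_score(model, X, y, cv=5, scoring="accuracy").mean()
    # One-vs-rest averaging handles the three-class AUC-ROC.
    auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc_ovr").mean()
    print(f"{name:22s} accuracy={acc:.3f}  AUC(ovr)={auc:.3f}")
```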

15 pages, 755 KB  
Article
Successful Management of Public Health Projects Driven by AI in a BANI Environment
by Sergiy Bushuyev, Natalia Bushuyeva, Ivan Nekrasov and Igor Chumachenko
Computation 2025, 13(7), 160; https://doi.org/10.3390/computation13070160 - 4 Jul 2025
Viewed by 1161
Abstract
The management of public health projects in a BANI (brittle, anxious, non-linear, incomprehensible) environment, exemplified by the ongoing war in Ukraine, presents unprecedented challenges due to fragile systems, heightened uncertainty, and complex socio-political dynamics. This study proposes an AI-driven framework to enhance the resilience and effectiveness of public health interventions under such conditions. By integrating a coupled SEIR–Infodemic–Panicdemic Model with war-specific factors, we simulate the interplay of infectious disease spread, misinformation dissemination, and panic dynamics over 1500 days in a Ukrainian city (Kharkiv). The model incorporates time-varying parameters to account for population displacement, healthcare disruptions, and periodic war events, reflecting the evolving conflict context. Sensitivity and risk–opportunity analyses reveal that disease transmission, misinformation, and infrastructure damage significantly exacerbate epidemic peaks, while AI-enabled interventions, such as fact-checking, mental health support, and infrastructure recovery, offer substantial mitigation potential. Qualitative assessments identify technical, organisational, ethical, regulatory, and military risks, alongside opportunities for predictive analytics, automation, and equitable healthcare access. Quantitative simulations demonstrate that risks, like increased displacement, can amplify infectious peaks by up to 28.3%, whereas opportunities, like enhanced fact-checking, can reduce misinformation by 18.2%. These findings provide a roadmap for leveraging AI to navigate BANI environments, offering actionable insights for public health practitioners in Ukraine and other crisis settings. The study underscores AI’s transformative role in fostering adaptive, data-driven strategies to achieve sustainable health outcomes amidst volatility and uncertainty.
(This article belongs to the Special Issue Artificial Intelligence Applications in Public Health: 2nd Edition)
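
The coupled model itself is not reproduced here, but a plain SEIR core, the building block the paper extends, can be sketched with SciPy; the rates below are assumed values, and the infodemic, panicdemic, and war-specific terms are omitted.

```python
# Hedged sketch: a plain SEIR core with assumed rates; the paper's coupled
# infodemic/panicdemic compartments and time-varying terms are omitted.
import numpy as np
from scipy.integrate import solve_ivp

N = 1_000_000                                # population size (assumed)
beta, sigma, gamma = 0.35, 1 / 5.2, 1 / 10   # assumed rates

def seir(t, y):
    S, E, I, R = y
    dS = -beta * S * I / N
    dE = beta * S * I / N - sigma * E
    dI = sigma * E - gamma * I
    dR = gamma * I
    return [dS, dE, dI, dR]

sol = solve_ivp(seir, (0, 1500), [N - 10, 0, 10, 0],  # 1500-day horizon
                t_eval=np.linspace(0, 1500, 1501))
print(f"peak infectious: {sol.y[2].max():,.0f} on day {sol.y[2].argmax()}")
```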

21 pages, 1681 KB  
Article
Scalable Clustering of Complex ECG Health Data: Big Data Clustering Analysis with UMAP and HDBSCAN
by Vladislav Kaverinskiy, Illya Chaikovsky, Anton Mnevets, Tatiana Ryzhenko, Mykhailo Bocharov and Kyrylo Malakhov
Computation 2025, 13(6), 144; https://doi.org/10.3390/computation13060144 - 10 Jun 2025
Cited by 2 | Viewed by 3248
Abstract
This study explores the potential of unsupervised machine learning algorithms to identify latent cardiac risk profiles by analyzing ECG-derived parameters from two general groups: clinically healthy individuals (Norm dataset, n = 14,863) and patients hospitalized with heart failure (patients’ dataset, n = 8220). Each dataset includes 153 ECG and heart rate variability (HRV) features, including both conventional and novel diagnostic parameters obtained using a Universal Scoring System. The study aims to apply unsupervised clustering algorithms to ECG data to detect latent risk profiles related to heart failure, based on distinctive ECG features. The focus is on identifying patterns that correlate with cardiac health risks, potentially aiding in early detection and personalized care. We applied a combination of Uniform Manifold Approximation and Projection (UMAP) for dimensionality reduction and Hierarchical Density-Based Spatial Clustering (HDBSCAN) for unsupervised clustering. Models trained on one dataset were applied to the other to explore structural differences and detect latent predispositions to cardiac disorders. Both Euclidean and Manhattan distance metrics were evaluated. Features such as the QRS angle in the frontal plane, Detrended Fluctuation Analysis (DFA), High-Frequency power (HF), and others were analyzed for their ability to distinguish different patient clusters. In the Norm dataset, Euclidean distance clustering identified two main clusters, with Cluster 0 indicating a lower risk of heart failure. Key discriminative features included the “ALPHA QRS ANGLE IN THE FRONTAL PLANE” and DFA. In the patients’ dataset, three clusters emerged, with Cluster 1 identified as potentially high-risk. Manhattan distance clustering provided additional insights, highlighting features like “ST DISLOCATION” and “T AMP NORMALIZED” as significant for distinguishing between clusters. The analysis revealed distinct clusters that correspond to varying levels of heart failure risk. In the Norm dataset, two main clusters were identified, with one associated with a lower risk profile. In the patients’ dataset, a three-cluster structure emerged, with one subgroup displaying markedly elevated risk indicators such as high-frequency power (HF) and altered QRS angle values. Cross-dataset clustering confirmed consistent feature shifts between groups. These findings demonstrate the feasibility of ECG-based unsupervised clustering for early risk stratification. The results offer a non-invasive tool for personalized cardiac monitoring and merit further clinical validation. These findings emphasize the potential for clustering techniques to contribute to early heart failure detection and personalized monitoring. Future research should aim to validate these results in other populations and integrate these methods into clinical decision-making frameworks.
(This article belongs to the Special Issue Artificial Intelligence Applications in Public Health: 2nd Edition)
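
A minimal version of the UMAP-plus-HDBSCAN pipeline looks like the sketch below, assuming the umap-learn and hdbscan packages; random data stands in for the 153 ECG/HRV features, and the parameter choices are illustrative.

```python
# Hedged sketch: UMAP embedding followed by HDBSCAN clustering on stand-in data.
import numpy as np
import umap      # umap-learn package
import hdbscan

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 153))   # placeholder for the 153 ECG/HRV features

# Reduce dimensionality first; the paper also evaluated metric="manhattan".
embedding = umap.UMAP(n_components=5, metric="euclidean",
                      random_state=0).fit_transform(X)

labels = hdbscan.HDBSCAN(min_cluster_size=50).fit_predict(embedding)
print("clusters:", sorted(set(labels) - {-1}),
      "| noise points:", int((labels == -1).sum()))
```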

20 pages, 21534 KB  
Article
Smoothing Techniques for Improving COVID-19 Time Series Forecasting Across Countries
by Uliana Zbezhkhovska and Dmytro Chumachenko
Computation 2025, 13(6), 136; https://doi.org/10.3390/computation13060136 - 3 Jun 2025
Cited by 2 | Viewed by 2456
Abstract
Accurate forecasting of COVID-19 case numbers is critical for timely and effective public health interventions. However, the irregular and noisy nature of epidemiological data often undermines predictive performance. This study examines the influence of four smoothing techniques—the rolling mean, the exponentially weighted moving average, a Kalman filter, and seasonal–trend decomposition using Loess (STL)—on the forecasting accuracy of four models: LSTM, the Temporal Fusion Transformer (TFT), XGBoost, and LightGBM. Weekly case data from Ukraine, Bulgaria, Slovenia, and Greece were used to assess the models’ performance over short- (3-month) and medium-term (6-month) horizons. The results demonstrate that smoothing enhanced the models’ stability, particularly for neural architectures, and model selection emerged as the primary driver of predictive accuracy. The LSTM and TFT models, when paired with STL or the rolling mean, outperformed the others in short-term forecasts, while XGBoost exhibited greater robustness over longer horizons in selected countries. An ANOVA confirmed the statistically significant influence of the model type on the MAPE (p = 0.008), whereas the smoothing method alone showed no significant effect. These findings offer practical guidance for designing context-specific forecasting pipelines adapted to epidemic dynamics and variations in data quality.
(This article belongs to the Special Issue Artificial Intelligence Applications in Public Health: 2nd Edition)
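
The four smoothing techniques can be prototyped in a few lines with pandas and statsmodels, as in the sketch below; the synthetic weekly series, the window sizes, and the choice of a local-level state-space model for the Kalman filter are assumptions rather than the paper's exact configuration.

```python
# Hedged sketch of the four smoothers on a synthetic weekly case series.
import numpy as np
import pandas as pd
from statsmodels.tsa.seasonal import STL
from statsmodels.tsa.statespace.structural import UnobservedComponents

rng = np.random.default_rng(0)
weeks = pd.date_range("2020-03-01", periods=120, freq="W")
cases = pd.Series(np.abs(1000 * np.sin(np.arange(120) / 10)
                         + rng.normal(0, 150, 120)), index=weeks)

rolling = cases.rolling(window=4, center=True).mean()   # rolling mean
ewma = cases.ewm(span=4).mean()                         # exp. weighted MA
stl_trend = STL(cases, period=52).fit().trend           # STL trend component
# A local-level state-space model gives a simple Kalman-filter smoother.
kalman = UnobservedComponents(cases, level="local level").fit(disp=False)
kalman_smooth = pd.Series(kalman.smoothed_state[0], index=weeks)

print(pd.DataFrame({"raw": cases, "rolling": rolling, "ewma": ewma,
                    "stl": stl_trend, "kalman": kalman_smooth}).head())
```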

15 pages, 1375 KB  
Article
How Re-Infections and Newborns Can Impact Visible and Hidden Epidemic Dynamics?
by Igor Nesteruk
Computation 2025, 13(5), 113; https://doi.org/10.3390/computation13050113 - 9 May 2025
Viewed by 612
Abstract
Mathematical modeling allows registered and hidden infections to be taken into account, enabling correct predictions of epidemic dynamics and recommendations that can reduce the negative impact on public health and the economy. A model for visible and hidden epidemic dynamics (published by the author in February 2025) has been generalized to account for the effects of re-infection and newborns. An analysis of the equilibrium points, examples of numerical solutions, and comparisons with the dynamics of real epidemics are provided. A stable quasi-equilibrium for the particular case of almost completely hidden epidemics was also revealed. Numerical results and comparisons with the COVID-19 epidemic dynamics in Austria and South Korea showed that re-infections, newborns, and hidden cases make epidemics endless. Newborns can cause repeated epidemic waves even without re-infections. In particular, the next epidemic peak of pertussis in England is expected to occur in 2031. With the use of effective algorithms for parameter identification, the proposed approach can ensure effective predictions of visible and hidden numbers of cases and infectious and removed patients.
(This article belongs to the Special Issue Artificial Intelligence Applications in Public Health: 2nd Edition)
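
The qualitative effect of newborns and re-infections can be reproduced with a textbook SIR model extended with births and waning immunity, as sketched below; the rates are assumed values, and the paper's visible/hidden compartment split is omitted.

```python
# Hedged sketch: SIR with births (newborns) and waning immunity (re-infection).
import numpy as np
from scipy.integrate import solve_ivp

N = 1_000_000
beta, gamma = 0.3, 0.1        # transmission / recovery rates (assumed)
mu = 1 / (75 * 365)           # birth rate = death rate, per day (assumed)
omega = 1 / 365               # immunity waning rate (assumed)

def sirs(t, y):
    S, I, R = y
    dS = mu * N + omega * R - beta * S * I / N - mu * S  # newborns enter S
    dI = beta * S * I / N - gamma * I - mu * I
    dR = gamma * I - omega * R - mu * R
    return [dS, dI, dR]

sol = solve_ivp(sirs, (0, 20 * 365), [N - 100, 100, 0])
# Births and waning immunity drive the system to an endemic equilibrium,
# echoing the "epidemics become endless" conclusion above.
print(f"endemic infectious fraction ~ {sol.y[1, -1] / N:.4f}")
```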

17 pages, 1513 KB  
Article
Cascade-Based Input-Doubling Classifier for Predicting Survival in Allogeneic Bone Marrow Transplants: Small Data Case
by Ivan Izonin, Roman Tkachenko, Nazarii Hovdysh, Oleh Berezsky, Kyrylo Yemets and Ivan Tsmots
Computation 2025, 13(4), 80; https://doi.org/10.3390/computation13040080 - 21 Mar 2025
Cited by 1 | Viewed by 889
Abstract
In the field of transplantology, where medical decisions are heavily dependent on complex data analysis, the challenge of small data has become increasingly prominent. Transplantology, which focuses on the transplantation of organs and tissues, requires exceptional accuracy and precision in predicting outcomes, assessing risks, and tailoring treatment plans. However, the inherent limitations of small datasets present significant obstacles. This paper introduces an advanced input-doubling classifier designed to improve survival predictions for allogeneic bone marrow transplants. The approach utilizes two artificial intelligence tools: the first, a Probabilistic Neural Network, generates output signals that expand the independent attributes of an augmented dataset, while the second, a machine learning algorithm, performs the final classification. This method, based on the cascading principle, facilitates the development of novel algorithms for preparing and applying the enhanced input-doubling technique to classification tasks. The proposed method was tested on a small dataset within transplantology, focusing on binary classification. Optimal parameters for the method were identified using the Dual Annealing algorithm. Comparative analysis of the improved method against several existing approaches revealed a substantial improvement in accuracy across various performance metrics, underscoring its practical benefits.
(This article belongs to the Special Issue Artificial Intelligence Applications in Public Health: 2nd Edition)
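
The cascade idea can be sketched as follows, assuming scikit-learn: a first-stage probabilistic model appends its class-probability outputs to the original features ("input doubling"), and a second-stage classifier makes the final prediction. A per-class Gaussian kernel-density model stands in here for the paper's Probabilistic Neural Network, and the synthetic data and Random Forest second stage are illustrative choices, not the authors' configuration.

```python
# Hedged sketch of the cascade: stage-one class probabilities are appended
# to the inputs, and a second classifier makes the final call.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KernelDensity

X, y = make_classification(n_samples=180, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

def stage_one_probs(X_fit, y_fit, X_eval, bandwidth=1.0):
    # PNN-like stage: class-conditional densities -> normalized scores.
    log_dens = np.column_stack(
        [KernelDensity(bandwidth=bandwidth).fit(X_fit[y_fit == c])
         .score_samples(X_eval) for c in np.unique(y_fit)])
    dens = np.exp(log_dens - log_dens.max(axis=1, keepdims=True))
    return dens / dens.sum(axis=1, keepdims=True)

# "Input doubling": augment the feature space with stage-one outputs.
X_tr_aug = np.hstack([X_tr, stage_one_probs(X_tr, y_tr, X_tr)])
X_te_aug = np.hstack([X_te, stage_one_probs(X_tr, y_tr, X_te)])

clf = RandomForestClassifier(random_state=0).fit(X_tr_aug, y_tr)
print("cascade accuracy:", accuracy_score(y_te, clf.predict(X_te_aug)))
```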

23 pages, 466 KB  
Article
COVID-19 Data Analysis: The Impact of Missing Data Imputation on Supervised Learning Model Performance
by Jorge Daniel Mello-Román and Adrián Martínez-Amarilla
Computation 2025, 13(3), 70; https://doi.org/10.3390/computation13030070 - 8 Mar 2025
Cited by 1 | Viewed by 4012
Abstract
The global COVID-19 pandemic has generated extensive datasets, providing opportunities to apply machine learning for diagnostic purposes. This study evaluates the performance of five supervised learning models—Random Forests (RFs), Artificial Neural Networks (ANNs), Support Vector Machines (SVMs), Logistic Regression (LR), and Decision Trees (DTs)—on a hospital-based dataset from the Concepción Department in Paraguay. To address missing data, four imputation methods (Predictive Mean Matching via MICE, RF-based imputation, K-Nearest Neighbor, and XGBoost-based imputation) were tested. Model performance was compared using metrics such as accuracy, AUC, F1-score, and MCC across five levels of missingness. Overall, RF consistently achieved high accuracy and AUC at the highest missingness level, underscoring its robustness. In contrast, SVM often exhibited a trade-off between specificity and sensitivity. ANN and DT showed moderate resilience, yet were more prone to performance shifts under certain imputation approaches. These findings highlight RF’s adaptability to different imputation strategies, as well as the importance of selecting methods that minimize sensitivity–specificity trade-offs. By comparing multiple imputation techniques and supervised models, this study provides practical insights for handling missing medical data in resource-constrained settings and underscores the value of robust ensemble methods for reliable COVID-19 diagnostics.
(This article belongs to the Special Issue Artificial Intelligence Applications in Public Health: 2nd Edition)
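
A comparison of imputation strategies ahead of supervised learning can be set up as in the sketch below, assuming scikit-learn; IterativeImputer approximates MICE-style imputation, the RF-based variant mirrors the paper's second method, and the XGBoost-based imputer is omitted.

```python
# Hedged sketch: impute, then classify, comparing imputers by downstream AUC.
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor
from sklearn.impute import IterativeImputer, KNNImputer
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

X, y = make_classification(n_samples=500, n_features=15, random_state=0)
rng = np.random.default_rng(0)
X_missing = X.copy()
X_missing[rng.random(X.shape) < 0.2] = np.nan   # inject 20% missingness

imputers = {
    "iterative (MICE-like)": IterativeImputer(random_state=0),
    "RF-based": IterativeImputer(
        estimator=RandomForestRegressor(n_estimators=30, random_state=0),
        max_iter=5, random_state=0),
    "kNN": KNNImputer(n_neighbors=5),
}
for name, imputer in imputers.items():
    # Imputation inside the pipeline avoids leaking test data into the fit.
    pipe = make_pipeline(imputer, RandomForestClassifier(random_state=0))
    auc = cross_val_score(pipe, X_missing, y, cv=5, scoring="roc_auc").mean()
    print(f"{name:22s} AUC={auc:.3f}")
```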

18 pages, 2813 KB  
Article
Multimodal Data Fusion for Depression Detection Approach
by Mariia Nykoniuk, Oleh Basystiuk, Nataliya Shakhovska and Nataliia Melnykova
Computation 2025, 13(1), 9; https://doi.org/10.3390/computation13010009 - 2 Jan 2025
Cited by 15 | Viewed by 9013
Abstract
Depression is one of the most common mental health disorders in the world, affecting millions of people. Early detection of depression is crucial for effective medical intervention. Multimodal networks can greatly assist in the detection of depression, especially in situations where patients are not always aware of, or able to express, their symptoms. By analyzing text and audio data, such networks are able to automatically identify patterns in speech and behavior that indicate a depressive state. In this study, we propose two multimodal information fusion networks: early and late fusion. These networks were developed using convolutional neural network (CNN) layers to learn local patterns, a bidirectional LSTM (Bi-LSTM) to process sequences, and a self-attention mechanism to improve focus on key parts of the data. The DAIC-WOZ and EDAIC-WOZ datasets were used for the experiments. The experiments compared precision, recall, F1-score, and accuracy for early and late multimodal data fusion and found that the early-fusion multimodal network achieved higher classification accuracy. On the test dataset, this network achieved an F1-score of 0.79 and an overall classification accuracy of 0.86, indicating its effectiveness in detecting depression.
(This article belongs to the Special Issue Artificial Intelligence Applications in Public Health: 2nd Edition)
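
An early-fusion architecture of the kind described can be sketched in PyTorch as below: text and audio feature sequences are concatenated first, then passed through CNN, Bi-LSTM, and self-attention blocks. All dimensions are illustrative assumptions, not the paper's settings.

```python
# Hedged sketch of early multimodal fusion: concatenate modalities, then
# CNN -> Bi-LSTM -> self-attention -> pooled classification logit.
import torch
import torch.nn as nn

class EarlyFusionNet(nn.Module):
    def __init__(self, text_dim=300, audio_dim=80, hidden=128):
        super().__init__()
        fused = text_dim + audio_dim
        # 1D convolution over time learns local patterns.
        self.conv = nn.Conv1d(fused, hidden, kernel_size=3, padding=1)
        # Bi-LSTM models the sequence in both directions.
        self.lstm = nn.LSTM(hidden, hidden, batch_first=True,
                            bidirectional=True)
        # Self-attention focuses on the most informative time steps.
        self.attn = nn.MultiheadAttention(2 * hidden, num_heads=4,
                                          batch_first=True)
        self.head = nn.Linear(2 * hidden, 1)  # depressed vs. not

    def forward(self, text_seq, audio_seq):
        x = torch.cat([text_seq, audio_seq], dim=-1)      # early fusion
        x = torch.relu(self.conv(x.transpose(1, 2))).transpose(1, 2)
        x, _ = self.lstm(x)
        x, _ = self.attn(x, x, x)
        return self.head(x.mean(dim=1))                   # pooled logit

model = EarlyFusionNet()
logit = model(torch.randn(4, 50, 300), torch.randn(4, 50, 80))
print(logit.shape)  # torch.Size([4, 1])
```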
