1. Introduction
Infectious disease modeling dates back to the early 20th century, when the first SIR (Susceptible-Infected-Recovered) model was proposed [1]. Over the years, infectious disease modeling has become increasingly sophisticated, allowing researchers to create more realistic and data-driven models. The onset of the SARS-CoV-2 pandemic (COVID-19) in 2020 caused a surge in both the development and use of infectious disease models, driven by the desire to answer pressing questions from researchers and policymakers [2]. The models developed during this time were created by modelers across disciplines with different objectives, resulting in the emergence of a variety of epidemiological models, each with its own strengths, weaknesses, and specifications. While this variety advanced the field of epidemiological modeling through the creation of novel approaches, frameworks, and data ingestion pipelines, it also highlighted the importance of vetting model quality and using models appropriately. As researchers and policymakers turned to these newly developed models to understand the health risks posed by SARS-CoV-2, not all models were used appropriately with full consideration of their intended purpose or capabilities [2]. As a result, the use of modeling to inform highly consequential policies—such as travel restrictions, testing protocols, vaccine rollouts, and public closures—yielded mixed results [3].
Without guidance to indicate which modeling tool is most appropriate for an organization’s needs, model selection may be limited to factors such as access or institutional affiliation rather than the model’s intended use, capabilities, or appropriateness. During the SARS-CoV-2 pandemic, models developed by well-known institutions were often used when advising policy, even when, based on their intended use, they may not have been the most appropriate tool [4]. In some cases, this led to decision making informed by general models that failed to account for specific intricacies related to a state or region [5,6]. The reliance of policymakers on models that were not ideally suited to answer their questions produced conflicting results, decreasing trust in public health organizations [4,7].
Previous efforts have sufficiently reviewed the technical aspects of modeling tools based on specific capabilities (e.g., data visualization) [8] or their structure (agent-based, compartmental, stochastic, etc.) [9,10,11,12] without addressing the broader range of characteristics that might influence model selection or providing guidance regarding the selection of a model [13]. Here, we provide a “how-to” framework to be used by researchers and policymakers when selecting epidemiological models for future outbreaks. We then implement the decision-making guidance to assess 43 existing models across key criteria that should be considered when selecting an infectious disease model. Using this framework and our initial assessment, we hope to enable researchers and policymakers to make informed and appropriate model selections based on the user’s needs and each model’s capabilities.
2. Methods
2.1. Development of the Decision Framework
The decision framework described for selecting an appropriate epidemiological model is adapted from the engineering design process [14]. The first steps of the process entail the identification of the specific problem at hand and the formation of criteria and constraints. Criteria included in Table 1 were based on the authors’ collective prior experience selecting, building, and using epidemiological models. A rational decision-making process is then suggested to evaluate models against the established criteria to minimize bias. The results of the evaluation should then be used to identify one or several models for further testing or use.
2.2. Identification of Models for Assessment
Infectious disease modeling tools were identified using a search of the academic literature and conference abstracts, reference mining of review papers, additional web searches, and a review of COVID-19 models catalogued by the Centers for Disease Control and Prevention’s (CDC) Center for Forecasting and Outbreak Analytics (CFA) [15] and Epidemic Prediction Initiative (EPI) [16].
Literature searches were conducted using the Google Scholar and PubMed databases with search terms such as “modeling,” “predictive,” or “forecast,” combined with terms like “epidemiology” or “disease.” To capture any models that were in use during the COVID-19 pandemic that were not identified via our literature review, we turned to the CFA and EPI catalogs of infectious disease models and additional web searches. Full texts were retrieved for those models that appeared to meet our inclusion criteria (see Supplementary Table S1). The references of the applicable literature were mined to find additional relevant studies.
We limited our assessment to models published in the academic literature, conference proceedings, or described elsewhere between 2015 and 2023. The set was further constrained to include only those tools which modeled multiple agents, could be readily adapted to other agents, or allowed user-defined disease parameters; allowed for variation in spatial scale; could incorporate pharmaceutical and non-pharmaceutical interventions and behavioral modifications; or could support different outputs and visualizations. With the exception of EpiGrid [17], all models identified during the literature and publication review that met the initial criteria were included in the evaluation.
2.3. Evaluation of Models
The models included in our evaluation were assigned an ordinal ranking for each of the characteristics listed in Table 2. Models with more features or a more robust suite of choices scored high in the modeling flexibility categories; models that contained only some of the described functionality or offered reduced choices to the user were ranked as medium; and models that did not contain the described functionality or presented limited to no choice to the user were ranked as low. The visualization and mitigation criteria were similarly ranked, with greater functionality and user choice achieving a high ranking, some functionality or a reduction in user choice receiving a medium ranking, and lack of functionality receiving a low ranking.
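As a rough illustration, the high/medium/low assignment described above can be expressed as a simple scoring rule. The feature counts and the 75% threshold below are hypothetical choices made for this sketch, not values taken from our evaluation.

```python
# Illustrative sketch of the ordinal ranking described above.
# The 0.75 threshold and the feature counts are hypothetical,
# chosen only to demonstrate the high/medium/low scale.

def rank_criterion(features_present: int, features_total: int) -> str:
    """Map the share of described functionality a model offers to an
    ordinal high/medium/low ranking."""
    if features_total <= 0 or features_present <= 0:
        return "low"
    fraction = features_present / features_total
    if fraction >= 0.75:
        return "high"
    return "medium"

# Example: a model offering 3 of 4 visualization features ranks high,
# while one offering 1 of 4 ranks medium and one offering none ranks low.
print(rank_criterion(3, 4), rank_criterion(1, 4), rank_criterion(0, 4))
```

Any monotone mapping from functionality to the three ranks would serve equally well; the point is only that the rule is fixed in advance so every model is scored the same way.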
3. Creation of a Decision Framework
In our framework, the first step in the selection of an appropriate epidemiological model is the creation of criteria specific to the use case. The criteria can include a wide array of factors, ranging from model accuracy under particular circumstances to the level of expertise required to operate the software.
Table 1 illustrates example criteria that individuals might consider when developing a decision framework. It is neither exhaustive nor prescriptive. In practice, a small set of carefully chosen criteria is often sufficient to support a well-informed decision.
The process of selecting and prioritizing criteria is critical for identifying the best available options based on a user’s needs. While we believe an explicit ranking is often unnecessary, distinguishing between essential criteria and “nice-to-haves” can significantly streamline decision making. Clarifying which constraints are non-negotiable can help accelerate the process by establishing firm rule-out conditions that quickly eliminate unsuitable options.
Once a decision framework has been established, specific models can be evaluated against the prioritized criteria. This begins with identifying clear metrics for each criterion, ideally using objective or quantifiable measures. Objective metrics—such as the presence or absence of features—can often be assessed with simple yes/no questions. Subjective measures, such as the level of expertise required for operation or interface usability, may also be included, but care should be taken not to overweight these factors in the final decision-making process.
After evaluating all models against the pre-selected criteria and eliminating those that fail to meet key requirements, the user can choose one or more models that either excel in a specific area or demonstrate high performance across multiple high-priority criteria. When evaluating models, it is important to remember that each model should be evaluated according to the criteria and metrics for the defined use case, and different models will perform better for different applications. If modeling objectives change, criteria must be reselected, and models must be reevaluated according to the newly established metrics to make the optimal choice.
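The elimination-then-selection step above can be sketched as a small filter-and-rank routine. All model names, criteria, and weights here are invented placeholders, not results from our evaluation.

```python
# Hypothetical sketch of the selection step: rule out models missing any
# essential criterion, then order the survivors by a priority-weighted
# score. Every name, criterion, and weight below is an invented example.
SCORE = {"low": 0, "medium": 1, "high": 2}

models = {
    "ModelA": {"multi_agent": True,  "visuals": "high",   "mitigations": "medium"},
    "ModelB": {"multi_agent": False, "visuals": "high",   "mitigations": "high"},
    "ModelC": {"multi_agent": True,  "visuals": "medium", "mitigations": "high"},
}
essential = ["multi_agent"]                     # non-negotiable rule-out conditions
weights = {"visuals": 1.0, "mitigations": 2.0}  # higher weight = higher priority

def shortlist(models, essential, weights):
    """Drop models failing any essential criterion, then rank the rest
    by the weighted sum of their ordinal scores."""
    eligible = {name: feats for name, feats in models.items()
                if all(feats[c] for c in essential)}
    return sorted(eligible,
                  key=lambda name: sum(w * SCORE[eligible[name][c]]
                                       for c, w in weights.items()),
                  reverse=True)

print(shortlist(models, essential, weights))  # ModelB is ruled out entirely
```

Under these invented weights, the highest-scoring survivor leads the shortlist; changing the weights to reflect a different use case reorders the result without re-scoring any model.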
Notably, our decision-making framework omits the assessment of model accuracy, precision, and efficacy to instead focus exclusively on the suitability of the model. Once users have identified an appropriate model for their specific circumstances, further technical assessment of the model is warranted to ensure the model is scientifically valid—this assessment is beyond the scope of this paper.
4. Demonstration of the Decision Framework for Selected Models
To demonstrate how a model selection framework can be used, we selected criteria for evaluating modeling tools that would be broadly applicable for forward-looking epidemiological modeling exercises intended to inform policy, such as those used during the COVID-19 pandemic. The EpiGrid model [17] has been widely used at the U.S. federal level for this purpose [18,19,20]; therefore, we sought to identify complementary solutions that could enhance current capabilities or address potential gaps. To this end, we selected three main categories upon which to base our assessment—flexibility, visualizations, and mitigations.
The flexibility category was selected because forward-looking, preemptive modeling tasks with large uncertainties about transmission dynamics require the versatility to handle a variety of emerging infectious diseases that may arise. Models capable of simulating multiple agents across a variety of geographic regions and time scales are well suited to these modeling tasks. The second category, visualizations, was selected because the creation of clear, interpretable figures that convey meaning to both technical and non-technical personnel is critical to the translation of research to policy. While there are several methods for developing visualizations—some integrated into the modeling software, others external—integrated tools offer ease of use, reduce troubleshooting, and minimize the risk of misinterpretation. Finally, the inclusion of mitigations was selected as the third category for this example framework. Mitigations span a broad set of pharmaceutical and non-pharmaceutical countermeasures, and the impact of these interventions is of primary interest to policymakers taking action to reduce caseloads, minimize disease burden, and stop disease spread [21].
Though these three criteria were selected based on our specific scenario’s needs, other categories may be more useful under different circumstances. For example, researchers interested in intra-country spread may demand high geographic resolution models with subpopulations [22,23,24,25,26,27,28,29,30,31,32], while influenza vaccine developers may require disease-specific, long-term seasonal models for market size and revenue forecasts [27,29,31,33,34,35,36,37,38].
The models that were evaluated were identified through literature searches, reference mining of reviews, or less formal search strategies. In our analysis, we included 43 models published in the academic literature, conference proceedings, or described elsewhere between 2015 and 2023. Supplementary Table S1 outlines the basic information on each model (including the agents that can be modeled, the modeling type, and notes for the evaluation criteria). Given the needs of our modeling scenario, we only included tools capable of modeling multiple agents, allowing for variation in spatial scale, incorporating mitigations, and supporting visualization—lack of any of these essential characteristics was treated as an automatic disqualifier. Additional information on model selection is available in the Methods section.
Table 2 outlines the full set of model features and the specific evaluation metrics used to assess each model.
4.1. Flexibility
For the purposes of our evaluation framework, flexibility within any model is the ability to adapt to different needs specified by the individual user. Models were scored within each criterion, independent of other capabilities the model possessed. Models that scored highly in categories associated with flexibility represent those that have significant depth or breadth of use compared to others.
4.1.1. Agent/Pathogen Selection
Flexibility in agent/pathogen selection allows a single model to be applied in a range of scenarios. Several of the models we evaluated have the capability to model any agent or pathogen, while others included only respiratory agents or vector-borne agents.
The evaluated models with the greatest flexibility for agent/pathogen selection, along with the agents they are capable of modeling, are listed in Table 3. Most models that excel in this category have the capacity to model any disease or agent. These models achieve maximum agent flexibility by allowing the user to specify the statistical parameters that the model uses to define a disease. While these models often have pre-programmed parameters for specific pathogens, user-derived specifications are useful in the application of modeling tools for previously uncharacterized diseases and strains. The ability to model both well-characterized and novel pathogens is particularly valuable in assessing tools used to prepare for future outbreaks.
4.1.2. User-Defined Parameters
Models accept a range of user-specified data, with more focused models often including hardcoded disease parameters while more flexible models allow parameters to be modified by the user. Compartmental models and models with underlying compartmental frameworks often allow users to specify rates of movement between compartments, with rates for moving between susceptible, exposed, infected, and recovered compartments each adjusted separately. Models that require user-specified parameters demand more research prior to modeling, whereas models with built-in parameters can be quickly implemented so long as the parameters have been sufficiently justified by the model developers (Supplementary Table S1). While predefined models require less input initially, they are only useful for the specific circumstances for which they were designed. In this sample evaluation, we highly rated models with the capability to modify and tune model parameters easily.
Table 4 lists five of the models we evaluated that scored highly in the user-defined parameters category.
One additional consideration related to user-defined parameters is the availability and accessibility of source code. Open-source models are the most flexible, as users can not only change parameters, but also change functions and relationships that may be fixed in many other models. Though these models offer enormous potential for customization, they can require extensive subject matter expertise and coding experience to effectively manipulate.
4.1.3. Geographic Capabilities
The infection rates and natural history of disease may vary according to geographical features or due to differences between populations living in specific regions. Factors such as rainfall, temperature, and proximity to livestock can all change the spread of disease between different populations. A model with the capabilities to account for these differences represents a useful tool when investigating disease spread across various geographical areas.
Models that can adapt to any spatial scale lend themselves to research questions that compare the impacts of large-population disease spread on smaller communities within the population, although expanding geographical range typically reduces granularity and can increase both computational load and uncertainty. Likewise, modeling efforts that compare infection transmission and outbreaks in different populations defined by variations in area-level characteristics offer additional insights into the explanatory factors associated with disease transmission. Three of the models we evaluated that were particularly adept at being adapted to a wide range of geographic locations can be found in Table 5.
4.2. Visualization Capabilities
A model’s visualization capabilities are an integral component of data interpretation and allow the user to better understand and present the modeling results. Given the relative abundance of external visualization tools, the majority of models investigated produce data as numerical values in CSV or similar files. These computer-readable files are ideal inputs for secondary visualization software but may require additional expertise and time to develop into full visualizations. Furthermore, models that directly display data within their interface have the additional benefit of allowing users to understand their results within a controlled platform. Researchers who want to rapidly adjust parameters and visualize changes will likely find success with models that display data directly within their interface. In addition to simply having the ability to generate figures, the types of visualizations that the model can output should be considered when evaluating the model’s ability to convey results: geographical displays of data such as maps answer different questions than charts that align cases with resource availability. Of the models we analyzed, only the two models described in Table 6 demonstrated strong capabilities in generating useful visualizations.
4.3. Mitigations
When modeling the spread of infectious disease, it is important to consider both pharmaceutical and non-pharmaceutical interventions [44,45]. Pharmaceutical interventions, such as vaccines, antibiotics, and antivirals, can directly impact disease transmission and severity by modifying the course of disease in individuals [46]. Non-pharmaceutical interventions, such as social distancing, mask wearing, and hand hygiene, also play a role in slowing the spread of disease, exerting their effects by acting on disease transmission between individuals [21]. Incorporating these strategies into a model can provide more accurate and realistic projections of an outbreak’s progression and allow for more informed and accurate decision making. Several of the models we evaluated ranked highly when considering the implementation of pharmaceutical and non-pharmaceutical mitigations. Table 7 provides an overview of these tools.
5. Discussion
Modeling can be used for a variety of purposes, including driving policy decisions, assessing preventative measures and mitigations, forecasting logistical and supply needs, and estimating expected illnesses and deaths from an outbreak. As evidenced during the COVID-19 pandemic, shortcomings in the application of models undermined public confidence in the reliability and usefulness of infectious disease modeling, leading to mistrust in public health organizations [4]. Given the gap in guidance on the fundamental problem of epidemiological model selection, we described how a structured, customizable framework can be implemented to select an appropriate model based on the research or decision-making objective. To illustrate the value of a structured framework for selecting an appropriate infectious disease model, we applied the framework to evaluate 43 models for their suitability in forward-looking epidemiological modeling. Each model’s strengths and limitations were assessed, and our results show that only a small subset of models emerged as top candidates for modeling the scenario we analyzed, helping to narrow the field for model selection. The full analysis is provided (Supplementary Table S1) to help researchers and end users build on this work—identifying additional requirements and constraints specific to their use case to select the most appropriate model.
The goal of this framework is not to offer a prescriptive solution for the selection of a single ideal model, but rather to push modelers to weigh contrasting needs and priorities and introduce objectivity when evaluating options. In our demonstrated use case, we identified several models that ranked highly across multiple categories—for example, EMOD and STEM, and, to a lesser extent, DICE, GLEAMViz, and IHME. While this approach is not intended to identify the single best model, it is useful in reducing the number of models under consideration, in this case narrowing model options from 43 to 5. By carefully constructing evaluation criteria and metrics, model users can avoid misapplying models—for instance, by using incorrect time scales or incorrectly sized populations, or by attempting to model diseases beyond the model’s capabilities. While this paper focuses on the importance of using models only within their intended scope, it is important to recognize that even appropriately selected models can produce inaccurate results if the underlying parameters are not scientifically valid or are based on an incomplete understanding of a disease. While evaluation of a model’s accuracy is beyond the scope of this paper, a systematic assessment of model performance should be conducted prior to the implementation of a model.
Even well-suited models may have limitations that prevent them from being effectively implemented. One major consideration is data availability and reliability, including both the distribution and quality of the data. Data constraints include the need for frequent updates to case counts, mobility patterns, and healthcare capacity, as well as limitations in standardized case definitions, timely data sharing, and the availability of granular data in resource-limited settings. The timeliness and reliability of this information are essential for accurate modeling but are often hindered by reliance on surveillance data.
In cases where data are limited, it is imperative that uncertainty is both understood and explicitly conveyed with results. For policymakers, this uncertainty can be examined by asking practical questions: How confident can we be in the projected case numbers? How likely is it that observed trends reflect real transmission dynamics versus artifacts of incomplete reporting? For example, during the COVID-19 pandemic, inconsistencies in surveillance and underreporting in some countries led to biased inputs, which in turn produced forecasts that underestimated transmission intensity and delayed recognition of surge risks.
Predicting future events is inherently uncertain, and when input data are sparse or biased, ensuring that imprecision is clearly reflected in the outputs is crucial to translation to policy [48]. Forecasts may therefore need to present confidence intervals or scenario ranges rather than single point estimates, allowing decision makers to prepare for a spectrum of possible outcomes rather than assuming one “correct” trajectory. Other strategies to address uncertainty include the concurrent use of multiple models (or sets of modeling parameters) to generate ensembles, providing a better understanding of likely trends versus model artifacts. During COVID-19, ensemble approaches produced more accurate forecasts of deaths than any individual model included in the ensemble [49]. The integration of multiple modeling strategies is beneficial because no single model can fully capture the complexity of an emerging outbreak, especially when data are sparse or evolving. Individual models may be overly sensitive to specific assumptions, such as transmission rates, intervention effectiveness, or reporting delays. By combining outputs across diverse models, ensembles tend to “average out” extreme predictions, reduce the influence of model-specific biases, and provide more stable and reliable forecasts. By utilizing ensembles or framing uncertainty in terms of risks and ranges, rather than exact numbers, officials are better positioned to weigh policy options under imperfect information.
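As a toy illustration of the ensemble idea (the forecast values are invented, not drawn from any real model), averaging point forecasts horizon by horizon damps the influence of any single extreme model:

```python
# Toy sketch of ensemble averaging: combining forecasts across models
# damps model-specific extremes. All forecast values are invented.

def ensemble_mean(forecasts):
    """Average point forecasts across models, horizon by horizon."""
    n = len(forecasts)
    return [sum(step) / n for step in zip(*forecasts.values())]

# Three hypothetical 3-week-ahead case-count forecasts
forecasts = {
    "model_a": [120.0, 150.0, 200.0],
    "model_b": [100.0, 110.0, 130.0],
    "model_c": [260.0, 400.0, 600.0],  # a model producing extreme values
}
print(ensemble_mean(forecasts))  # [160.0, 220.0, 310.0]
```

Operational ensembles such as those used for COVID-19 death forecasts are more sophisticated (weighted or quantile-based combinations), but the stabilizing effect of pooling across models is the same in spirit.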
6. Conclusions
As we collectively move forward from the COVID-19 pandemic and prepare for future infectious disease outbreaks, it is important to re-evaluate public health responses and improve them where necessary. The use of epidemiological models to forecast caseloads, convey risk, and shape policy is a highly consequential area where public health institutions can improve their practices and communication to increase trust with their constituents. To facilitate the process of identifying appropriate models and selecting the best tool for the desired goals, we outline a structured framework for model selection and advocate for subsequent technical evaluation of model options to ensure the tool both has the necessary capabilities and aids users in translating results into action. Using the framework outlined above, researchers and policymakers can make better-informed decisions about epidemiological modeling to support planning and response initiatives.
Author Contributions
A.W., A.O. and K.S.-W. conceived the study. J.C., A.C., S.C. and J.H. (John Hurst) conducted the research and model evaluation. J.C., P.T., A.C. and M.H. wrote the manuscript. J.C., P.T., R.W., J.H. (John Hunzeker), K.S.-W., A.O. and A.W. participated in a methodological review and editing. All authors have read and agreed to the published version of the manuscript.
Funding
This research was funded by the Defense Threat Reduction Agency (DTRA) through prime contract HDTRA1-19-D-0007.
Institutional Review Board Statement
Not applicable.
Informed Consent Statement
Not applicable.
Data Availability Statement
No new data were created or analyzed in this study. Data sharing is not applicable to this article.
Conflicts of Interest
Jennifer Corbin and Peyton Tebon are employed by Aeris LLC. At the time this work was conducted, Audrey Cerles, Michael Haverkate, Susan Campbell, and John Hurst were employed by Gryphon Scientific. Rachel Woodul is employed by Battelle. John Hunzeker is employed by ARA. The remaining authors declare no conflicts of interest.
Abbreviations
The following abbreviations are used in this manuscript:
CDC | Centers for Disease Control and Prevention |
CFA | Center for Forecasting and Outbreak Analytics |
COVID-19 | Coronavirus Disease 2019 (caused by SARS-CoV-2) |
DICE | Dynamics of Interacting Community Epidemics |
EMOD | Epidemiological MODeling |
EPI | Epidemic Prediction Initiative |
FRED | Framework for Reconstructing Epidemic Dynamics – Model owned by Epistemix |
LEMMA | Local Epidemic Modeling for Management and Action |
LIDS | Lightweight Infectious Disease Simulator |
SIR | Susceptible-Infected-Recovered |
STEM | Eclipse Spatio-Temporal Epidemiological Modeler |
References
- Kermack, W.O.; McKendrick, A.G. A Contribution to the Mathematical Theory of Epidemics. Proc. R. Soc. Lond. Ser. A Contain. Pap. A Math. Phys. Character 1927, 115, 700–721. [Google Scholar]
- McBryde, E.S.; Meehan, M.T.; Adegboye, O.A.; Adekunle, A.I.; Caldwell, J.M.; Pak, A.; Rojas, D.P.; Williams, B.M.; Trauer, J.M. Role of modelling in COVID-19 policy development. Paediatr. Respir. Rev. 2020, 35, 57–60. [Google Scholar] [CrossRef]
- Ioannidis, J.P.A.; Cripps, S.; Tanner, M.A. Forecasting for COVID-19 has failed. Int. J. Forecast 2022, 38, 423–438. [Google Scholar] [CrossRef]
- Kreps, S.E.; Kriner, D.L. Model uncertainty, political contestation, and public trust in science: Evidence from the COVID-19 pandemic. Sci. Adv. 2020, 6, eabd4563. [Google Scholar] [CrossRef]
- Chin, V.; Samia, N.I.; Marchant, R.; Rosen, O.; Ioannidis, J.P.A.; Tanner, M.A.; Cripps, S. A case study in model failure? COVID-19 daily deaths and ICU bed utilisation predictions in New York state. Eur. J. Epidemiol. 2020, 35, 733–742. [Google Scholar] [CrossRef]
- Jewell, N.P.; Lewnard, J.A.; Jewell, B.L. Predictive Mathematical Models of the COVID-19 Pandemic: Underlying Principles and Value of Projections. JAMA 2020, 323, 1893–1894. [Google Scholar] [CrossRef]
- Paredes, M.R.; Apaolaza, V.; Marcos, A.; Hartmann, P. Predicting COVID-19 Vaccination Intention: The Roles of Institutional Trust, Perceived Vaccine Safety, and Interdependent Self-Construal. Health Commun. 2023, 38, 1189–1200. [Google Scholar] [CrossRef]
- Carroll, L.N.; Au, A.P.; Detwiler, L.T.; Fu, T.; Painter, I.S.; Abernethy, N.F. Visualization and analytics tools for infectious disease epidemiology: A systematic review. J. Biomed. Inform. 2014, 51, 287–298. [Google Scholar] [CrossRef]
- Kong, L.; Duan, M.; Shi, J.; Hong, J.; Chang, Z.; Zhang, Z. Compartmental structures used in modeling COVID-19: A scoping review. Infect. Dis. Poverty 2022, 11, 72. [Google Scholar] [CrossRef]
- Lorig, F.; Johansson, E.; Davidsson, P. Agent-based Social Simulation of the Covid-19 Pandemic: A Systematic Review. JASSS J. Artif. Soc. Soc. Simul. 2021, 24, 5. Available online: https://urn.kb.se/resolve?urn=urn:nbn:se:mau:diva-44859 (accessed on 30 April 2025). [CrossRef]
- Tang, L.; Zhou, Y.; Wang, L.; Purkayastha, S.; Zhang, L.; He, J.; Wang, F.; Song, P.X. A Review of Multi-Compartment Infectious Disease Models. Int. Stat. Rev. 2020, 88, 462–513. [Google Scholar] [CrossRef]
- Tolles, J.; Luong, T. Modeling Epidemics with Compartmental Models. JAMA 2020, 323, 2515–2516. [Google Scholar] [CrossRef]
- Mac, S.; Mishra, S.; Ximenes, R.; Barrett, K.; Khan, Y.A.; Naimark, D.M.; Sander, B. Modeling the coronavirus disease 2019 pandemic: A comprehensive guide of infectious disease and decision-analytic models. J. Clin. Epidemiol. 2021, 132, 133–141. [Google Scholar] [CrossRef]
- Mehalik, M.M.; Schunn, C. What Constitutes Good Design? A Review of Empirical Studies of Design Processes. Int. J. Engng Ed. 2006, 22, 519–532. [Google Scholar]
- CDC. Center for Forecasting and Outbreak Analytics. Available online: https://www.cdc.gov/forecast-outbreak-analytics/index.html (accessed on 30 April 2025).
- Epidemic Prediction Initiative. Available online: https://web.archive.org/web/20230825140245/https://predict.cdc.gov/ (accessed on 5 May 2025).
- Fenimore, P.; McMahon, B.; Hengartner, N.; Germann, T.; Mourant, J. A Suite of Mechanistic Epidemiological Decision Support Tools. Online J. Public Health Inform. 2018, 10, e62122. [Google Scholar] [CrossRef]
- Hadeed, S.J.; Broadway, K.M.; Schwartz-Watjen, K.T.; Tigabu, B.; Woodards, A.J.; Swiatecka, A.L.; Owens, A.N.; Wu, A. Notional Spread of Cholera in Haiti Following a Natural Disaster: Considerations for Military and Disaster Relief Personnel. Mil. Med. 2023, 188, e2074–e2081. [Google Scholar] [CrossRef]
- Dembek, Z.F.; Schwartz-Watjen, K.T.; Swiatecka, A.L.; Broadway, K.M.; Hadeed, S.J.; Mothershead, J.L.; Chekol, T.; Owens, A.N.; Wu, A. Coronavirus Disease 2019 on the Heels of Ebola Virus Disease in West Africa. Pathogens 2021, 10, 1266. [Google Scholar] [CrossRef]
- Broadway, K.M.; Schwartz-Watjen, K.T.; Swiatecka, A.L.; Hadeed, S.J.; Owens, A.N.; Batni, S.R.; Wu, A. Operational Considerations in Global Health Modeling. Pathogens 2021, 10, 1348. [Google Scholar] [CrossRef]
- Lawrence, A. Evaluating the Effectiveness of Public Health Measures During Infectious Disease Outbreaks: A Systematic Review. Cureus 2024, 16, e55893. [Google Scholar] [CrossRef]
- Wimberly, D.M. ArboMAP. Available online: http://ecograph.github.io/arbomap/ (accessed on 21 August 2025).
- IEM. COVID-19 Projection Dashboard. Available online: https://iem-modeling.com/ (accessed on 21 August 2025).
- predsci/DICE4. R. Predictive Science Inc. 2025. Available online: https://github.com/predsci/DICE4 (accessed on 18 August 2025).
- Eclipse Spatio-Temporal Epidemiological Modeler. projects.eclipse.org. Available online: https://projects.eclipse.org/projects/technology.stem (accessed on 18 August 2025).
- Tools. IDMOD. Available online: https://www.idmod.org/tools/ (accessed on 18 August 2025).
- Epistemix|Navigate a Changing World. Available online: https://epistemix.com/ (accessed on 18 August 2025).
- Luo, W. Visual Analytics of Geo-Social Interaction Patterns for Epidemic Control. Int. J. Health Geogr. 2016, 15, 28. [Google Scholar] [CrossRef]
- Local Epidemic Modeling for Management & Action. Available online: https://localepi.github.io/LEMMA/index.html (accessed on 18 August 2025).
- USC COVID-19 Forecasts. Available online: https://scc-usc.github.io/ReCOVER-COVID-19/#/about (accessed on 18 August 2025).
- Zou, D.; Wang, L.; Xu, P.; Chen, J.; Zhang, W.; Gu, Q. Epidemic Model Guided Machine Learning for COVID-19 Forecasts in the United States. medRxiv 2020. [Google Scholar] [CrossRef]
- Wang, L.; Chen, J.; Marathe, M. DEFSI: Deep Learning Based Epidemic Forecasting with Synthetic Information. Proc. AAAI Conf. Artif. Intell. 2019, 33, 9607–9612. [Google Scholar] [CrossRef]
- Community Flu 2.0|Pandemic Influenza (Flu)|CDC. Available online: https://archive.cdc.gov/www_cdc_gov/flu/pandemic-resources/tools/communityflu.htm (accessed on 18 August 2025).
- GLEAMviz. Available online: https://www.gleamviz.org/ (accessed on 21 August 2025).
- Borse, R.H.; Shrestha, S.S.; Fiore, A.E.; Atkins, C.Y.; Singleton, J.A.; Furlow, C.; Meltzer, M.I. Effects of Vaccine Program against Pandemic Influenza A(H1N1) Virus, United States, 2009–2010. Emerg. Infect. Dis. 2013, 19, 439–448. [Google Scholar] [CrossRef]
- pyPM. pyPM.ca. Available online: https://pypm.github.io/home/ (accessed on 18 August 2025).
- Bussel, F.V. fvbttu/squider. MATLAB. 2022. Available online: https://github.com/fvbttu/squider (accessed on 21 August 2025).
- Kissler, S.M.; Tedijanto, C.; Goldstein, E.; Grad, Y.H.; Lipsitch, M. Projecting the transmission dynamics of SARS-CoV-2 through the postpandemic period. Science 2020, 368, 860–868. [Google Scholar] [CrossRef]
- Estimate Real-Time Case Counts and Time-Varying Epidemiological Parameters. Available online: https://epiforecasts.io/EpiNow2/ (accessed on 18 August 2025).
- KendrickOrg/kendrick. Smalltalk. KendrickOrg. 2025. Available online: https://github.com/KendrickOrg/kendrick (accessed on 18 August 2025).
- Siklafidis, T.; TheoSikla/LIDS. Pascal. 2021. Available online: https://github.com/TheoSikla/LIDS (accessed on 18 August 2025).
- NSSAC/PatchSim. Jupyter Notebook. NSSAC. 2024. Available online: https://github.com/NSSAC/PatchSim (accessed on 18 August 2025).
- Jr, R.C.R.; Collins, J.K.; Team, F.; Murray, C.J. Forecasting the Trajectory of the COVID-19 Pandemic Under Plausible Variant and Intervention Scenarios: A Global Modelling Study; Social Science Research Network: Rochester, NY, USA, 2022. [Google Scholar] [CrossRef]
- Kim, K.; Kim, S.; Lee, D.; Park, C.-Y. Impacts of social distancing policy and vaccination during the COVID-19 pandemic in the Republic of Korea. J. Econ. Dyn. Control 2023, 150, 104642. [Google Scholar] [CrossRef]
- Guerstein, S.; Romeo-Aznar, V.; Dekel, M.; Miron, O.; Davidovitch, N.; Puzis, R.; Pilosof, S.; Althouse, B.M. The interplay between vaccination and social distancing strategies affects COVID19 population-level outcomes. PLoS Comput. Biol. 2021, 17, e1009319. [Google Scholar] [CrossRef]
- Mura, M.; Trignol, A.; Le Dault, E.; Tournier, J.-N. Lessons for medical countermeasure development from unforeseen outbreaks. Emerg. Microbes Infect. 2025, 14, 2471035. [Google Scholar] [CrossRef]
- COVID-19 UK. Available online: https://imperialcollegelondon.github.io/covid19local/#details (accessed on 18 August 2025).
- McCabe, R.; Kont, M.D.; Schmit, N.; Whittaker, C.; Løchen, A.; Walker, P.G.; Ghani, A.C.; Ferguson, N.M.; White, P.J.; Donnelly, C.A.; et al. Communicating uncertainty in epidemic models. Epidemics 2021, 37, 100520. [Google Scholar] [CrossRef]
- Cramer, E.Y.; Ray, E.L.; Lopez, V.K.; Bracher, J.; Brennen, A.; Rivadeneira, A.J.C.; Gerding, A.; Gneiting, T.; House, K.H.; Huang, Y.; et al. Evaluation of individual and ensemble probabilistic forecasts of COVID-19 mortality in the United States. Proc. Natl. Acad. Sci. USA 2022, 119, e2113561119. [Google Scholar] [CrossRef]
Table 1.
List of criteria for consideration when selecting an epidemiological model.

| Category | Possible Criteria |
|---|---|
| Epidemiological Factors | Pathogens included; population characteristics (subpopulation inclusion, prior/waning immunity, non-human hosts); geographic/environmental compatibility; temporal suitability (long range vs. short range); temporal resolution (hours, days, weeks, etc.) |
| Inputs and Outputs | Inputs (outbreak case data, transmission parameters); outputs (metrics and level of detail, frequency, uncertainty) |
| Model Structure | Model type (compartment, agent-based, statistical); stochastic or deterministic |
| Computational Requirements | Computational expense; software required; ability to update/open source |
| User Interaction | User interface; level of expertise required; model run time |
| Flexibility | Can model multiple agents; modifiable transmission and disease parameters; modifiable populations; geographic/temporal scalability |
| Mitigation | Medical and non-medical countermeasures; timing of interventions; behavioral changes; partial interventions and missed doses |
| Visualization | Visualization tools; interactive dashboards; standardized output formats; automated report generation |
| Credibility | Accuracy (for a region of interest, for the time period of interest, for specific circumstances); data sources and quality; assumptions; reproducibility; developers; documentation; public perception |
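To make the "model type" distinction in Table 1 concrete, the following is a minimal sketch of the simplest compartment model, a deterministic SIR, stepped forward with the Euler method. It is not any of the tools surveyed here; all parameter values (transmission rate `beta`, recovery rate `gamma`, population size) are illustrative only.

```python
# Minimal deterministic SIR compartment model, advanced with forward Euler.
# Illustrative parameters only: beta (transmission rate per day),
# gamma (recovery rate per day), n (closed population size).

def simulate_sir(beta=0.3, gamma=0.1, n=1_000_000, i0=10, days=160, dt=1.0):
    """Return the (S, I, R) trajectory for a closed, homogeneously mixing population."""
    s, i, r = n - i0, float(i0), 0.0
    history = [(s, i, r)]
    for _ in range(int(days / dt)):
        new_infections = beta * s * i / n * dt  # S -> I flow this step
        new_recoveries = gamma * i * dt         # I -> R flow this step
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
        history.append((s, i, r))
    return history

trajectory = simulate_sir()
peak_infected = max(i for _, i, _ in trajectory)
```

A stochastic variant of the same structure would draw the two flows from binomial distributions instead of using their expected values, which is the stochastic/deterministic distinction the table refers to.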
Table 2.
Evaluation criteria for example assessment.

| Category | Subcategory | Model Feature |
|---|---|---|
| Flexibility | Has the ability to adjust or model: | A generic respiratory disease |
| | | A novel disease |
| | | Diseases that would likely follow a natural disaster |
| | | Vector-borne diseases |
| | Has the ability to adjust or model the following disease parameters: | Disease-specific parameters (e.g., R0, incubation period) |
| | | The effect of waning immunity, as seen in COVID-19 |
| | | The impact of multi-strain or change of (dominant) strain of agents |
| | | Co-infection with two pathogen variants |
| | | For vector-borne diseases, the variation of vectors under climate change scenarios |
| | | For vector-borne diseases, growth of the human population or changing interactions with the vector species |
| | Has the ability to adjust or model the following transmission parameters: | Disease seasonality |
| | | Whether susceptibility or mortality is higher as a result of co-infection |
| | | Age differences in susceptibility |
| | | Sex differences in susceptibility |
| | | Modification of susceptibility in sub-populations (e.g., stratified by age/sex) and over time |
| | | Contributions of vectors and animal reservoirs to disease transmission |
| | | Major modes of transportation in the area |
| | | Forecast disease incidence based on seasonal trends |
| | Has the ability to adjust or model the following population parameters: | Disease spread within a small population (<5000 people) |
| | | Customizable population size |
| | | Behavioral parameters of the population (e.g., patterns of social interaction) |
| Visualizations | Has the following visual elements: | Aspects/attributes of the model provide visualization for forecasts |
| | | Alternative methods for presenting an epidemiological outbreak, besides traditional epidemic curves |
| Mitigations | Has the following mitigations: | The ability to adjust or model pharmaceutical interventions |
| | | The ability to adjust or model non-pharmaceutical interventions |
| | | Impact of medical response on the outbreak progression can be modeled |
| | | Impact of foreign assistance withdrawal on the spread of disease can be modeled |
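An assessment against criteria like those in Table 2 can be turned into a ranking with a simple weighted score. The sketch below is hypothetical: the category weights, the 0–5 scores, and the model names "Model A"/"Model B" are invented for illustration and are not assessments of any tool discussed in this review.

```python
# Hypothetical weighted scoring of candidate models against assessment
# categories. Weights and scores are invented placeholders, not evaluations
# of real tools; a real assessment would elicit both from the review team.

WEIGHTS = {"Flexibility": 0.4, "Visualizations": 0.2, "Mitigations": 0.4}

# Per-category scores on a 0-5 scale, assigned by a (hypothetical) reviewer.
candidate_scores = {
    "Model A": {"Flexibility": 5, "Visualizations": 3, "Mitigations": 4},
    "Model B": {"Flexibility": 2, "Visualizations": 5, "Mitigations": 3},
}

def weighted_score(scores: dict) -> float:
    """Sum of category scores weighted by the organization's priorities."""
    return sum(WEIGHTS[category] * value for category, value in scores.items())

# Rank candidates from best to worst overall fit.
ranking = sorted(candidate_scores,
                 key=lambda name: weighted_score(candidate_scores[name]),
                 reverse=True)
```

Changing the weights to reflect a different organization's priorities (e.g., weighting Visualizations highest for a public-facing dashboard) can reorder the ranking without re-scoring the models.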
Table 3.
Models with high flexibility scores in agent/pathogen selection.

| Model Name | Model Developer | Agent List |
|---|---|---|
| Dynamics of Interacting Community Epidemics (DICE) [24] | Predictive Science Inc., San Diego, CA, USA | Chikungunya; cholera; COVID-19; dengue; Ebola; influenza; Lassa fever; Lyme disease; measles; MERS-CoV; plague; SARS-CoV; yellow fever; Zika |
| Epidemiological MODeling (EMOD) [26] | Institute for Disease Modeling, Bill and Melinda Gates Foundation, Seattle, WA, USA | Any |
| EpiNow2 [39] | London School of Hygiene and Tropical Medicine, London, United Kingdom | Many; no native support for vector-borne diseases (VBDs) |
| Epistemix (FRED) [27] | University of Pittsburgh, Pittsburgh, PA, USA; Epistemix, Pittsburgh, PA, USA | Any |
| GLEAMViz [34] | Northeastern University, Boston, MA, USA | Any |
| Kendrick [40] | KendrickOrg, Atlanta, GA, USA | Any |
| Lightweight Infectious Disease Simulator (LIDS) [41] | University of Thessaly, Volos, Greece | Any |
Table 4.
Models with many user-defined parameters.

| Model Name | Model Developer | User-Defined Parameters |
|---|---|---|
| CommunityFlu 2.0 [33] | CDC, Atlanta, GA, USA | Users can modify any of 320 inputs. Accounts for age (children, adults, and the elderly). |
| Eclipse Spatio-Temporal Epidemiological Modeler (STEM) [25] | IBM Corporation, Armonk, NY, USA; Eclipse Foundation, Brussels, Belgium | Users can input agent-specific parameters (e.g., transmission rates, mortality rate, loss of immunity). |
| Epidemiological MODeling (EMOD) [26] | Institute for Disease Modeling, Bill and Melinda Gates Foundation, Seattle, WA, USA | The user can define populations and their demographics/geographies, as well as aspects of the simulation, including disease characteristics such as infectivity and route of transmission. Each agent (such as a human or vector) can be assigned a variety of "properties" (e.g., age, gender). |
| PatchSim [42] | University of Virginia, Charlottesville, VA, USA | Users can control parameters across space and time with week-to-week and month-to-month granularity. |
| PyPM [36] | University of Victoria, Victoria, BC, Canada; TRIUMF, Vancouver, BC, Canada | Most parameters can be input by users. Compartments can be modified to users' specifications. |
Table 5.
Models with broad geographic capabilities.

| Model Name | Model Developer | Geographies | Spatial Granularity |
|---|---|---|---|
| Eclipse Spatio-Temporal Epidemiological Modeler (STEM) [25] | IBM Corporation, Armonk, NY, USA; Eclipse Foundation, Brussels, Belgium | Any | Country; state; county |
| Epidemiological MODeling (EMOD) [26] | Institute for Disease Modeling, Bill and Melinda Gates Foundation, Seattle, WA, USA | Any | User-defined, at any scale from hyperlocal (e.g., household) to regional to national |
| SIkJalpha [30] | University of Southern California, Los Angeles, CA, USA | Any (country); US (county, state) | Any (e.g., hospital, city, county, state, country) |
Table 6.
Models with strong visualization capabilities.

| Model Name | Model Developer | Visualizations |
|---|---|---|
| GLEAMViz [34] | Northeastern University, Boston, MA, USA | Output with maps, charts, and other useful graphics. |
| IHME COVID-19 Model [43] | Institute for Health Metrics and Evaluation, Seattle, WA, USA | Curves for each output for all selected locations; maps of locations within a region, color-coded according to outputs. |
Table 7.
Models that allow for significant modeling of pharmaceutical and non-pharmaceutical mitigations.

| Model Name | Model Developer | Mitigations and Interventions |
|---|---|---|
| Eclipse Spatio-Temporal Epidemiological Modeler (STEM) [25] | IBM Corporation, Armonk, NY, USA; Eclipse Foundation, Brussels, Belgium | Able to incorporate diverse datasets, including real-time data (e.g., weather and environmental data). Can model spread across years for pandemics. Includes examples of model parameterization for notable diseases of interest, including endemic, emerging infectious, and vector-borne diseases. |
| Epidemiological MODeling (EMOD) [26] | Institute for Disease Modeling, Bill and Melinda Gates Foundation, Seattle, WA, USA | The user can define the interventions that take place. Complex interventions can be modeled (e.g., the cascade of care for HIV infections). Mitigations can be applied to specific subpopulations. |
| IHME COVID-19 Model [43] | Institute for Health Metrics and Evaluation, Seattle, WA, USA | Shows how policy decisions impact the trajectory of COVID-19. Factors in important drivers of trends in COVID-19, such as vaccination rates, mobility data, and self-reported mask use. Accounts for vaccine usage and vaccine efficacy against different variants. Can model the efficacy of antivirals. |
| Imperial College London COVID-19 Model/Epidemia [47] | Imperial College London, London, United Kingdom | Can model the effect of non-pharmaceutical interventions, focusing especially on mobility and social interaction; assumes that changes in Rt are an immediate response to interventions. Can also model the effect of pharmaceutical interventions. |
| Local Epidemic Modeling for Management and Action (LEMMA) [29] | UC Berkeley Collaboration, Berkeley, CA, USA | Supports long-term projections. Users may specify the timing and impact of various public health interventions, such as school closures and shelter-in-place orders. Interventions may be dated before the current date, to reflect measures already in place, or after it, to simulate the future course of the epidemic if measures are implemented or lifted. Multiple interventions may be applied at the same time. Vaccination data may be incorporated. |
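The "timing of interventions" capability that several of these tools expose can be sketched in its simplest form: a step change in the transmission rate of a basic SIR model on a user-chosen day. This is an illustrative toy, not the implementation used by any tool in Table 7, and every parameter value is invented for demonstration.

```python
# Illustrative sketch of intervention timing: from start_day onward, the
# transmission rate beta is scaled down by `reduction` (e.g., 0.5 for a
# 50% reduction in contacts). All numbers are invented for demonstration.

def sir_with_intervention(beta=0.3, gamma=0.1, n=100_000, i0=10,
                          days=200, start_day=40, reduction=0.0):
    """Run daily SIR steps and return the daily infected counts."""
    s, i, r = n - i0, float(i0), 0.0
    infected = []
    for day in range(days):
        b = beta * (1 - reduction) if day >= start_day else beta
        new_infections = b * s * i / n
        new_recoveries = gamma * i
        s, i, r = s - new_infections, i + new_infections - new_recoveries, r + new_recoveries
        infected.append(i)
    return infected

# Compare an unmitigated epidemic to one with a 50% contact reduction at day 40.
baseline = sir_with_intervention(reduction=0.0)
mitigated = sir_with_intervention(reduction=0.5)
```

Shifting `start_day` earlier or later shows why intervention timing matters: the two trajectories are identical up to the intervention day and only diverge afterward, so a later start locks in the pre-intervention growth.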
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).