Review

A Review of Ethical Challenges in AI for Emergency Management

by Xiaojun (Jenny) Yuan 1,*, Qingyue Guo 2, Yvonne Appiah Dadson 1, Mahsa Goodarzi 1, Jeesoo Jung 3, Yanjun Dong 3, Nisa Albert 1, DeeDee Bennett Gayle 1, Prabin Sharma 1, Oyeronke Toyin Ogunbayo 1 and Jahnavi Cherukuru 1

1 College of Emergency Preparedness, Homeland Security and Cybersecurity, University at Albany, State University of New York, Albany, NY 12222, USA
2 School of Information Management, Wuhan University, Wuhan 430072, China
3 School of Social Welfare, University at Albany, State University of New York, Albany, NY 12222, USA
* Author to whom correspondence should be addressed.
Knowledge 2025, 5(3), 21; https://doi.org/10.3390/knowledge5030021
Submission received: 3 April 2025 / Revised: 16 July 2025 / Accepted: 17 July 2025 / Published: 21 September 2025

Abstract

As artificial intelligence (AI) technologies are increasingly integrated into emergency management, ethical considerations demand greater attention. Essential components of comprehensive emergency management include mitigation, preparedness, response, and recovery, which should serve as the foundation for integrating AI-driven science and technologies to effectively safeguard populations and infrastructure in times of crisis. This paper reviews the ethical challenges of AI in emergency management in terms of critical issues, best practices, applications, emerging ethical considerations, and strategies for addressing ethical challenges. Three core ethical themes are identified: algorithmic bias; privacy, transparency, and accountability; and human–AI collaboration. The paper analyzes the associated ethical challenges, reviews relevant theoretical frameworks, and proposes strategies to mitigate ethical challenges by strengthening algorithm audits, enhancing transparency in AI decision-making, and incorporating stakeholder engagement. Finally, the importance of creating policies to govern AI ethics is discussed.

1. Introduction

The increased frequency and intensity of disasters have posed severe threats to society, the economy, and the environment [1]. For instance, disasters like hurricanes, wildfires, pandemics, and infrastructure failures leave more than immediate damage—they often cause long-term consequences such as injuries, trauma, economic hardship, and social instability [2,3,4]. As climate change, pandemics, and a wide range of emergencies become increasingly frequent and severe, societies must continue to refine and adjust their emergency preparedness and disaster response strategies to mitigate effects from disasters, enhance resilience, and accelerate recovery efforts. Artificial intelligence (AI) is now increasingly leveraged to prepare for, respond to, and recover from disasters. Yet, these technological advancements are accompanied by profound ethical challenges that demand careful attention from policymakers, practitioners, researchers, and stakeholders.
The emergency management cycle defines four phases of emergency management: mitigation, preparedness, response, and recovery [5]. As each phase presents complex logistical, informational, and decision-making challenges, AI has emerged as a transformative tool in emergency management. AI-driven technologies provide significant advantages by analyzing vast datasets, predicting disaster patterns, optimizing resource allocation, and facilitating real-time decision-making [6,7,8]. For example, Pacific Northwest National Laboratory's Rapid Analytics for Disaster Response (RADR) software enables situational awareness and damage assessment within hours of a disaster, illustrating AI's potential to enhance emergency response efficiency [9]. AI applications (e.g., machine learning models for flood forecasting and evacuation planning) play an important role in modern emergency preparedness strategies by helping reduce casualties and economic losses [9].
AI has been utilized to improve many aspects of emergency management, from predicting and detecting disasters to optimizing response and recovery efforts. The growing reliance on AI in disaster management reflects broader global trends, including climate change, urbanization, and geopolitical instability [10,11,12], all of which increase the complexity of emergency scenarios. AI-powered predictive analytics, automation, and decision support systems enhance emergency response efforts, enabling public health agencies and disaster management organizations to operate more effectively despite resource constraints [13]. These advancements signal a paradigm shift, in which AI is not a supplementary tool but a crucial component of disaster risk management, strengthening preparedness, response coordination, and recovery efforts [14]. As these advancements continue, ethical considerations should be an integral part of AI implementation. Ethical concerns such as privacy, accountability, and potential biases in AI algorithms pose risks that could undermine the trust and reliability of emergency management processes [15]. As AI continues to reshape disaster response frameworks, it is imperative for researchers, practitioners, stakeholders, and policymakers to address these ethical challenges in a way that ensures AI applications in emergency management are equitable, transparent, and aligned with societal values. Policymakers should implement policies that help mitigate bias and uphold ethical standards in AI. By promoting ethical use of AI, stakeholders can maximize benefits while mitigating potential risks and strengthening global resilience to disasters.
This paper aims to provide a comprehensive review of the evolving interplay between AI ethics and emergency management, while highlighting both the transformative impacts of AI technologies and the ethical issues they introduce. The review is multidimensional, considering the technical, ethical, and practical facets of AI applications in emergency management. In particular, the paper addresses the following research question (RQ):
RQ: What are the key challenges of AI ethics in emergency management, and how can they be addressed?

2. Previous Work

2.1. AI in Emergency Management

Emergency management in the US involves various stakeholders, such as governments, non-governmental organizations, and local communities, each contributing to disaster preparedness, response, and long-term recovery efforts [16]. Given that emergency management necessitates a prompt response, AI can enhance effective and efficient communication and collaboration among multiple stakeholders during disasters by predicting outcomes, identifying vulnerable places, facilitating organized and streamlined communications, summarizing the main points, mitigating further threats, and accelerating the recovery process by optimizing planning [16,17,18,19]. Despite the advantages offered by the deployment of AI, certain risks exist. In emergency management, the risks around AI include the following: (1) exacerbating existing social inequalities through biased datasets; (2) privacy and security issues; (3) restricted opportunities for public engagement in disaster management; (4) restricting the involvement of human experts; and (5) the potential for exaggeration of AI capabilities due to private sector funding (Global Facility for Disaster Reduction and Recovery [20]). The European Commission (2020) [21] proposed the first binding worldwide horizontal regulation on AI, the AI Act, setting a common framework for the use and supply of AI systems in the EU with a focus on a human-centric approach [22]. This regulatory framework for AI consists of seven key requirements: human agency and oversight; technical robustness and safety; privacy and data governance; transparency; diversity, non-discrimination and fairness; societal and environmental wellbeing; and accountability [21]. While such nationwide regulation does not exist in the U.S., a few states have adopted AI frameworks with limited scope, such as California (S.B. 1047, the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act), Colorado (S.B. 24-205, the Colorado Artificial Intelligence Act (CAIA)), Utah (S.B. 149, the Artificial Intelligence Policy Act), and Tennessee (S.B. 2096, the ELVIS Act, or Ensuring Likeness Voice and Image Security Act). However, new proposed legislation at the federal level may place a moratorium on state AI regulations [23].
In the realm of disaster prediction and early warning, AI algorithms can analyze vast amounts of heterogeneous data from such sources as weather sensors, satellite imagery, and historical records to identify patterns and forecast the likelihood and severity of impending disasters such as hurricanes, floods, earthquakes, and wildfires [6,17,24,25,26,27]. Deep learning models, such as Recurrent Neural Networks with Long Short-Term Memory (LSTM), are well-suited for predicting the path and intensity of events because they excel at processing sequential data and forecasting time-dependent natural phenomena [28]. Other deep learning techniques combined with Convolutional Neural Networks contribute to building multimedia platforms for flood disaster management by analyzing spatial data from satellite imagery [28]. Furthermore, Deep Neural Networks can enhance weather and climate forecasting [29]. While time-series models can help predict natural disaster occurrences and public opinion trends, they are more suitable for analyzing data collected over time to predict future patterns [30]. Some machine learning algorithms can predict potential critical events and emergencies by analyzing diverse datasets, including weather patterns and historical crisis data [12]. The U.S. Geological Survey’s ShakeAlert system [31] and Google’s flood forecasting initiative [32] exemplify AI’s potential in this domain.
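To make the time-series forecasting idea concrete, the sketch below shows a minimal LSTM regressor of the kind described above, trained to predict the next reading of a synthetic river-gauge series from a window of past readings. It is an illustration only, not the cited ShakeAlert or Google systems; the window size, architecture, and data are assumptions.

```python
# Minimal sketch: an LSTM predicting the next sensor reading from a 24-step
# window of history. The "gauge" series here is synthetic, not real data.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Synthetic gauge series: a noisy seasonal signal standing in for real sensors.
t = torch.arange(0, 200, 0.1)
series = torch.sin(0.2 * t) + 0.1 * torch.randn_like(t)

WINDOW = 24  # steps of history used to predict the next reading (assumption)
X = torch.stack([series[i:i + WINDOW] for i in range(len(series) - WINDOW)])
y = series[WINDOW:]

class GaugeLSTM(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                    # x: (batch, window)
        out, _ = self.lstm(x.unsqueeze(-1))  # (batch, window, hidden)
        return self.head(out[:, -1, :]).squeeze(-1)  # next-value prediction

model = GaugeLSTM()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(5):  # a few full-batch epochs, for illustration only
    opt.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    opt.step()
    print(f"epoch {epoch}: mse={loss.item():.4f}")
```

In practice, such a model would be trained on historical sensor records and validated against held-out events before informing any warning decision.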

2.2. Ethical Issues Related to AI in Emergency Management

Fjeld et al. [33] identified eight globally recognized principles that guide the ethical implications of AI in emergency contexts: privacy, accountability, safety and security, transparency and explainability, fairness and non-discrimination, human control of technology, professional responsibility, and human values. As AI becomes more prominent in emergency management, researchers have begun to explore the ethical challenges it raises. In emergency response management, the ethical deployment of AI is not merely a technical issue but a fundamental humanitarian concern. Humanitarian standards for AI, as outlined in the Recommendation on the Ethics of Artificial Intelligence by the United Nations Educational, Scientific and Cultural Organization [34] and the Principles for the Ethical Use of AI in the UN System by the United Nations System Chief Executives Board for Coordination [35], emphasize the protection of human dignity, equality, and fundamental freedom. These standards call for the principles of no harm, justifiability, safety and security, equity, sustainability, transparency, accountability, and the protection of vulnerable populations during crises.
Firstly, algorithm recommendations have shown bias against minority groups, which reinforces social inequalities that already exist [36]. This algorithmic bias can be caused by biased training data or flawed algorithm design [16]. In times of crises, limited and fragmented information can fuel the spread of unreliable misinformation and disinformation through rumors and fake news [19,37]. The most recent case of the COVID-19 pandemic brought attention to the concept of an 'infodemic,' referring to the widespread circulation of inaccurate and deceptive information during a public health crisis [38]. Though some researchers contend that AI has the capability to detect misinformation (e.g., [19]), there are still ongoing concerns about potential biases. Secondly, because disaster risk management has a strong humanitarian focus, concerns about privacy and security are often balanced against the need for openness and transparency [36]. However, such work often requires the collection and analysis of substantial quantities of sensitive personal information [37]. The aggregation and application of large datasets raises questions about potential threats to privacy and security [16]. Thirdly, predictions made by AI are often difficult to explain, even for the developers of the machine learning system [36,37]. Additionally, deciding who is responsible for managing the information generated by automated systems is also an important consideration [37]. Ensuring transparency and accountability is crucial for establishing trust with the public and identifying potential biases or errors [16]. Fourthly, hallucinations—defined as outputs that seem plausible but are factually inaccurate [39]—can occur due to constraints in the AI model's understanding, training data biases, or the model's tendency to generate outputs that appear contextually relevant [40]. They may pose a disproportionate risk to vulnerable populations who often lack the resources to detect or correct errors, thus causing inappropriate decision-making or interventions in social services that further exacerbate existing inequities [41]. Lastly, the integration of advanced AI systems into the workforce may transform job roles, creating new opportunities while also potentially displacing some workers [16,36].
In sum, a significant body of literature has explored the ethical considerations of employing AI, yet a clear research gap remains: existing studies rarely offer realistic or actionable strategies for tackling these challenges. For instance, no specific strategies have been proposed for building trust and public acceptance, ensuring fair access for underprivileged individuals and communities, developing ways to identify and address data biases, or enhancing human capabilities while taking cultural, linguistic, geographic, and organizational contexts into account [16,33,37].
In this paper, we begin by discussing the applications of AI in emergency management, followed by an exploration of current ethical considerations and strategies for addressing ethical challenges in this field. We conclude with suggestions for future research directions.
The literature for this narrative review was identified through targeted searches of English-language peer-reviewed publications using various combinations of keywords such as "AI," "ethics," "emergency management," "risk management," "disaster response," "privacy," "crisis," "AI emergency response," "machine learning," "crisis management," "disaster," and "decision support." Searches were conducted across databases including PubMed, IEEE Xplore, Web of Science, and Google Scholar, covering publications available as of May 2024. Hand searches of journals and reference lists were also conducted to ensure comprehensive coverage. Bibliographic analysis was used to assess the relevance of the selected studies, focusing on thematic areas related to AI applications in emergency management and ethical considerations, as well as methodological approaches and ethical frameworks relevant to AI decision-making in emergency contexts. To capture key themes broadly, this review did not apply strict inclusion/exclusion criteria or a predefined time frame.

3. Applications of AI in Emergency Management

AI is proving effective for optimizing resource allocation and logistical planning during emergencies [42]. Machine learning models can help emergency managers determine the optimal positioning of supplies, equipment, and personnel based on such factors as infrastructure status, population density, and real-time ground conditions [42,43,44]. Machine learning and optimization algorithms support allocating resources effectively, minimizing response time, and responding to changing needs in emergency situations [45]. For instance, in the aftermath of Hurricane Harvey in 2017 and Hurricane Ian in 2022, AI-assisted damage assessment using satellite and drone imagery enabled aid to be allocated more efficiently to the hardest-hit areas [46,47,48,49]. Additionally, reinforcement learning optimizes task allocation in multi-robot emergency response systems to improve efficiency in dynamic environments [28].
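Resource allocation of this kind is often formulated as a transportation problem solvable with standard optimization tools. The sketch below, a hypothetical example rather than any cited system, minimizes delivery cost when shipping relief supplies from depots to affected zones under capacity and demand constraints; all quantities are invented.

```python
# Minimal transportation-problem sketch for relief-supply allocation.
# Depots, zones, costs, and quantities are hypothetical.
import numpy as np
from scipy.optimize import linprog

supply = np.array([100, 80])        # units available at each depot
demand = np.array([60, 70, 40])     # units needed in each affected zone
cost = np.array([[4, 6, 9],         # travel cost: depot i -> zone j
                 [5, 3, 7]])

n_dep, n_zone = cost.shape
c = cost.flatten()                  # decision vars x[i, j], flattened row-major

# Depot capacity: sum_j x[i, j] <= supply[i]
A_cap = np.zeros((n_dep, n_dep * n_zone))
for i in range(n_dep):
    A_cap[i, i * n_zone:(i + 1) * n_zone] = 1

# Zone demand: sum_i x[i, j] >= demand[j]  ->  -sum_i x[i, j] <= -demand[j]
A_dem = np.zeros((n_zone, n_dep * n_zone))
for j in range(n_zone):
    A_dem[j, j::n_zone] = -1

res = linprog(c,
              A_ub=np.vstack([A_cap, A_dem]),
              b_ub=np.concatenate([supply, -demand]),
              bounds=(0, None), method="highs")
print(res.x.reshape(n_dep, n_zone))  # optimal shipment plan, depot x zone
```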
In terms of real-time response and coordination, AI enhances both the capacity and speed of emergency services [25,50,51]. Natural language processing enables AI-powered chatbots and virtual assistants to handle and prioritize massive volumes of incoming requests for help [30,52,53], while computer vision enables drones and robots to autonomously search for survivors and provide critical situational awareness to rescue teams [54,55,56,57]. Deep learning is also applied to images of disaster areas to identify survivors through image recognition and pattern detection [45]. Such techniques are important for rapidly analyzing visual information, such as crowdsourced image data, and for assessing damage and identifying critical areas [58]. For example, the use of AI to guide firefighting strategies during the devastating 2019–20 Australian bushfires illustrates its potential in this regard [59,60]. Natural language processing techniques are useful for analyzing social media posts to provide real-time insights into public sentiment and situational updates during disasters by processing human language and extracting relevant information and key themes through vectorization [61,62]. Similarly, clustering algorithms can identify emergency-related topic areas on social media and highlight high-impact locations [61]. When it comes to identifying panic and misinformation on social media, AI classifiers help manage rumors and improve emergency communication [62]. Ontologies also structure information to support analysis and reasoning, such as for text classification [61,63]. Systems utilizing multi-modal data enable situational awareness and damage assessment [62].
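As a small illustration of the vectorization-and-clustering approach mentioned above, the sketch below groups toy crisis-related posts by topic using TF-IDF and k-means. The posts and the choice of three clusters are assumptions for demonstration only.

```python
# Minimal sketch: clustering crisis-time social media posts by topic.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

posts = [  # toy stand-ins for real posts
    "flood water rising on main street need sandbags",
    "road closed by flood near the bridge",
    "power outage across the east side since noon",
    "no electricity in our block transformer down",
    "shelter open at the high school with food and water",
    "where is the nearest shelter for families",
]

vec = TfidfVectorizer(stop_words="english")  # vectorize the text
X = vec.fit_transform(posts)

km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
for label, post in zip(km.labels_, posts):
    print(label, post)  # posts grouped into flood / power / shelter topics
```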
One transformative application of AI is in the deployment of drones, or unmanned aerial vehicles (UAVs), for disaster response tasks. AI-powered drones can deliver essential supplies to inaccessible areas, conduct search-and-rescue operations, and assess damage in hazardous environments. For example, during simulated earthquake relief operations in Tehran, 460 UAV helicopters equipped with AI were projected to transport nearly 100,000 kg of supplies from three supply centers to 44 demand points within just 2.5 h [64].
Experts predict that AI will become even more integral to emergency management. As edge computing continues to improve, more AI systems will be able to run directly on devices like drones and smart sensors, which leads to reduced latency and quicker, more localized responses [25,65]. Digital twins of cities, powered by machine learning, will facilitate more sophisticated disaster simulation and mitigation planning [66,67,68]. Meanwhile, improving the interpretability and robustness of AI models will be crucial for building trust and usability in high-stakes emergency applications [69,70].
Emergency crisis management using AI technologies considers place, time, and people in meaningful ways. For instance, AI can identify affected areas through geospatial analysis and satellite imagery [28] and through data analysis using time-series models [30], helping teams respond more effectively to the areas that need help most. It also helps allocate resources more efficiently by pinpointing high-risk zones [71,72]. AI enables quicker responses by analyzing incoming data in real time to predict disasters and support informed decision-making [6,30]. It can also assess community needs and prioritize the most urgent ones in marginalized communities [6,7,8,71].

4. Current Ethical Challenges of AI in Emergency Management

4.1. AI Algorithmic Bias

The rapid advance of AI holds outstanding potential for improving decision-making in emergency management, but meaningful ethical challenges also arise that must be carefully addressed [72]. As AI becomes more deeply integrated into this critical field, it is crucial to think ahead about fairness, accountability, transparency, and human oversight to make sure these systems are used responsibly [73,74]. Although AI is not subject to human cognitive limitations such as fatigue, it can replicate and amplify human and historical biases embedded in our society and in the training data [75,76,77,78]. This fundamental issue may arise when data is imbalanced, unreliable, or under-representative, and it can eventually lead to biased outcomes, discrimination or inequality, and inaccurate or misleading results [77]. AI systems pose the risk of amplifying existing biases present in historical data, causing inequitable outcomes that disproportionately affect vulnerable populations [79,80]. For example, Obermeyer et al. [81] found that a widely adopted healthcare AI algorithm for allocating resources systematically underestimated the needs of Black patients. To address this issue, management groups should gather complete data, routinely check AI systems for impartiality, and have multiple parties help create and operate these tools [82,83].
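One routine impartiality check of the kind suggested above is a selection-rate comparison across demographic groups. The sketch below computes a disparate-impact ratio on a hypothetical audit log; the groups, outcomes, and the informal 0.8 threshold are illustrative, not a complete fairness audit.

```python
# Minimal sketch of a group selection-rate audit on hypothetical data.
import pandas as pd

# One row per aid application scored by a model (invented for illustration).
df = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "approved": [1,   1,   1,   0,   1,   0,   0,   0],
})

rates = df.groupby("group")["approved"].mean()  # approval rate per group
print(rates)

# Disparate impact: ratio of the lowest to the highest selection rate.
# The informal "80% rule" flags ratios below 0.8 for further review.
di = rates.min() / rates.max()
print(f"disparate impact ratio: {di:.2f}" + ("  <- review" if di < 0.8 else ""))
```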
Historical data from emergency management may lead to inequitable AI algorithms when used across all phases of disaster. Researchers have noted significant racial disparities [84,85,86,87], gender concerns [88,89], and exacerbated wealth inequality [90] in disaster recovery and aid distribution. During evacuation and search and rescue efforts, there has been documented LGBTQ bias in response [91], gender bias at shelters [92], and challenges with facial recognition software [93]. During logistics and planning, studies have discussed racial disparities with community planning [94], household planning [95], as well as decision biases among emergency managers [96] and some first responders [97].
In emergency management, biased algorithms in AI systems can cause unequal resource allocation, such as prioritizing certain areas or communities for aid or support based on flawed results and inaccurate risk assessment, while leaving other areas underserved [71,75]. Moreover, bias in AI systems can disproportionately affect vulnerable or tribal communities with limited access by excluding them from planning processes [71,78,98]. Additionally, facial recognition technology, which is less accurate at detecting darker skin tones, could lead to misidentification and denial of resources during emergencies [99]. In emergency health departments that require rapid decision-making, AI-driven triage systems could unfairly prioritize patients based on social factors such as race, gender, or disability regardless of their likelihood of survival [78]. Community perceptions of AI systems as biased could lead to a lack of trust and reduced adoption of beneficial AI technologies [75,100].

4.2. AI Transparency, Privacy and Accountability

Another key challenge is ensuring sufficient transparency, privacy, and accountability in AI-assisted decision-making [101,102]. In emergency management, all decisions can have life-or-death consequences [103,104]. Many AI systems, particularly deep learning models, act as black boxes, making it hard to understand how decisions are made, to explain the underlying decision-making processes and output rationale [76,106], or to determine who is responsible when something goes wrong [105]. This lack of transparency can erode trust in emergency AI systems in high-stakes situations [75,107].
The emerging field of explainable AI focuses on addressing these problems by providing justifications for actions taken by AI systems [107]. It is important to establish ways to audit, explain, and appeal decisions made by AI systems. Ensuring meaningful public scrutiny and redress requires new governance frameworks and technical solutions [108,109]. Given that AI systems often rely on personal data, this dependence introduces significant privacy risks [72,110]. Even though data-driven insights can significantly improve emergency response efforts, it is important to ensure that the collection and use of personal data are transparent, secure, and limited to legitimate purposes [33,111]. Addressing this requires data governance policies that give individuals control over their personal data and hold organizations accountable for responsible data practices [112,113].
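As a minimal illustration of explainability tooling, the sketch below uses permutation importance, one simple model-agnostic technique, to show which input features a classifier relies on. The synthetic data and model are assumptions; real emergency systems would need far richer explanations than feature rankings.

```python
# Minimal sketch: model-agnostic feature importance via permutation.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Synthetic data: features could stand in for vitals, location risk, etc.
X, y = make_classification(n_samples=500, n_features=5, n_informative=3,
                           random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

# How much does shuffling each feature degrade accuracy? Bigger drop =
# the model leans on that feature more.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, imp in enumerate(result.importances_mean):
    print(f"feature_{i}: importance {imp:.3f}")
```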
For emergency managers to trust and use emergency AI systems, it is critical that they understand the reasoning behind decisions [100,106,107]. Besides explainability, transparency about training data in terms of data type, collection method, and bias potential is important in building trust [75]. Additionally, clear frameworks of accountability, ethical principles, and legal requirements are crucial for AI decision-making in high-stakes situations, especially those that significantly impact people's lives [106,114]. To our knowledge, vulnerable populations have received comparatively little consideration in the context of AI decision-making.
Information and communication systems, such as social media, play important roles during emergencies and often contain personally identifiable information (PII). Researchers and practitioners must be mindful of explicit or inferred PII (e.g., location data) to prevent any violation of individuals' expectations of privacy [115]. Such issues have created tension between the need to collect comprehensive data for adequately training, testing, and validating AI models, and the principle of data minimization, which supports collecting only the data that is truly needed [78,116]. Collecting more data may lead to better models but can compromise privacy and transparency [78]. Individuals have the right to know what type of information they share, who has access and why, and how long it will be stored, without carrying the burden of excessive data collection [78,79]. However, data collection practices by organizations and governments have raised concerns among users. Legal restrictions, such as the Privacy Act, limit the collection and use of PII by prohibiting agencies from disclosing PII without written consent [116]. It is important to strike a balance between leveraging data to deliver services and safeguarding user interests, particularly during emergencies when survival and safety matter most. In current AI systems, users have no way to protect their privacy other than making extreme choices at either end: opting in or opting out. AI systems need to improve users' privacy protection experience by offering granular control over multiple privacy settings [99].
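The kind of granular control described above can be made concrete with a consent record that goes beyond a single opt-in/opt-out flag. The sketch below is a hypothetical data structure; its field names and defaults are assumptions, not drawn from any deployed system.

```python
# Hypothetical granular consent record, an alternative to all-or-nothing
# opt-in/opt-out. Field names and defaults are illustrative only.
from dataclasses import dataclass, field

@dataclass
class PrivacyConsent:
    share_exact_location: bool = False    # precise GPS coordinates
    share_coarse_location: bool = True    # neighborhood-level only
    share_medical_needs: bool = False
    retention_days: int = 30              # delete collected data after this window
    purposes: set = field(default_factory=lambda: {"rescue_dispatch"})

    def allows(self, purpose: str) -> bool:
        """Check whether data may be used for a given purpose."""
        return purpose in self.purposes

consent = PrivacyConsent(share_exact_location=True, retention_days=7)
print(consent.allows("rescue_dispatch"))  # True
print(consent.allows("marketing"))        # False
```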

4.3. Human–AI Collaboration

There has been ongoing uncertainty about who is responsible for errors in AI system decision-making. This ambiguity stems from the unclear allocation of liability among users, developers, and the organizations that deploy these systems, which makes it difficult to determine who should be held accountable when something goes wrong [114]. Therefore, the argument for keeping humans in the loop during decision-making processes is becoming increasingly important. The importance of preserving human judgment in AI-assisted decision-making has been emphasized by Nunes and Jannach [117] and, in particular, by Jarrahi [118]. While AI is a powerful tool in managing disasters and emergencies, human judgment that provides oversight, context, error identification, recommendation validation, and bias correction is not replaceable [75,76,78,114,119].
AI can support individuals in performing various tasks during emergencies, but those with expertise should always remain in control [9,120]. Effective human–AI collaboration requires new training models, interface designs, and organizational protocols so human operators can meaningfully interpret, question, and override AI recommendations whenever needed [121,122]. Proactive, cross-disciplinary collaboration among AI developers, emergency management practitioners, policymakers, and the general public is needed to address these varied ethical challenges [72,109,123]. When using AI systems in emergencies, we must go beyond general moral principles to address specific problems and choices [124]. To build AI technologies that work well and align with societal values, the emergency management community can combine diverse ideas and skills [73,125].
One of the main challenges of AI integration into emergency management is establishing effective human–AI collaboration, including the level of autonomy granted to AI systems and the degree of human oversight required [77,114,126,127,128,129]. While some researchers argue that the ultimate responsibility for decisions should rest with human decision-makers, with AI serving as a supportive tool, others acknowledge both the limitations and the cognitive abilities of humans [77,114,128,129]. Especially in the case of emergency care, researchers propose AI as a supplement (not a replacement) to human caregivers to promote human–AI collaboration [78]. For instance, humans are still better suited than AI to provide empathy, comfort, and a nuanced understanding of human needs [76]. On the other hand, reliance on AI systems as efficacious collaborators depends on trust; human–human collaborations have stronger trust bonds because human–AI partnerships lack established social contracts [114]. Using emotions as a form of communication between humans and robots could be especially helpful in search and rescue situations [107]. Ethical concerns play a vital role when it comes to managing and identifying human remains during mass casualty events, and in making AI-driven decisions for unmanned aircraft systems [106,130]. However, different emergency cases require unique considerations of ethical dimensions [126]. Fully realizing the potential of AI in emergency management while carefully navigating its ethical challenges demands continuous commitment to in-depth dialog, thorough research, and flexible governance [123,131]. Given the evolving nature of AI capabilities, a consistently forward-thinking approach is important for reducing risks and making ethical improvements [132,133]. By fully addressing these complexities, we can build public trust and make sure AI truly serves the public good in every crisis.

5. Addressing Ethical Challenges in Emergency Management

5.1. Framework/Guidelines for AI in Emergency Management

The application of AI in emergency management has the potential to bring about transformative advances that enhance human society’s ability to predict, prepare for, and respond to disasters. As AI gathers data from a multitude of sources, including social media, news media, and sensors, the inherent biases and prejudices present in these datasets will inevitably influence the decisions made by AI systems. This raises significant ethical concerns regarding the fairness, privacy, data security, and transparency of AI-driven systems, which could be addressed by robust ethical frameworks and guidelines [37].
In an extensive study, Floridi and colleagues [72] developed the AI4People framework through a meta-analysis of existing AI ethical frameworks and proposed five principles: beneficence, non-maleficence, autonomy, justice, and explicability. The ethical principles of the AI4People framework provide a reference for applying AI in emergency management. Mass-care emergencies require ensuring that vital resources like blood are available, and some researchers are exploring how AI can better support healthcare coverage in such situations. To ensure that the use of AI in emergency management respects the rights and dignity of individuals and strives to provide the best possible care, Visave [37] proposed an ethical framework for mass-care scenarios that encompasses: (1) respect for individuals' dignity, privacy, and autonomy; (2) transparency and explainability; (3) fairness and equity; (4) accountability and responsibility; (5) inclusion and participation; and (6) safety and risk mitigation. In the context of emergency rescue and treatment, AI gathers additional personal information about the user. To achieve a more balanced approach to data collection and privacy protection, Masoumian Hosseini et al. [77] put forth an ethical framework comprising two principal elements: the designer and the user. This framework addresses the importance of user needs, preferences, and experiences while evaluating the ethical implications of each decision in accordance with the Asilomar AI principles [77].
It is imperative to critically examine how ethical guidelines can effectively inform and regulate practice. Taddeo et al. [134] provided a method for operationalizing AI ethics principles: by modeling the AI lifecycle, their framework identifies the ethical requirements that arise at each stage and suggests learning from past frameworks to embed ethical goals in AI. For example, Nussbaumer et al. [135] put forth an ethical design approach for emergency management decision support systems; although articulated and applied in the specific context of emergency management, its overall concepts can be transferred to other domains of AI technology application.
A synthesis of previous research has led to the proposal of the "AI4EM" framework (see Figure 1), with the objective of promoting the more effective application of AI technologies in emergency management scenarios. Within this framework, user experience is considered as important as the AI technology itself. We advocate a dual approach to ethical and moral dilemmas, with solutions sought not only at the technological level but also with a focus on the user experience and the needs of the user, including privacy, autonomy, inclusiveness, fairness, and justice. User experience is optimized through design that addresses these needs. The overarching objective of this framework is to facilitate a harmonious coexistence between technology and humans, thereby establishing a collaborative relationship between humans and AI. The ultimate aim is to achieve ethical and humanized emergency management.

5.2. Strategies to Mitigate Ethical Challenges

AI can mitigate the impact of disasters by analyzing vast quantities of data to identify potential risks and assist individuals in making crucial decisions regarding emergency preparedness. Nevertheless, AI encounters several obstacles in its implementation in emergency management. The unavailability or compromised integrity of data during emergencies can result in delayed data analysis and processing, which in turn hinders effective disaster management [136]. Simultaneously, challenges such as data and algorithmic bias [137], the dehumanization of decision-making processes, a lack of transparency, and a lack of standardization impede the development of effective AI for emergency response [137].
Taking these factors into account, we propose the following three potential strategies:
(1) Strengthening the audits of algorithms. As mentioned earlier, in the field of emergency management, it is crucial to implement stricter regulatory measures for algorithms and data in order to safeguard everyone's right to equal protection during disasters. Ensuring that algorithms meet regulatory requirements in emergency management scenarios can be achieved by examining their governance guidelines and specification documents, assessing their outputs, and evaluating their internal operations. However, it is essential to acknowledge that algorithm audits are not exhaustive, and auditors are unable to test every aspect. Consequently, it is crucial to incorporate the perspectives of multiple stakeholders in the algorithm auditing process and to enhance algorithm transparency by continuously upgrading the technological tools that support auditing in emergency management scenarios.
(2) Enhancing transparency in AI decision-making. The use of AI in emergency management is further complicated by the lack of transparency surrounding its applications. This issue can make it harder for people to understand how AI makes decisions, which in turn makes it difficult to spot and correct errors in its outputs. Additionally, a lack of transparency can make it difficult to hold individuals or organizations accountable for the outcomes produced by AI [138]. As we have learned, explainability represents a significant aspect of AI transparency. Therefore, enhancing the transparency and credibility of AI can be effectively achieved by increasing the explainability of the AI decision-making process. Incorporating human input into the algorithmic lifecycle (see the sketch after this list) can help emergency managers better understand, trust, and define responsibilities in AI-driven decision-making. Furthermore, the involvement of humans in the decision-making process can facilitate the integration of empathy and help make decisions more compassionate and better aligned with human values.
(3) Incorporating stakeholder engagement. The integration of AI into emergency management demands a substantial investment of resources from a multitude of stakeholders, including financial, technological, and infrastructural capital. Furthermore, the advancement of equitable, dependable, and secure AI applications necessitates the involvement of all segments of society. It is essential that emergency response departments and organizations allocate sufficient financial and human resources to ensure the efficacy of emergency operations and the provision of relief. Moreover, they must facilitate the formulation of pertinent guidelines and measures to guarantee the appropriate implementation of AI in these operations. We should ensure that the technical aspects are guided by interdisciplinary research teams and experts from fields such as computer science, management, law, ethics, and other relevant disciplines. The convergence of knowledge across disciplines will help create more ethical applications of AI. Additionally, it is imperative to improve individuals’ AI literacy and awareness of protection measures in emergency situations. We believe that the resilience of future digital societies will be strengthened by a comprehensive understanding of AI ethics.
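As referenced under strategy (2), the sketch below illustrates one simple human-in-the-loop pattern: AI recommendations below a confidence threshold are deferred to a human emergency manager, and every decision is logged for later audit. The threshold, labels, and record format are assumptions for illustration, not a prescribed design.

```python
# Minimal human-in-the-loop gate: low-confidence AI recommendations are
# routed to a human reviewer rather than acted on automatically.
def route_decision(label: str, confidence: float, threshold: float = 0.85):
    """Return an action plus an audit record supporting later review."""
    if confidence >= threshold:
        action = f"auto: {label}"
    else:
        action = "defer: send to human reviewer"
    return action, {"label": label, "confidence": confidence, "action": action}

# Hypothetical recommendations from an upstream model.
for label, conf in [("evacuate_zone_3", 0.93), ("close_route_7", 0.61)]:
    action, record = route_decision(label, conf)
    print(action, record)
```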

5.3. Policy-Related Ethical Challenges

In light of the well-known ethical challenges associated with AI, effective governance and regulation are essential. Policymakers and emergency officials, along with technology professionals, need to collaborate to create policies governing the use of AI that will aid decision-makers and stakeholders. For example, using AI to help sort massive amounts of data during a disaster [139] will require policies that help prioritize response efforts. Also, emergency response data often includes unstructured information, so policies are needed to ensure the data is analyzed with privacy and ethics in mind [140].
As government and public systems are key stakeholders in AI development, they can play a vital role in establishing guidelines for areas such as extreme weather and disaster management. Policymakers, in particular, help guide the responsible use of these technologies to improve efficiency and save lives [140]. In response to AI challenges, the Biden administration implemented policies under an Executive Order to ensure AI's safe, secure, and trustworthy deployment across various sectors [141]. These policies mandate safety and security standards, demand regulatory transparency for dual-use AI models, and integrate AI risk management into critical national and public safety infrastructure. In 2025, the Trump Administration called for a new Artificial Intelligence Action Plan: a special committee comprising the Assistant to the President for Science and Technology (APST), the Special Advisor for AI and Crypto, and the Assistant to the President for National Security Affairs (APNSA), in coordination with the Assistant to the President for Economic Policy, the Assistant to the President for Domestic Policy, the Director of the Office of Management and Budget (OMB Director), and the heads of such executive departments and agencies as the APST and APNSA deem relevant, was tasked with developing and submitting the plan to the President [142].
Policymaking around AI should be seen as an ongoing, evolving process. To support the AI community in emergency management, we need flexible, forward-thinking policies that can keep up with how quickly technology is changing. As AI continues to evolve, the policies governing its use must also be adapted to reflect the real needs and practices of emergency management practitioners. In sum, creating clear, flexible, and thoughtful policies is key to guiding the AI community in emergency management, where decisions can have a real impact on people’s lives.

6. Conclusions

This paper reviewed the state of the literature with a focus on the ethical challenges of AI in emergency management in terms of critical issues and best practices, applications, emerging ethical considerations, and strategies for addressing ethical challenges. This review acknowledged the rapid development and importance of AI systems in emergency management, while identifying gaps in research related to the ethical challenges of AI in this field. It identified the main areas where AI applications have been widely used, including disaster prediction, resource allocation, crisis communication, and decision-making processes. Next, it emphasized three major ethical themes: algorithmic bias; privacy, transparency, and accountability; and human–AI collaboration. It then examined frameworks, guidelines, and strategies to mitigate the above-mentioned ethical challenges.
This paper contributes to the field of AI and emergency management by examining current literature to identify key issues and concerns, and by proposing strategies to address these concerns, such as strengthening the audits of algorithms, enhancing transparency in AI decision-making, and incorporating stakeholder engagement. From an academic perspective, this study brings together insights from various fields to help us better understand the AI ethical issues in emergency management. For policymakers, it offers practical takeaways by laying out reasonable ethical guidelines and strategies for using AI responsibly in the field of emergency management.
As extreme climate-induced emergencies become more frequent and severe, we should consider building AI models designed for extreme weather-related disasters (EWRDs) [143]. Marginalized communities should be involved in every phase of the development process, from the initial design stages to final implementation [144]. Our future research will focus on conducting user-centered experiments involving socially vulnerable populations in the context of AI-driven decision-making.

Author Contributions

Conceptualization, X.Y. and D.B.G.; Methodology, X.Y., D.B.G. and Y.A.D.; Investigation, Q.G.; Writing—original draft preparation, X.Y., Q.G., J.J., Y.D., M.G., N.A., O.T.O., Y.A.D. and P.S.; Writing—review and editing, X.Y., Q.G., D.B.G., Y.A.D., J.J., N.A. and J.C.; Visualization, Q.G. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable to this article.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Alimonti, G.; Mariani, L. Is the number of global natural disasters increasing? Environ. Hazards 2023, 23, 186–202. [Google Scholar] [CrossRef]
  2. Brackbill, R.; Alper, H.; Frazier, P.; Gargano, L.; Jacobson, M.; Solomon, A. An Assessment of Long-Term Physical and Emotional Quality of Life of Persons Injured on 9/11/2001. Int. J. Environ. Res. Public Health 2019, 16, 1054. [Google Scholar] [CrossRef]
  3. Office of EMS. Civil Unrest Resources. 2020. Available online: https://www.ems.gov/assets/Guidance_Resources_Civil_Unrest.pdf (accessed on 28 March 2025).
  4. United Nations. Economic Recovery After Natural Disasters. May 2016. Available online: https://www.un.org/en/chronicle/article/economic-recovery-after-natural-disasters (accessed on 28 March 2025).
  5. McLoughlin, D. Framework for integrated emergency management. Public Adm. Rev. 1985, 45, 165–172. [Google Scholar] [CrossRef]
  6. Chapman, A. Leveraging Big Data and AI for Disaster Resilience and Recovery. Engineering.tamu.edu. 5 June 2023. Available online: https://engineering.tamu.edu/news/2023/06/leveraging-big-data-and-ai-for-disaster-resilience-and-recovery.html (accessed on 28 March 2025).
  7. Quinn, J.A.; Nyhan, M.M.; Navarro, C.; Coluccia, D.; Bromley, L.; Luengo-Oroz, M. Humanitarian applications of machine learning with remote-sensing data: Review and case study in refugee settlement mapping. Philos. Trans. R. Soc. A Math. Phys. Eng. Sci. 2018, 376, 20170363. [Google Scholar] [CrossRef]
  8. Thekdi, S.; Tatar, U.; Santos, J.; Chatterjee, S. Disaster risk and artificial intelligence: A framework to characterize conceptual synergies and future opportunities. Risk Anal. 2022, 43, 1641–1656. [Google Scholar] [CrossRef]
  9. Amershi, S.; Weld, D.; Vorvoreanu, M.; Fourney, A.; Nushi, B.; Collisson, P.; Suh, J.; Iqbal, S.; Bennett, P.N.; Inkpen, K.; et al. Guidelines for human-ai interaction. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, Glasgow, UK, 4–9 May 2019; pp. 1–13. [Google Scholar] [CrossRef]
  10. Almudallal, M. The Effect of Geopolitical Environment on the Influence of Crisis Management in Strategic Planning Process. Ph.D. Thesis, Universiti Teknologi Malaysia, Johor, Malaysia, 2019. Available online: https://eprints.utm.my/92419/1/MohammedWaleedAlmudallalPAHIBS2019.pdf.pdf (accessed on 28 March 2025).
  11. Flanagan, M. AI and Environmental Challenges. UPenn EII. Available online: https://environment.upenn.edu/events-insights/news/ai-and-environmental-challenges (accessed on 28 March 2025).
  12. Haghshenas, S.S.; Guido, G.; Haghshenas, S.S.; Astarita, V. The role of artificial intelligence in managing emergencies and crises within smart cities. In Proceedings of the 2023 International Conference on Information and Communication Technologies for Disaster Management (ICT-DM), Cosenza, Italy, 13–15 September 2023; pp. 1–5. [Google Scholar] [CrossRef]
  13. Kolman, S.; Meyer, C. Workforce and Data System Strategies to Improve Public Health Policy Decisions. 2023. Available online: https://documents.ncsl.org/wwwncsl/Health/Workforce-Data-Systems_v02.pdf (accessed on 28 March 2025).
  14. United Nations Office for Disaster Risk Reduction. Disaster Risk Reduction & Disaster Risk Management. 2019. Available online: https://www.preventionweb.net/understanding-disaster-risk/key-concepts/disaster-risk-reduction-disaster-risk-management (accessed on 28 March 2025).
  15. Radanliev, P.; Santos, O.; Brandon-Jones, A.; Joinson, A. Ethics and responsible AI deployment. Front. Artif. Intell. 2024, 7, 1377011. [Google Scholar] [CrossRef]
  16. Boe, T.; Sayles, G. The Current State of Artificial Intelligence in Disaster Recovery: Challenges, Opportunities, and Future Directions. August 2023. Available online: https://digital.library.unt.edu/ark:/67531/metadc2289513/ (accessed on 20 July 2025).
  17. Bari, L.F.; Ahmed, I.; Ahamed, R.; Zihan, T.A.; Sharmin, S.; Pranto, A.H.; Islam, M.R. Potential use of artificial intelligence (AI) in disaster risk and emergency health management: A critical appraisal on environmental health. Environ. Health Insights 2023, 17, 11786302231217808. [Google Scholar] [CrossRef] [PubMed]
  18. Krichen, M.; Abdalzaher, M.S. Advances in ai and drone-based natural disaster management: A survey. In Proceedings of the 2023 20th ACS/IEEE International Conference on Computer Systems and Applications (AICCSA), Giza, Egypt, 4–7 December 2023; IEEE: Piscataway, NJ, USA, 2023; pp. 1–6. [Google Scholar]
  19. Vicari, R.; Komendatova, N. Systematic meta-analysis of research on AI tools to deal with misinformation on social media during natural and anthropogenic hazards and disasters. Humanit. Soc. Sci. Commun. 2023, 10, 332. [Google Scholar] [CrossRef]
  20. GFDRR. Responsible Artificial Intelligence for Disaster Risk Management|GFDRR. 29 April 2021. Available online: https://www.gfdrr.org/en/publication/responsible-artificial-intelligence-disaster-risk-management (accessed on 3 August 2025).
  21. European Commission. A European Strategy for Smart, Sustainable and Inclusive Growth. 2020. Available online: https://ec.europa.eu/eu2020/pdf/COMPLET%20EN%20BARROSO%20%20%20007%20-%20Europe%202020%20-%20EN%20version.pdf (accessed on 20 May 2025).
  22. European Union. European Union Priorities 2024–2029. June 2024. Available online: https://european-union.europa.eu/priorities-and-actions/eu-priorities/european-union-priorities-2024-2029_en (accessed on 20 May 2025).
  23. Lee, N.T.; Stewart, J. States Are Legislating AI, But a Moratorium Could Stall Their Progress. 14 May 2025. Available online: https://policycommons.net/artifacts/21028890/states-are-legislating-ai-but-a-moratorium-could-stall-their-progress/21929321/ (accessed on 20 August 2025).
  24. Fang, J.; Hu, J.; Shi, X.; Zhao, L. Assessing disaster impacts and response using social media data in China: A case study of 2016 Wuhan rainstorm. Int. J. Disaster Risk Reduct. 2019, 34, 275–282. [Google Scholar] [CrossRef]
  25. Ghaffarian, S.; Taghikhah, F.R.; Maier, H.R. Explainable artificial intelligence in disaster risk management: Achievements and prospective futures. Int. J. Disaster Risk Reduct. 2023, 98, 104123. [Google Scholar] [CrossRef]
  26. Saravi, S.; Kalawsky, R.; Joannou, D.; Rivas Casado, M.; Fu, G.; Meng, F. Use of artificial intelligence to improve resilience and preparedness against adverse flood events. Water 2019, 11, 973. [Google Scholar] [CrossRef]
  27. Vamathevan, J.; Clark, D.; Czodrowski, P.; Dunham, I.; Ferran, E.; Lee, G.; Li, B.; Madabhushi, A.; Shah, P.; Spitzer, M.; et al. Applications of machine learning in drug discovery and development. Nat. Rev. Drug Discov. 2019, 18, 463–477. [Google Scholar] [CrossRef]
  28. Aboualola, M.; Abualsaud, K.; Khattab, T.; Zorba, N.; Hassanein, H.S. Edge technologies for disaster management: A survey of social media and artificial intelligence integration. IEEE Access 2023, 11, 73782–73802. [Google Scholar] [CrossRef]
  29. Misra, S.; Katz, B.; Roberts, P.; Carney, M.; Valdivia, I. Toward a person-environment fit framework for artificial intelligence implementation in the public sector. Gov. Inf. Q. 2024, 41, 101962. [Google Scholar] [CrossRef]
  30. Shi, K.; Peng, X.; Lu, H.; Zhu, Y.; Niu, Z. Application of social sensors in natural disasters emergency management: A Review. IEEE Trans. Comput. Soc. Syst. 2023, 10, 3143–3158. [Google Scholar] [CrossRef]
  31. U.S. Geological Survey. Earthquake Early Warning—Overview|U.S. Geological Survey. 26 January 2022. Available online: https://www.usgs.gov/programs/earthquake-hazards/science/earthquake-early-warning-overview (accessed on 28 March 2025).
  32. Nevo, S.; Anisimov, V.; Elidan, G.; El-Yaniv, R.; Giencke, P.; Gigi, Y.; Hassidim, A.; Moshe, Z.; Schlesinger, M.; Shalev, G.; et al. ML for Flood Forecasting at Scale (Version 1). arXiv 2019, arXiv:1901.09583. [Google Scholar] [CrossRef]
  33. Fjeld, J.; Achten, N.; Hilligoss, H.; Nagy, A.; Srikumar, M. Principled Artificial Intelligence: Mapping Consensus in Ethical and Rights-Based Approaches to Principles for AI. 2020. Available online: https://dash.harvard.edu/server/api/core/bitstreams/c8d686a8-49e8-4128-969c-cb4a5f2ee145/content (accessed on 28 March 2025).
  34. United Nations Educational, Scientific and Cultural Organization. Ethics of Artificial Intelligence. 2021. Available online: https://www.unesco.org/en/artificial-intelligence/recommendation-ethics (accessed on 28 March 2025).
  35. United Nations System Chief Executives Board for Coordination. Principles for the Ethical Use of Artificial Intelligence in the United Nations System. 2022. Available online: https://unsceb.org/principles-ethical-use-artificial-intelligence-united-nations-system (accessed on 28 March 2025).
  36. Jones, E.; Sagawa, S.; Koh, P.W.; Kumar, A.; Liang, P. Selective Classification Can Magnify Disparities Across Groups. arXiv 2020, arXiv:2010.14134. [Google Scholar]
  37. Visave, J. AI in Emergency Management: Ethical Considerations and Challenges. J. Emerg. Manag. Disaster Commun. 2024, 5, 165–183. [Google Scholar] [CrossRef]
  38. World Health Organization. Coronavirus Disease (COVID-19) Epidemiological Updates and Monthly Operational Updates. Available online: https://www.who.int/emergencies/diseases/novel-coronavirus-2019/situation-reports (accessed on 20 May 2025).
  39. Hwang, Y.; Jeong, S.H. Generative artificial intelligence and misinformation acceptance: An experimental test of the effect of forewarning about artificial intelligence hallucination. Cyberpsychology Behav. Soc. Netw. 2025, 28, 284–289. [Google Scholar] [CrossRef]
  40. Athaluri, S.A.; Manthena, S.V.; Kesapragada, V.S.R.K.M.; Yarlagadda, V.; Dave, T.; Duddumpudi, R.T.S. Exploring the boundaries of reality: Investigating the phenomenon of artificial intelligence hallucination in scientific writing through ChatGPT references. Curēus 2023, 15, e37432. [Google Scholar] [CrossRef]
  41. Taeihagh, A. Governance of generative AI. Policy Soc. 2025, 44, 1–22. [Google Scholar] [CrossRef]
  42. Chen, J.; Chen, T.H.Y.; Vertinsky, I.; Yumagulova, L.; Park, C. Public–Private partnerships for the development of disaster resilient communities. J. Contingencies Crisis Manag. 2013, 21, 130–143. [Google Scholar] [CrossRef]
  43. Sun, W.; Bocchini, P.; Davison, B.D. Applications of artificial intelligence for disaster management. Nat. Hazards 2020, 103, 2631–2689. [Google Scholar] [CrossRef]
  44. Zhu, X.; Zhang, G.; Sun, B. A comprehensive literature review of the demand forecasting methods of emergency resources from the perspective of artificial intelligence. Nat. Hazards 2019, 97, 65–82. [Google Scholar] [CrossRef]
  45. Johnson, M.; Albizri, A.; Harfouche, A.; Tutun, S. Digital transformation to mitigate emergency situations: Increasing opioid overdose survival rates through explainable artificial intelligence. Ind. Manag. Data Syst. 2023, 123, 324–344. [Google Scholar] [CrossRef]
  46. Cao, Q.D.; Choe, Y. Post-Hurricane Damage Assessment Using Satellite Imagery and Geolocation Features. arXiv 2020, arXiv:2012.08624. [Google Scholar] [CrossRef]
  47. Cheng, G.; Han, J.; Lu, X. Remote sensing image scene classification: Benchmark and state of the art. Proc. IEEE 2017, 105, 1865–1883. [Google Scholar] [CrossRef]
  48. Ghaffarian, S.; Kerle, N.; Filatova, T. Remote sensing-based proxies for urban disaster risk management and resilience: A review. Remote Sens. 2018, 10, 1760. [Google Scholar] [CrossRef]
  49. Smith, A.B. U.S. Billion-Dollar Weather and Climate Disasters, 1980—Present (NCEI Accession 0209268); [Dataset]; NOAA National Centers for Environmental Information: Asheville, NC, USA, 2020. [Google Scholar] [CrossRef]
  50. Akhyar, A.; Asyraf Zulkifley, M.; Lee, J.; Song, T.; Han, J.; Cho, C.; Hyun, S.; Son, Y.; Hong, B.-W. Deep artificial intelligence applications for natural disaster management systems: A methodological review. Ecol. Indic. 2024, 163, 112067. [Google Scholar] [CrossRef]
  51. Chenais, G.; Lagarde, E.; Gil-Jardiné, C. Artificial intelligence in emergency medicine: Viewpoint of current applications and foreseeable opportunities and challenges. J. Med. Internet Res. 2023, 25, e40031. [Google Scholar] [CrossRef]
  52. Costa, D.B.; Pinna, F.C.d.A.; Joiner, A.P.; Rice, B.; de Souza, J.V.P.; Gabella, J.L.; Andrade, L.; Vissoci, J.R.N.; Néto, J.C. AI-based approach for transcribing and classifying unstructured emergency call data: A methodological proposal. PLoS Digit. Health 2023, 2, e0000406. [Google Scholar] [CrossRef]
  53. Stieglitz, S.; Mirbabaie, M.; Schwenner, L.; Marx, J.; Lehr, J.; Brünker, F. Sensemaking and Communication Roles in Social Media Crisis Communication. Wirtschaftsinformatik 2017 Proceedings. 2017. Available online: https://aisel.aisnet.org/wi2017/track14/paper/1 (accessed on 28 March 2025).
  54. Arnold, R.D.; Wade, J.P. A Definition of Systems Thinking: A Systems Approach. Procedia Comput. Sci. 2015, 44, 669–678. [Google Scholar] [CrossRef]
  55. Bejiga, M.B.; Zeggada, A.; Nouffidj, A.; Melgani, F. A convolutional neural network approach for assisting avalanche search and rescue operations with UAV imagery. Remote Sens. 2017, 9, 100. [Google Scholar] [CrossRef]
  56. Carrio, A.; Sampedro, C.; Rodriguez-Ramos, A.; Campoy, P. A review of deep learning methods and applications for unmanned aerial vehicles. J. Sens. 2017, 2017, 3296874. [Google Scholar] [CrossRef]
  57. Erdelj, M.; Natalizio, E.; Chowdhury, K.R.; Akyildiz, I.F. Help from the sky: Leveraging UAVs for disaster management. IEEE Pervasive Comput. 2017, 16, 24–32. [Google Scholar] [CrossRef]
  58. Bannour, W.; Maalel, A.; Ben Ghezala, H.H. Emergency management case-based reasoning systems: A survey of recent developments. J. Exp. Theor. Artif. Intell. 2023, 35, 35–58. [Google Scholar] [CrossRef]
  59. Boer, M.M.; Resco de Dios, V.; Bradstock, R.A. Unprecedented burn area of Australian mega forest fires. Nat. Clim. Change 2020, 10, 171–172. [Google Scholar] [CrossRef]
  60. Lu, S.; Jones, E.; Zhao, L.; Sun, Y.; Qin, K.; Liu, J.; Li, J.; Abeysekara, P.; Mueller, N.; Oliver, S.; et al. Onboard AI for fire smoke detection using hyperspectral imagery: An emulation for the upcoming Kanyini Hyperscout-2 Mission. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2024, 17, 9629–9640. [Google Scholar] [CrossRef]
  61. Freeman, S. Artificial intelligence for emergency management. In Artificial Intelligence and Machine Learning for Multi-Domain Operations Applications II; Pham, T., Solomon, L., Rainey, K., Eds.; SPIE: Bellingham, WA, USA, 2020; p. 50. [Google Scholar] [CrossRef]
  62. Saheb, T.; Sidaoui, M.; Schmarzo, B. Convergence of artificial intelligence with social media: A bibliometric & qualitative analysis. Telemat. Inform. Rep. 2024, 14, 100146. [Google Scholar] [CrossRef]
  63. Vernier, M.; Farinosi, M.; Foresti, A.; Foresti, G.L. Automatic Identification and geo-validation of event-related images for emergency management. Information 2023, 14, 78. [Google Scholar] [CrossRef]
  64. Khan, A.; Gupta, S.; Gupta, S.K. Multi-hazard disaster studies: Monitoring, detection, recovery, and management, based on emerging technologies and optimal techniques. Int. J. Disaster Risk Reduct. 2020, 47, 101642. [Google Scholar] [CrossRef]
  65. Gill, S.S.; Golec, M.; Hu, J.; Xu, M.; Du, J.; Wu, H.; Walia, G.K.; Murugesan, S.S.; Ali, B.; Kumar, M.; et al. Edge AI: A taxonomy, systematic review and future directions. Clust. Comput. 2025, 28, 18. [Google Scholar] [CrossRef]
  66. Fan, C.; Zhang, C.; Yahja, A.; Mostafavi, A. Disaster City Digital Twin: A vision for integrating artificial and human intelligence for disaster management. Int. J. Inf. Manag. 2021, 56, 102049. [Google Scholar] [CrossRef]
  67. Ford, D.N.; Wolf, C.M. Smart cities with digital twin systems for disaster management. J. Manag. Eng. 2020, 36, 04020027. [Google Scholar] [CrossRef]
  68. Wang, Y.; Yue, Q.; Lu, X.; Gu, D.; Xu, Z.; Tian, Y.; Zhang, S. Digital twin approach for enhancing urban resilience: A cycle between virtual space and the real world. Resilient Cities Struct. 2024, 3, 34–45. [Google Scholar] [CrossRef]
69. Hassija, V.; Chamola, V.; Mahapatra, A.; Singal, A.; Goel, D.; Huang, K.; Scardapane, S.; Spinelli, I.; Mahmud, M.; Hussain, A. Interpreting black-box models: A review on explainable artificial intelligence. Cogn. Comput. 2024, 16, 45–74. [Google Scholar] [CrossRef]
  70. Islam, M.R.; Ahmed, M.U.; Barua, S.; Begum, S. A systematic review of explainable artificial intelligence in terms of different application domains and tasks. Appl. Sci. 2022, 12, 1353. [Google Scholar] [CrossRef]
  71. Amidon, T.R.; Sackey, D.J. Justice in/and the Design of AI Risk Detection Technologies. In Proceedings of the 42nd ACM International Conference on Design of Communication, Fairfax, VA, USA, 20–22 October 2024; pp. 81–93. [Google Scholar] [CrossRef]
  72. Floridi, L.; Cowls, J.; Beltrametti, M.; Chatila, R.; Chazerand, P.; Dignum, V.; Luetge, C.; Madelin, R.; Pagallo, U.; Rossi, F.; et al. AI4People-An ethical framework for a good AI society: Opportunities, risks, principles, and recommendations. Minds Mach. 2018, 28, 689–707. [Google Scholar] [CrossRef] [PubMed]
  73. Morley, J.; Elhalal, A.; Garcia, F.; Kinsey, L.; Mökander, J.; Floridi, L. Ethics as a service: A pragmatic operationalisation of AI ethics. Minds Mach. 2021, 31, 239–256. [Google Scholar] [CrossRef]
  74. Wright, J.; Verity, A. Artificial Intelligence Principles for Vulnerable Populations in Humanitarian Contexts. Digital Humanitarian Network. 2020. Available online: https://www.academia.edu/41716578/Artificial_Intelligence_Principles_For_Vulnerable_Populations_in_Humanitarian_Contexts (accessed on 28 March 2025).
  75. Preiksaitis, C.; Ashenburg, N.; Bunney, G.; Chu, A.; Kabeer, R.; Riley, F.; Ribeira, R.; Rose, C. The role of large language models in transforming emergency medicine: Scoping review. JMIR Med. Inform. 2024, 12, e53787. [Google Scholar] [CrossRef]
  76. Hart-Davidson, B.; Ristich, M.; McArdle, C.; Potts, L. The history of technical communication and the future of generative AI. In Proceedings of the 42nd ACM International Conference on Design of Communication, Fairfax, VA, USA, 20–22 October 2024; pp. 253–258. [Google Scholar] [CrossRef]
  77. Masoumian Hosseini, M.; Masoumian Hosseini, S.T.; Qayumi, K.; Ahmady, S.; Koohestani, H.R. The aspects of running artificial intelligence in emergency care; A scoping review. Arch. Acad. Emerg. Med. 2023, 11, e38. [Google Scholar] [CrossRef]
  78. Pearson, Y.; Borenstein, J. Robots, ethics, and pandemics: How might a global problem change the technology’s adoption? In Proceedings of the 2020 IEEE International Symposium on Technology and Society (ISTAS), Tempe, AZ, USA, 12–15 November 2020; pp. 12–19. [Google Scholar] [CrossRef]
  79. Mehrabi, N.; Morstatter, F.; Saxena, N.; Lerman, K.; Galstyan, A. A survey on bias and fairness in machine learning. ACM Comput. Surv. 2022, 54, 115. [Google Scholar] [CrossRef]
  80. Saxena, N.; Huang, K.; DeFilippis, E.; Radanovic, G.; Parkes, D.; Liu, Y. How Do Fairness Definitions Fare? Examining Public Attitudes Towards Algorithmic Definitions of Fairness. arXiv 2019, arXiv:1811.03654. [Google Scholar] [CrossRef]
  81. Obermeyer, Z.; Powers, B.; Vogeli, C.; Mullainathan, S. Dissecting racial bias in an algorithm used to manage the health of populations. Science 2019, 366, 447–453. [Google Scholar] [CrossRef]
82. Daugherty, P.R.; Wilson, H.J.; Chowdhury, R. Using artificial intelligence to promote diversity. In MIT Sloan Management Review: How AI Is Transforming the Organization; The MIT Press: Cambridge, MA, USA, 2020; pp. 15–22. [Google Scholar] [CrossRef]
  83. Wong, P.-H. Democratizing algorithmic fairness. Philos. Technol. 2020, 33, 225–244. [Google Scholar] [CrossRef]
84. Bolin, R.; Bolton, P. Race, Religion, and Ethnicity in Disaster Recovery; Natural Hazards Center Collection; 1986. Available online: https://digitalcommons.usf.edu/nhcc/87 (accessed on 28 March 2025).
  85. Fitzpatrick, K.M.; Spialek, M.L. Hurricane Harvey’s Aftermath: Place, Race, and Inequality in Disaster Recovery; New York University Press: New York, NY, USA, 2020. [Google Scholar]
  86. Hartman, C.W.; Squires, G.D. There Is No Such Thing as a Natural Disaster: Race, Class, and Hurricane Katrina; Taylor & Francis: New York, NY, USA, 2006. [Google Scholar]
  87. Willison, C.E.; Singer, P.M.; Creary, M.S.; Greer, S.L. Quantifying inequities in US federal response to hurricane disaster in Texas and Florida compared with Puerto Rico. BMJ Glob. Health 2019, 4, e001191. [Google Scholar] [CrossRef]
  88. Hazeleger, T. Gender and disaster recovery: Strategic issues and action in Australia. Aust. J. Emerg. Manag. 2013, 28, 40–46. [Google Scholar]
  89. Rouhanizadeh, B.; Kermanshachi, S. Gender-based evaluation of economic, social, and physical challenges in timely post-hurricane recovery. Prog. Disaster Sci. 2021, 9, 100146. [Google Scholar] [CrossRef]
  90. Howell, J.; Elliott, J.R. Damages done: The longitudinal impacts of natural hazards on wealth inequality in the United States. Soc. Probl. 2019, 66, 448–467. [Google Scholar] [CrossRef]
  91. Goldsmith, L.; Raditz, V.; Méndez, M. Queer and present danger: Understanding the disparate impacts of disasters on LGBTQ+ communities. Disasters 2022, 46, 946–973. [Google Scholar] [CrossRef]
92. Nicholson, K.L. Melting the iceberg. In Women in Wildlife Science; Chambers, C.L., Nicholson, K.L., Eds.; Johns Hopkins University Press: Baltimore, MD, USA, 2022; pp. 336–361. [Google Scholar]
  93. Leslie, D. Tackling COVID-19 through responsible AI innovation: Five steps in the right direction. arXiv 2020, arXiv:2008.06755. [Google Scholar] [CrossRef]
  94. Rodriguez-Díaz, C.E.; Lewellen-Williams, C. Race and Racism as Structural Determinants for Emergency and Recovery Response in the Aftermath of Hurricanes Irma and Maria in Puerto Rico. Health Equity 2020, 4, 232–238. [Google Scholar] [CrossRef]
  95. Bethel, J.W.; Burke, S.C.; Britt, A.F. Disparity in disaster preparedness between racial/ethnic groups. Disaster Health 2013, 1, 110–116. [Google Scholar] [CrossRef]
  96. Roberts, P.S.; Wernstedt, K. Decision biases and heuristics among emergency managers: Just like the public they manage for? Am. Rev. Public Adm. 2019, 49, 292–308. [Google Scholar] [CrossRef]
  97. Kennel, J. IHI ID 08 Emergency medical services treatment disparities by patient race. BMJ Open Qual. 2018, 7 (Suppl. S1), A12. [Google Scholar] [CrossRef]
  98. Gupta, S.; Chen, Y.-C.; Tsai, C. Utilizing large language models in tribal emergency management. In Proceedings of the 29th International Conference on Intelligent User Interfaces, Greenville, SC, USA, 18–21 March 2024; pp. 1–6. [Google Scholar] [CrossRef]
99. Cao, S.; Wang, C.; Yang, Z.; Yuan, H.; Sun, A.; Xie, H.; Zhang, L.; Fang, Y. Evaluation of smart humanity systems and novel UV-oriented solution for integration, resilience, inclusiveness and sustainability. In Proceedings of the 2020 5th International Conference on Universal Village (UV), Boston, MA, USA, 24–27 October 2020; pp. 1–28. [Google Scholar] [CrossRef]
  100. Pereira, G.V.; Wimmer, M.; Ronzhyn, A. Research needs for disruptive technologies in smart cities. In Proceedings of the 13th International Conference on Theory and Practice of Electronic Governance, Athens, Greece, 23–25 September 2020; pp. 620–627. [Google Scholar] [CrossRef]
  101. Coeckelbergh, M. AI Ethics; The MIT Press: Cambridge, MA, USA, 2020. [Google Scholar]
  102. Wachter, S.; Mittelstadt, B.; Floridi, L. Why a right to explanation of automated decision-making does not exist in the general data protection regulation. Int. Data Priv. Law 2017, 7, 76–99. [Google Scholar] [CrossRef]
  103. Burwell, S.; Sample, M.; Racine, E. Ethical aspects of brain computer interfaces: A scoping review. BMC Med. Ethics 2017, 18, 60. [Google Scholar] [CrossRef] [PubMed]
  104. Zicari, R.V.; Brodersen, J.; Brusseau, J.; Dudder, B.; Eichhorn, T.; Ivanov, T.; Kararigas, G.; Kringen, P.; McCullough, M.; Moslein, F.; et al. Z-Inspection®: A process to assess trustworthy AI. IEEE Trans. Technol. Soc. 2021, 2, 83–97. [Google Scholar] [CrossRef]
105. Raji, I.D.; Smart, A.; White, R.N.; Mitchell, M.; Gebru, T.; Hutchinson, B.; Smith-Loud, J.; Theron, D.; Barnes, P. Closing the AI accountability gap: Defining an end-to-end framework for internal algorithmic auditing. In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, Barcelona, Spain, 27–30 January 2020; pp. 33–44. [Google Scholar] [CrossRef]
  106. Hamissi, A.; Dhraief, A. A survey on the unmanned aircraft system traffic management. ACM Comput. Surv. 2024, 56, 68. [Google Scholar] [CrossRef]
  107. Akgun, S.A.; Ghafurian, M.; Crowley, M.; Dautenhahn, K. Using Emotions to Complement Multi-Modal Human-Robot Interaction in Urban Search and Rescue Scenarios. In Proceedings of the 2020 International Conference on Multimodal Interaction, Virtual Event, 22–29 October 2020; pp. 575–584. [Google Scholar] [CrossRef]
  108. Rahwan, I. Society-in-the-loop: Programming the algorithmic social contract. Ethics Inf. Technol. 2018, 20, 5–14. [Google Scholar] [CrossRef]
  109. Taddeo, M.; Floridi, L. How AI can be a force for good. Science 2018, 361, 751–752. [Google Scholar] [CrossRef]
  110. Senarath, A.; Arachchilage, N.A.G. Understanding Software Developers’ Approach towards Implementing Data Minimization. arXiv 2018, arXiv:1808.01479. [Google Scholar] [CrossRef]
  111. Reddy, E.; Cakici, B.; Ballestero, A. Beyond mystery: Putting algorithmic accountability in context. Big Data Soc. 2019, 6, 2053951719826856. [Google Scholar] [CrossRef]
  112. Jaume-Palasí, L.; Spielkamp, M. Ethics and algorithmic processes for decision making and decision support. AlgorithmWatch Work. Pap. 2017, 2, 1–19. [Google Scholar]
  113. Mai, J.-E. Big data privacy: The datafication of personal information. Inf. Soc. 2016, 32, 192–199. [Google Scholar] [CrossRef]
  114. Richter, R.M.; Valladares, M.J.; Sutherland, S.C. Effects of the source of advice and decision task on decisions to request expert advice. In Proceedings of the 24th International Conference on Intelligent User Interfaces, Marina del Ray, CA, USA, 17–20 March 2019; pp. 469–475. [Google Scholar] [CrossRef]
  115. Imran, M.; Castillo, C.; Diaz, F.; Vieweg, S. Processing social media messages in mass emergency: A survey. ACM Comput. Surv. 2015, 47, 67. [Google Scholar] [CrossRef]
  116. King, J.; Ho, D.; Gupta, A.; Wu, V.; Webley-Brown, H. The privacy-bias tradeoff: Data minimization and racial disparity assessments in U.S. Government. In Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, Chicago, IL, USA, 12–15 June 2023; pp. 492–505. [Google Scholar] [CrossRef]
  117. Nunes, I.; Jannach, D. A systematic review and taxonomy of explanations in decision support and recommender systems. User Model. User-Adapt. Interact. 2017, 27, 393–444. [Google Scholar] [CrossRef]
  118. Jarrahi, M.H. Artificial intelligence and the future of work: Human-AI symbiosis in organizational decision making. Bus. Horiz. 2018, 61, 577–586. [Google Scholar] [CrossRef]
119. Herrera, L.C.; Gjøsæter, T.; Majchrzak, T.A.; Thapa, D. Signals of transition in support systems: A study of the use of social media analytics in crisis management. ACM Trans. Soc. Comput. 2025, 8, 1–44. [Google Scholar] [CrossRef]
  120. Kamar, E. Directions in hybrid intelligence: Complementing AI systems with human intelligence. In Proceedings of the Twenty-Fifth International Joint Conference on Artificial Intelligence, New York, NY, USA, 9–15 July 2016; pp. 4070–4073. [Google Scholar]
  121. Cai, C.J.; Winter, S.; Steiner, D.; Wilcox, L.; Terry, M. “Hello AI”: Uncovering the onboarding needs of medical practitioners for human-AI collaborative decision-making. Proc. ACM Hum.-Comput. Interact. 2019, 3, 104. [Google Scholar] [CrossRef]
  122. Elish, M.C. Moral crumple zones: Cautionary tales in human-robot interaction. Engag. Sci. Technol. Soc. 2019, 5, 40–60. [Google Scholar] [CrossRef]
  123. Cath, C.; Wachter, S.; Mittelstadt, B.; Taddeo, M.; Floridi, L. Artificial Intelligence and the “Good Society”: The US, EU, and UK approach. Sci. Eng. Ethics 2018, 24, 505–528. [Google Scholar] [CrossRef]
  124. Whittlestone, J.; Nyrup, R.; Alexandrova, A.; Dihal, K.; Cave, S. Ethical and Societal Implications of Algorithms, Data, and Artificial Intelligence: A Roadmap for Research. 2019. Available online: https://www.nuffieldfoundation.org/sites/default/files/files/Ethical-and-Societal-Implications-of-Data-and-AI-report-Nuffield-Foundat.pdf (accessed on 3 August 2025).
  125. Leslie, D. Understanding Artificial Intelligence Ethics and Safety: A Guide for the Responsible Design and Implementation of AI Systems in the Public Sector. 2019. Available online: https://digital.library.unt.edu/ark:/67531/metadc2289556/ (accessed on 3 August 2025).
  126. Gavidia-Calderon, C.; Kordoni, A.; Bennaceur, A.; Levine, M.; Nuseibeh, B. The IDEA of us: An identity-aware architecture for autonomous systems. ACM Trans. Softw. Eng. Methodol. 2024, 33, 164. [Google Scholar] [CrossRef]
  127. Hohendanner, M.; Ullstein, C.; Miyamoto, D.; Huffman, E.F.; Socher, G.; Grossklags, J.; Osawa, H. Metaverse Perspectives from Japan: A Participatory Speculative Design Case Study. Proc. ACM Hum.-Comput. Interact. 2024, 8, 400. [Google Scholar] [CrossRef]
  128. Aghaei, N.G.; Shahbazi, H.; Farzaneh, P.; Abdolmaleki, A.; Khorsandian, A. The structure of personality-based emotional decision making in robotic rescue agent. In Proceedings of the 2008 IEEE/ASME International Conference on Advanced Intelligent Mechatronics, Xi’an, China, 2–5 July 2008; IEEE: Piscataway, NJ, USA, 2008; pp. 1201–1206. [Google Scholar]
  129. Ulusan, A.; Narayan, U.; Snodgrass, S.; Ergun, O.; Harteveld, C. “Rather solve the problem from scratch”: Gamesploring human-machine collaboration for optimizing the debris collection problem. In Proceedings of the 27th International Conference on Intelligent User Interfaces, Helsinki, Finland, 22–25 March 2022; pp. 604–619. [Google Scholar] [CrossRef]
  130. Vaswani, V.; Caenazzo, L.; Congram, D. Corpse identification in mass disasters and other violence: The ethical challenges of a humanitarian approach. Forensic Sci. Res. 2024, 9, owad048. [Google Scholar] [CrossRef] [PubMed]
  131. Gasser, U.; Almeida, V.A.F. A layered model for AI governance. IEEE Internet Comput. 2017, 21, 58–62. [Google Scholar] [CrossRef]
132. Brundage, M.; Avin, S.; Clark, J.; Toner, H.; Eckersley, P.; Garfinkel, B.; Dafoe, A.; Scharre, P.; Zeitzoff, T.; Filar, B.; et al. The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation. arXiv 2018, arXiv:1802.07228. [Google Scholar] [CrossRef]
  133. Floridi, L. Translating principles into practices of digital ethics: Five risks of being unethical. Philos. Technol. 2019, 32, 185–193. [Google Scholar] [CrossRef]
  134. Taddeo, M.; Blanchard, A.; Thomas, C. From AI Ethics Principles to Practices: A Teleological Methodology to Apply AI Ethics Principles in The Defence Domain. Philos. Technol. 2024, 37, 42. [Google Scholar] [CrossRef]
  135. Nussbaumer, A.; Pope, A.; Neville, K. A framework for applying ethics-by-design to decision support systems for emergency management. Inf. Syst. J. 2023, 33, 34–55. [Google Scholar] [CrossRef]
  136. Jayawardene, V.; Huggins, T.J.; Prasanna, R.; Fakhruddin, B. The role of data and information quality during disaster response decision-making. Prog. Disaster Sci. 2021, 12, 100202. [Google Scholar] [CrossRef]
  137. Gerlach, R. NextGen Emergency Management and Homeland Security: The AI Revolution. 30 April 2023. Available online: https://www.linkedin.com/pulse/next-generation-emergency-management-homeland-ai-gerlach-mpa-mep?trk=public_post_reshare_feed-article-content (accessed on 3 August 2025).
  138. Balasubramaniam, N.; Kauppinen, M.; Hiekkanen, K.; Kujala, S. Transparency and Explainability of AI Systems: Ethical Guidelines in Practice. In Requirements Engineering: Foundation for Software Quality; Gervasi, V., Vogelsang, A., Eds.; Springer International Publishing: Cham, Switzerland, 2022; pp. 3–18. [Google Scholar] [CrossRef]
  139. Gupta, T.; Roy, S. Applications of artificial intelligence in disaster management. In Proceedings of the 2024 10th International Conference on Computing and Artificial Intelligence, Bali Island, Indonesia, 26–29 April 2024; pp. 313–318. [Google Scholar] [CrossRef]
140. Gupta, S.; Modgil, S.; Kumar, A.; Sivarajah, U.; Irani, Z. Artificial intelligence and cloud-based collaborative platforms for managing disaster, extreme weather and emergency operations. Int. J. Prod. Econ. 2022, 254, 108642. [Google Scholar] [CrossRef]
  141. AI.GOV. Administration Actions on AI. 30 October 2023. Available online: https://www.dhs.gov/archive/news/2023/10/30/fact-sheet-biden-harris-administration-executive-order-directs-dhs-lead-responsible (accessed on 28 March 2025).
  142. AI.GOV. AI Action Plan. 2025. Available online: https://www.ai.gov/action-plan (accessed on 12 August 2025).
  143. Camps-Valls, G.; Fernández-Torres, M.Á.; Cohrs, K.H.; Höhl, A.; Castelletti, A.; Pacal, A.; Robin, C.; Martinuzzi, F.; Papoutsis, I.; Prapas, I.; et al. Artificial intelligence for modeling and understanding extreme weather and climate events. Nat. Commun. 2025, 16, 1919. [Google Scholar] [CrossRef] [PubMed]
  144. Kumar, R.; Pathak, K.; Yesmin, W. Trending Interdisciplinary Research in 2025. 2025. Available online: http://cenagaon.digitallibrary.co.in/bitstream/123456789/165/1/Trending.pdf (accessed on 12 August 2025).
Figure 1. An ethical framework for AI in emergency management.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
