Enhancing System Safety and Reliability through Integrated FMEA and Game Theory: A Multi-Factor Approach

Abstract: This study aims to address the limitations of traditional Failure Mode and Effect Analysis (FMEA) in managing safety and reliability within complex systems characterized by interdependent critical factors. We propose an integrated framework that combines FMEA with the strategic decision-making principles of Game Theory, thereby enhancing the assessment and mitigation of risks in intricate environments. The novel inclusion of the Best Worst Method (BWM) and Pythagorean fuzzy uncertain linguistic variables refines the accuracy of risk evaluation by overcoming the inherent deficiencies of conventional FMEA approaches. Through sensitivity analysis, the framework's efficacy in identifying and prioritizing failure modes is empirically validated, guiding the development of targeted interventions. The practical application of our methodology is demonstrated in a comprehensive healthcare system analysis, showcasing its versatility and significant potential to improve operational safety and reliability across various sectors. This research is particularly beneficial for systems engineers, risk managers, and decision-makers seeking to fortify complex systems against failures and their effects.


Introduction
Integrating risk assessment tools within complex healthcare systems is paramount in ensuring the safety and quality of stakeholders' care, well-being, and the overall efficiency of healthcare delivery. Healthcare, by its very nature, is a complex ecosystem where myriad factors, both clinical and non-clinical, interact, making it susceptible to various forms of risk. These risks can manifest as medical errors, patient safety incidents, financial challenges, regulatory non-compliance, and even public health crises, such as viral respiratory illness [1].
The importance of risk assessment in healthcare lies in its ability to proactively identify, evaluate, and mitigate potential risks, thereby preventing adverse events, optimizing resource allocation, and improving the overall performance of healthcare organizations [2]. A practical risk assessment framework not only enhances patient safety but also safeguards the reputation of healthcare institutions, reduces financial losses, and ensures compliance with regulatory requirements [3,4].
In recent years, healthcare systems have faced unprecedented challenges, most notably with the emergence of the pandemic. This global crisis underscored the critical need for robust risk assessment and management practices within healthcare. The pandemic's rapid spread, overwhelming healthcare facilities, and straining of medical resources highlighted the imperative of proactive risk assessment to prepare for and respond to unforeseen events. The importance of risk assessment in healthcare cannot be overstated. It serves as a systematic approach to identifying, evaluating, and addressing potential hazards, yet this importance also exposes the shortcomings of traditional risk assessment practices, particularly their inability to manage complex and uncertain healthcare environments adeptly.
The study concludes with a set of hypotheses that posits the superior performance of our proposed methodologies in risk assessment over traditional methods. It also outlines our scientific contributions to the field, which include a more systematic approach to decision-making in healthcare risk management and the potential for these methods to be applied to other domains requiring fine-grained risk assessment. By explicitly addressing these research questions and hypotheses, our work endeavors to provide a clearer and more impactful addition to the body of knowledge in healthcare risk management.
The remainder of this article unfolds as follows: Section 2 conducts a comprehensive literature review on Game Theory, elucidating fundamental definitions and introducing various classes of game strategies. In Section 3, we introduce a hybrid model with a primary emphasis on Game Theory. Section 4 studies a real-world application, examining a healthcare unit in a hospital operating under emergency conditions. Section 5 offers validation for the proposed approach, while Section 6 undertakes sensitivity analysis to assess the consistency and robustness of the hybrid model. Finally, Section 7 provides a conclusion, addressing the challenges encountered in the present study and outlining potential avenues for future research.

The Concept of Game Theory
In exploring Game Theory within this study, we draw on several foundational concepts elucidated in reference [56]. A game is an interactive model encompassing two or more groups participating in strategic interactions. Within these games, the entities known as players take on the role of decision-makers, each representing various entities ranging from individuals to groups or even abstract concepts. The state of the game is defined as the collection of all conceivable conditions under which the players engage, setting the stage for their strategic interplay.
Players are faced with a selection of actions, each representing the possible decisions or moves available to them within the various states of the game. These actions lead to outcomes that are quantified in terms of payoffs: numerical values assigned to the results of the players' actions, indicative of the gains or losses accrued as the game progresses. Strategies then emerge as comprehensive plans of action tailored to the players' objectives and the circumstances they face within the game.
A pivotal concept in Game Theory is that of equilibrium, where players, recognizing their interdependence, see no benefit in unilaterally changing their strategy as it could potentially lead to a less favorable payoff.
In sum, these concepts are the building blocks of Game Theory, providing a framework to model and analyze strategic interactions within complex systems. The section concludes that understanding these strategic frameworks is crucial not only for theoretical purposes but also for practical applications where decision-making processes are influenced by the actions and reactions of various stakeholders within a system. The equilibrium concept, in particular, serves as a cornerstone for predicting behaviors and outcomes, thereby informing the development of strategies in diverse fields ranging from economics to political science.

Introducing the Different Classes of Strategies
In Game Theory, two types of strategies are employed: pure and mixed strategies [57,58]. Pure Strategy: In this approach, players make definitive decisions for every possible game state. Each player's strategy consists of a set of pure tactics, and all participating players aim to optimize their strategies. The game score remains equal for all players in this case. Numerous studies have leveraged pure strategies within the area of Game Theory [59-62].
Mixed Strategy: A mixed strategy involves a probabilistic blend of pure strategies and finds applications in various studies across different domains [63]. In this approach, a player randomly selects a pure strategy, and players have the flexibility to employ multiple mixed strategies, even if they have a limited set of pure strategies. It is important to note that a pure strategy coincides with a mixed strategy when the probability of choosing that pure strategy equals 1, with the probability of selecting all other strategies set at 0. Additionally, players can opt for pure or mixed strategies depending on whether they face deterministic or probabilistic situations, respectively.
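As a concrete illustration of this distinction, the following sketch treats a pure strategy as a degenerate mixed strategy and computes expected payoffs; the 2x2 payoff matrix is an assumed example, not data from this study.

```python
# A minimal sketch of the pure/mixed-strategy distinction for a two-player game.
# The 2x2 payoff matrix below is illustrative, not taken from the study.

def expected_payoff(payoffs, p, q):
    """E[payoff] = sum_i sum_j p[i] * q[j] * payoffs[i][j] for mixes p (row) and q (column)."""
    return sum(p[i] * q[j] * payoffs[i][j]
               for i in range(len(p)) for j in range(len(q)))

payoffs = [[3, -1],
           [0, 2]]                                   # row player's payoffs
pure = expected_payoff(payoffs, [1, 0], [0, 1])      # a pure strategy is a degenerate mix
mixed = expected_payoff(payoffs, [0.5, 0.5], [0.5, 0.5])
print(pure, mixed)                                   # -1 and 1.0
```

Setting one probability to 1 and the rest to 0 reproduces the pure-strategy case described above.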

Nash Equilibrium
A 'Nash Equilibrium', in essence, represents a situation in which none of the participants have an incentive to deviate from their chosen strategies, even in the absence of any formal rules or enforcement. To illustrate, consider two players, Alice and Bob, selecting strategies X and Y, respectively. In this context, (X, Y) is termed a 'Nash Equilibrium' if sticking with X maximizes Alice's payoff in response to Bob's choice of Y, so that no alternative strategy would serve her better. Likewise, Bob finds Y to be the optimal choice for maximizing his payoff in response to Alice's selection of X.
Building on the foundational principles of Game Theory as applied to our initial two-player game with Alice and Bob, we extend the scenario by introducing two more players, Carol and David, thereby transforming the dynamic into a more complex four-player match. In this expanded setting, the strategy profile (X, Y, Z1, Z2) represents the decisions made by Alice, Bob, Carol, and David, respectively. In this context, a 'Nash Equilibrium' is a strategic configuration where no player can unilaterally improve their outcome by choosing a different strategy, given the strategies chosen by all other players.
Here, X stands for the optimal strategic decision for Alice, premised on the assumption that Bob, Carol, and David are adopting strategies Y, Z1, and Z2, respectively. Similarly, Y is Bob's optimal response when Alice chooses X and Carol and David adhere to Z1 and Z2. The variables Z1 and Z2 are particularly noteworthy as they represent the best response strategies for Carol and David, respectively. Z1 is Carol's best response to the strategy combination (X, Y, Z2), while Z2 is David's best response to the combination (X, Y, Z1).
By explaining the role of Z1 and Z2, we clarify their function within the Nash Equilibrium concept. These variables account for the additional layers of complexity introduced by more than two players in the game. Including Carol and David in the game matrix necessitates a recalibration of strategies for all players, ensuring that the equilibrium encapsulates the best possible strategy for each player in response to the others. This equilibrium, therefore, is a delicate balance where each player, considering the strategies of all others, concludes that they are better off sticking to their current strategy rather than changing course. This interplay of strategic decisions lies at the heart of Nash Equilibrium in a multiplayer game context.
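The defining condition discussed above (no player gains from a unilateral deviation) can be checked mechanically. The sketch below does so for an assumed two-player, prisoner's-dilemma-style payoff table; the payoff values are illustrative, not from the study.

```python
from itertools import product

# Illustrative two-player game (prisoner's-dilemma-style payoffs, not data from
# the study). payoffs[(i, j)] = (row player's payoff, column player's payoff).
payoffs = {
    (0, 0): (3, 3), (0, 1): (0, 5),
    (1, 0): (5, 0), (1, 1): (1, 1),
}

def is_nash(profile):
    """Nash condition: neither player improves by a unilateral deviation."""
    i, j = profile
    row_ok = all(payoffs[(i, j)][0] >= payoffs[(a, j)][0] for a in (0, 1))
    col_ok = all(payoffs[(i, j)][1] >= payoffs[(i, b)][1] for b in (0, 1))
    return row_ok and col_ok

equilibria = [p for p in product((0, 1), repeat=2) if is_nash(p)]
print(equilibria)  # the mutual-defection profile (1, 1) is the unique equilibrium
```

The same brute-force check extends to the four-player Alice/Bob/Carol/David setting by iterating over all strategy profiles and testing each player's deviations.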
In contemporary terms, the concept of 'Nash Equilibrium' is precisely defined with regard to mixed strategies, wherein participating players opt for a 'probability distribution over possible pure strategies' [64,65]. The notion of a 'mixed-strategy equilibrium' was originally introduced by John von Neumann and Oskar Morgenstern in their seminal 1944 work [66]. They demonstrated that a 'mixed-strategy Nash Equilibrium' exists in every zero-sum game featuring a finite set of actions [67]. A zero-sum game, in theoretical terms, can be described as follows: one player's gain is balanced against the other player's loss, resulting in a total payoff summing to zero. Furthermore, cooperation between the two players is absent [68-72]. The mathematical representation of the 'Nash Equilibrium' for a zero-sum game is presented in Section 3.3 through Definitions 8 and 9.
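For finite zero-sum games, the mixed-strategy equilibrium can be computed by linear programming (the classical minimax formulation). The sketch below applies this to matching pennies as an assumed example; the payoff matrix and solver choice are illustrative, not part of the paper's method.

```python
import numpy as np
from scipy.optimize import linprog

# Mixed-strategy equilibrium of a finite zero-sum game via linear programming.
# Payoff matrix A holds the row player's gains; matching pennies as an example.
A = np.array([[1.0, -1.0],
              [-1.0, 1.0]])
m, n = A.shape

# Variables x = (p_1..p_m, v). Maximize the game value v subject to
# (A^T p)_j >= v for every column j and sum(p) = 1.
c = np.zeros(m + 1); c[-1] = -1.0             # linprog minimizes, so minimize -v
A_ub = np.hstack([-A.T, np.ones((n, 1))])     # v - (A^T p)_j <= 0 for each column j
b_ub = np.zeros(n)
A_eq = np.hstack([np.ones((1, m)), np.zeros((1, 1))])
b_eq = np.array([1.0])
bounds = [(0, None)] * m + [(None, None)]     # probabilities >= 0, v free

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
p, v = res.x[:m], res.x[-1]
print(p, v)  # matching pennies: p = (0.5, 0.5), game value 0
```

The dual of the same program yields the column player's equilibrium mix, reflecting the zero-sum structure described above.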

Game Classification
Game classification is based on several properties in Game Theory [56]. Considering the nature of the decision-making problem, the proper class of game must be selected to solve the problem. In general, games can be categorized into three classes: (i) static/dynamic games, (ii) zero-sum/non-zero-sum games, and (iii) cooperative/non-cooperative games.

Static/Dynamic Games
In a static game, players make decisions without knowledge of the choices made by other players, meaning that all players make their decisions simultaneously [73]. Authors in previous studies [74-76] have employed static games in their research. Conversely, in a dynamic game, players can take into account the decisions made by others when making their own choices, implying that all players do not necessarily make decisions simultaneously. A static game can be seen as a specific case within the broader category of dynamic games [77]. In dynamic games, some actions may occur simultaneously, while others can unfold at different time intervals. Several researchers have explored dynamic games in their studies, including examples in [78-85].

Zero-Sum Games
The primary focus of our current study is a category of games where the overall score remains constant throughout the game [86]. In these games, there are no score increases or decreases during gameplay. Instead, one player's gain corresponds directly to another player's loss. In essence, in a zero-sum game, there is always a loser for every winner, making it inherently a win-lose game. This particular aspect of Game Theory has found application in numerous works across diverse domains [87-91]. In Section 3.3, we explain zero-sum games as they are utilized for ranking failures in the FMEA procedure. Conversely, a non-zero-sum game is one where all players can benefit. In these games, the total sum of profits and losses is either greater or less than zero. Scholars from various fields have explored non-zero-sum games in their research endeavors [92-94].

Cooperative/Non-Cooperative Games
A non-cooperative game is characterized by players pursuing their individual profit maximization without cooperation from others at the outset of the game [56]. This class of games places a primary emphasis on the strategies employed by individual players. Non-cooperative games have found extensive use in various applications across different domains [95-98]. In contrast, cooperative games involve a coalition of players from the same group or union working together to maximize their collective gains. For instance, given a set of players {A, B, C, D, E, F}, three different player unions can form, such as {A, B, C}, {D, E}, and {F}, where each union aims to maximize its combined benefits. Similarly, cooperative games have been extensively applied in the literature across diverse applications [99-101].

Proposed Methodology
This research introduces a comprehensive and reliable framework designed to determine consistent risk priorities for various failure modes, as depicted in Figure 1. This framework is structured into three pivotal stages:

• Pre-step: The process of collecting all necessary information about the system under investigation is a meticulous endeavor that forms the bedrock of our research. It involves a comprehensive review and assimilation of data, encompassing structural details, operational dynamics, historical performance metrics, and contextual factors that influence the system. This critical step requires a multi-faceted approach, engaging with various stakeholders for insights, examining relevant documentation, and utilizing analytical tools to capture the complexities of the system. Such thorough data collection ensures a robust foundation upon which meaningful analysis can be performed, hypotheses can be tested, and accurate conclusions can be drawn, ultimately contributing to the credibility and reliability of the research outcomes.

• Step 1: Assessing the Weight of Risk Factors: In this foundational stage, the focus is on quantifying the relevance of different risk factors. Utilizing the BWM methodology [102], we gauge the importance weights tied to the severity, occurrence, and detection risk factors.

• Step 2: Formulating the Payoff Evaluation Matrix: After ascertaining the importance weights, the next step is the formulation of the payoff evaluation matrix. This matrix is crafted based on insights and evaluations from an expert panel of decision-makers. To address the inherent uncertainties, this phase incorporates Pythagorean fuzzy uncertain linguistic variables.

• Step 3: Determining Risk Priorities of Failure Modes: The zenith of our framework is to present a detailed assessment of risk priorities associated with distinct failure modes. In this context, a zero-sum game methodology is leveraged to pinpoint the optimal strategies for both scenarios involving failure modes and those without.

Preceding these primary stages is an essential preliminary step. Here, a meticulous collection of all pertinent data related to the process in focus is undertaken. This leads to the identification of potential failure modes. For each failure mode pinpointed, an exhaustive analysis is carried out. This includes a rigorous review of control measures, the underlying causes, and the possible outcomes of each mode. Within this framework's scope, it is imperative to understand "failure" as a situation where a crucial function or procedure does not meet the expected standards [103].

Our conception of risk factors is aligned with the standard definitions found in widely recognized frameworks and guidelines in risk assessment and management. Specifically, we adhere to the reports and classifications of risk factors outlined by industry standards such as ISO 31000, which provides principles and generic guidelines on risk management. Per these standards, risk factors in our study are identified through a systematic process involving hazard identification, risk analysis, and risk evaluation.

Each risk factor is categorized into one of three primary aspects:
• Severity (S): This refers to the potential impact or consequences of a failure mode on the system's functionality, the environment, or the end-users. It measures the extent of harm or disruption that could result from the failure.
• Occurrence (O): This dimension assesses the likelihood or frequency of a particular failure mode occurring. It estimates the probability that the risk will materialize based on historical data, predictive models, or expert judgment.
• Detectability (D): Detectability evaluates how easily a failure mode can be discovered before it leads to an operational failure. This involves assessing the effectiveness of current detection processes or control measures.
By employing these three dimensions, we can systematically quantify and prioritize risks, ensuring that the most significant risks (those likely to have the greatest impact, the most probable, and the hardest to detect) are managed with appropriate urgency and resources.
Throughout this manuscript, we consistently apply this tripartite model of risk factors to analyze and evaluate the potential failure modes in the system under study. This approach allows us to construct a risk profile that is comprehensive, refined, and tailored to the specific operational context of the system, thereby facilitating informed decision-making and effective risk management.
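For orientation, the conventional FMEA baseline that this framework refines multiplies the three ratings into a risk priority number (RPN = S x O x D) and ranks failure modes by it. The failure modes and 1-10 ratings below are illustrative placeholders, not data from the case study.

```python
# Conventional RPN prioritization over the three risk factors (S, O, D).
# Failure-mode names and ratings are illustrative, not from the case study.
failure_modes = {
    "FM1": {"S": 8, "O": 3, "D": 6},
    "FM2": {"S": 5, "O": 7, "D": 4},
    "FM3": {"S": 9, "O": 2, "D": 9},
}

def rpn(r):
    return r["S"] * r["O"] * r["D"]   # classic RPN = S x O x D

ranking = sorted(failure_modes, key=lambda fm: rpn(failure_modes[fm]), reverse=True)
print([(fm, rpn(failure_modes[fm])) for fm in ranking])
```

The proposed method replaces this crisp product with weighted, fuzzy, game-theoretic scoring precisely because such RPN products can mask interdependencies between factors.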
Let us, for an FMEA worksheet, take a set of failure modes FM = {FM_1, FM_2, FM_3, . . ., FM_m}, known in the FMEA procedure as the "strategy of failure"; a set of l decision-makers DM = {DM_1, DM_2, DM_3, . . ., DM_l}; the risk factors RF_j = {S_FMi, O_FMi, D_FMi} as a "strategy of success"; and ω = {ω_1, ω_2, ω_3, . . ., ω_l} as the importance weights of the decision-makers based on their quality profiles, in which 0 ≤ ω_k ≤ 1 and ∑_{k=1}^{l} ω_k = 1. Thus, all employed decision-makers share their individual payoff judgments of FM_i with respect to the risk factors RF_j using Pythagorean fuzzy uncertain linguistic variables.
In addition, l payoff matrices can be derived from the uncertain linguistic assessments obtained from the decision-makers over FM_i with respect to the risk factors RF_j, expressed according to the linguistic term set P = {S_0, S_1, . . ., S_g}. On the basis of these outcomes, a zero-sum game between failure and success can be denoted as B = {Failure, Success, FM, RF, P}. In the subsequent sections, we present a detailed overview of the comprehensive processes involved in the proposed FMEA method. We explore the perspective of viewing FMEA through the lens of a zero-sum game, acknowledging its practical implications. In a noteworthy study by [69], Game Theory was harnessed to rank alternative solutions in the context of emergency decision-making. In light of this, we adopt the premise that FMEA can be framed as a zero-sum game problem, and proceed to outline our methodology.
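The setup above can be sketched as plain data structures: one linguistic payoff matrix per decision-maker, indexed by failure mode and risk factor, plus the decision-maker weight constraints. All names, the term-set size, and the weights are illustrative placeholders, not the study's elicited data.

```python
# A minimal data-structure sketch of the game B = {Failure, Success, FM, RF, P}.
# Names, term-set size, and weights are illustrative placeholders.
failure_strategies = ["FM1", "FM2", "FM3"]       # "strategy of failure"
risk_factors = ["S", "O", "D"]                   # "strategy of success"
term_set = [f"S{i}" for i in range(7)]           # linguistic term set P = {S0, ..., S6}

dm_weights = [0.4, 0.35, 0.25]                   # decision-maker weights, 0 <= w <= 1
assert abs(sum(dm_weights) - 1.0) < 1e-9         # weights must sum to 1

# One linguistic payoff matrix per decision-maker: matrix[FM][RF] -> a term from P.
payoff_matrices = [
    {fm: {rf: term_set[3] for rf in risk_factors} for fm in failure_strategies}
    for _ in dm_weights
]
print(len(payoff_matrices), payoff_matrices[0]["FM1"]["S"])
```

In the actual method, each cell would hold a Pythagorean fuzzy uncertain linguistic variable rather than a single term; the single-term fill here only marks the shape of the data.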
How does the proposed model consider the barriers to mitigation and prevention? Our model is conscious of the obstacles that can hinder effective risk mitigation and prevention, particularly in complex systems. It is structured to capture these barriers within the initial pre-step and throughout the framework by integrating stakeholder feedback, historical data, and expert judgment. This holistic approach ensures that the risk prioritization identifies the most critical risks and the factors that may impede successful intervention. By encompassing these barriers in our payoff evaluation matrix and subsequent analysis, our model offers a dynamic and realistic platform for risk management that is sensitive to the obstacles inherent in the practical application of risk mitigation and prevention strategies.
It should be added that, in the proposed methodology, while we adhere to the traditional FMEA framework that incorporates risk variables such as severity, occurrence, and detectability, we also recognize the crucial gap that exists between detecting a failure mode and effectively mitigating or preventing it. The detectability variable quantifies the likelihood of identifying a failure mode before it manifests into a functional failure, yet it does not encompass the subsequent processes of mitigation and prevention.
To bridge this gap, our model enhances the conventional FMEA by embedding additional evaluative dimensions that assess the system's readiness and capability to respond to a detected failure mode. We introduce a 'Response Efficacy' variable that complements the detectability score by measuring the effectiveness and timeliness of the mitigation or prevention strategies that are, or can be, put in place once a failure mode is detected. This variable considers factors such as the availability of resources, the agility of the response system, the presence of backup systems, and the preparedness of personnel to implement corrective measures.
Furthermore, our model operationalizes this 'Response Efficacy' assessment by integrating Pythagorean fuzzy uncertain linguistic variables, which allow for a fine-grained and flexible quantification of risk management capabilities. Decision-makers can thus express varying degrees of confidence in the system's ability to handle potential failures, accommodating the inherent uncertainties and complexities of real-world systems.
The proposed methodology provides a more comprehensive view of the risk landscape by incorporating 'Response Efficacy' as a distinct factor within the risk priority calculations. This ensures that the FMEA identifies and ranks failure modes based on both their detectability and the system's overall preparedness to address and neutralize risks effectively. It allows for a more informed and holistic risk management approach, where the end goal is not merely to detect risks but to be well-equipped to manage them adeptly. This advancement in the methodology acknowledges that the true measure of a system's resilience lies in its capacity to respond to and recover from disruptions, thus offering a more accurate and actionable assessment of risk priorities.
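To make the idea of a fourth factor concrete, the sketch below folds a hypothetical 'Response Efficacy' (RE) rating into a priority score alongside S, O, and D. The simple weighted sum and all weights/ratings are assumptions for illustration only; the paper's actual ranking uses the zero-sum-game procedure with Pythagorean fuzzy uncertain linguistic variables.

```python
# Hypothetical illustration of adding 'Response Efficacy' (RE) as a fourth factor.
# Weights, ratings, and the weighted-sum rule are all assumed, not the paper's method.
weights = {"S": 0.4, "O": 0.25, "D": 0.2, "RE": 0.15}        # assumed factor weights
ratings = {"FM1": {"S": 8, "O": 3, "D": 6, "RE": 7},          # higher RE = weaker response
           "FM2": {"S": 5, "O": 7, "D": 4, "RE": 2}}

def priority(fm):
    """Weighted aggregate over the four factors (illustrative scoring rule)."""
    return sum(weights[f] * ratings[fm][f] for f in weights)

ranking = sorted(ratings, key=priority, reverse=True)
print(ranking, [round(priority(fm), 2) for fm in ranking])
```

Rating RE so that a higher value means a weaker response capability keeps its direction consistent with S, O, and D, where higher always means riskier.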

Computing the Importance Weight of Risk Factors Utilizing BWM
The BWM offers a promising alternative to the Analytic Hierarchy Process (AHP) for calculating the importance weights of risk factors [102,104]. BWM requires fewer pairwise comparisons than AHP, yielding more robust and consistent results. BWM techniques have found wide-ranging applications across various domains, as evidenced by [105-108]. In the context of this study, we employ BWM to determine the importance weights of three distinct risk factors (Severity, Occurrence, and Detection) within the framework of FMEA. In the current state of research, numerous scholars [109-114] have explored ways to integrate BWM with FMEA, primarily to (i) assess the importance weights of risk factors and (ii) assign weights to failure modes while subsequently ranking them. In line with these endeavors, our motivation lies in integrating the BWM tool with FMEA to ascertain the importance weights of the risk factors under examination in this study.
The procedure of BWM is briefly explained as follows: (i) Identifying the most and the least significant risk factors. The most critical risk factor, RF_B, and the least important risk factor, RF_W, have to be determined from the known n risk factors according to the decision-makers' opinions. A refined approach is employed in the proposed methodology to discern the spectrum of risk factors within a system, extending from the most to the least significant. The criticality of these factors is determined through a qualitative analysis led by decision-makers who are well-versed in the intricacies of the system at hand. Recognizing the most critical risk factor, denoted as RF_B, and the least significant one, RF_W, is pivotal in establishing a hierarchy of risks that guides the focus of risk management efforts.
Considering less significant risk factors in the analysis is both strategic and practical. While these factors may have a lower impact on the system, their cumulative effect or impact under specific conditions can be non-trivial. By including these minor factors, decision-makers can ensure a comprehensive risk assessment, leaving no potential vulnerability unaddressed. This inclusion aligns with the principles of thoroughness and precaution in risk management, especially in complex systems where seemingly minor risks can propagate or interact with other factors to cause significant issues.
To clarify the process, decision-makers typically leverage tools such as FMEA to evaluate and rank the criticality of risk factors. However, the initial identification of these factors often relies on their expertise and experiential judgment. The FMEA tool then provides a structured framework to analyze the identified factors, quantifying their severity, occurrence, and detectability to arrive at a risk priority number (RPN). This number assists in objectively determining the criticality of each risk factor. In practice, achieving consensus among decision-makers on the significance of risk factors can be challenging, mainly when relying on qualitative assessments. To mitigate this, our model incorporates mechanisms for reconciling differing opinions, such as employing a Delphi method or consensus-building workshops. These methods facilitate structured communication and negotiation, allowing for the emergence of a collective judgment on the risk factors' criticality. In instances where consensus is elusive, the model adapts by assigning possible values to each risk factor, reflecting the spectrum of expert opinions. This range is then utilized in sensitivity analyses to determine how variations in risk criticality assessment could influence the system's overall risk profile. Such an approach ensures that the model remains robust and applicable despite subjective variability, thus maintaining its utility and relevance in real-world risk management scenarios.

(ii) Assessing the priority of the most critical risk factor relative to others. Next, the group of decision-makers collaboratively express their judgments concerning the significance of the primary risk factor compared to the remaining risk factors, utilizing the established nine-scale table from the existing literature. For each decision-maker k = 1, 2, 3, . . ., l, the best-to-others (BO) preference vector is

RF_BO^k = (RF_B1^k, RF_B2^k, . . ., RF_Bn^k),

where RF_Bj^k is the opinion of decision-maker k on RF_B over RF_j, and RF_BB = 1. Consider that the l decision-makers' importance weights are equal. Hence, the l best-to-others vectors can be combined into a single best-to-others vector RF_BO = (RF_B1, RF_B2, . . ., RF_Bn) by averaging:

RF_Bj = (1/l) ∑_{k=1}^{l} RF_Bj^k, j = 1, 2, . . ., n.

(iii) Computing the preference of the other risk factors over the least critical risk factor.
Similarly, the others-to-worst (OW) vectors RF_OW^k, for k = 1, 2, 3, . . ., l, are computed by comparing each of the other risk factors to the worst risk factor using the nine-scale:

RF_OW^k = (RF_1W^k, RF_2W^k, . . ., RF_nW^k),

where RF_jW^k is the judgement of decision-maker k on RF_j over RF_W, and RF_WW = 1. The l others-to-worst vectors can then be combined into a single vector RF_OW = (RF_1W, RF_2W, . . ., RF_nW) in the same manner:

RF_jW = (1/l) ∑_{k=1}^{l} RF_jW^k, j = 1, 2, . . ., n.

(iv) Calculating the optimum risk factors' importance weights. In BWM, the weight ratios should satisfy w_B/w_j = RF_Bj and w_j/w_W = RF_jW. To approach these conditions for all j simultaneously, a solution is determined by minimizing the maximum of the absolute differences |w_B/w_j − RF_Bj| and |w_j/w_W − RF_jW|. Therefore, the optimum risk factors' weights are obtained from the following mathematical programming model (Model 1):

min max_j { |w_B/w_j − RF_Bj|, |w_j/w_W − RF_jW| }
subject to: ∑_j w_j = 1, w_j ≥ 0 for all j.

Model 1 can be re-established as Model 2 through a linearization process:

min ξ
subject to: |w_B − RF_Bj w_j| ≤ ξ for all j,
|w_j − RF_jW w_W| ≤ ξ for all j,
∑_j w_j = 1, w_j ≥ 0 for all j.
The optimum risk factors' importance weights are computed by solving Model 2 and are denoted as w* = (w*_1, w*_2, . . ., w*_n). It is worth noting that, in the final step, it is also possible to determine the aggregated optimal importance weights. This implies that the optimal importance weights for each risk factor are initially derived from individual decision-makers' perspectives. Subsequently, factoring in the significance of each decision-maker's input, we arrive at the aggregated importance weight for the risk factors.
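Model 2 is a small linear program and can be solved directly. The sketch below does so for three factors (S, O, D) with assumed, fully consistent nine-scale judgments (S best, D worst); these numbers are illustrative, not the study's elicited data.

```python
import numpy as np
from scipy.optimize import linprog

# Solving the linearized BWM model (Model 2) as a linear program:
# min xi s.t. |w_B - RF_Bj * w_j| <= xi, |w_j - RF_jW * w_W| <= xi,
# sum(w) = 1, w >= 0. Judgments below are illustrative (S best, D worst).
factors = ["S", "O", "D"]
best, worst = 0, 2
RF_BO = np.array([1.0, 2.0, 8.0])   # best-to-others vector (RF_BB = 1)
RF_OW = np.array([8.0, 4.0, 1.0])   # others-to-worst vector (RF_WW = 1)

n = len(factors)
c = np.zeros(n + 1); c[-1] = 1.0    # variables (w_1..w_n, xi); minimize xi
rows, rhs = [], []
for j in range(n):
    for sign in (1.0, -1.0):
        r1 = np.zeros(n + 1)        # sign * (w_B - RF_Bj * w_j) - xi <= 0
        r1[best] += sign; r1[j] -= sign * RF_BO[j]; r1[-1] = -1.0
        r2 = np.zeros(n + 1)        # sign * (w_j - RF_jW * w_W) - xi <= 0
        r2[j] += sign; r2[worst] -= sign * RF_OW[j]; r2[-1] = -1.0
        rows += [r1, r2]; rhs += [0.0, 0.0]

A_eq = np.append(np.ones(n), 0.0).reshape(1, -1)      # weights sum to 1
res = linprog(c, A_ub=np.array(rows), b_ub=np.array(rhs), A_eq=A_eq, b_eq=[1.0],
              bounds=[(0, None)] * (n + 1))
w, xi = res.x[:n], res.x[-1]
print(dict(zip(factors, np.round(w, 4))), round(xi, 4))
# fully consistent judgments here, so xi = 0 and w = (8/13, 4/13, 1/13)
```

Because the example judgments satisfy RF_Bj x RF_jW = RF_BW for every j, the optimal ξ* is zero; inconsistent judgments would yield ξ* > 0, which feeds the consistency check in step (v).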

(v) Calculate the consistency ratio of results
To assess consistency, the consistency ratio is first obtained as follows:

CR = ξ* / CI,

where ξ* is the optimal objective value of Model 2 and CI is the consistency index, determined according to the maximum value of the nine-scale judgment RF_BW [102].
The smaller the value of CR, the more consistent the results. In the current study, CR ≤ 0.2 is considered acceptable; in that case, there is no need to further revise the process interactively.
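This check can be computed directly. The consistency-index table below is the one proposed in the original BWM paper [102], indexed by the best-to-worst judgment on the nine-scale; the ξ* value used in the call is an assumed example.

```python
# Consistency ratio CR = xi* / CI, using the consistency-index table from the
# original BWM paper [102], indexed by a_BW (the best-to-worst nine-scale judgment).
CI_TABLE = {1: 0.00, 2: 0.44, 3: 1.00, 4: 1.63, 5: 2.30,
            6: 3.00, 7: 3.73, 8: 4.47, 9: 5.23}

def consistency_ratio(xi_star, a_BW):
    ci = CI_TABLE[a_BW]
    return 0.0 if ci == 0 else xi_star / ci   # a_BW = 1 implies full consistency

cr = consistency_ratio(0.26, 8)               # illustrative xi* with a_BW = 8
print(round(cr, 4), cr <= 0.2)                # acceptable under the CR <= 0.2 threshold
```

A CR above the 0.2 threshold used in this study would signal that the pairwise judgments should be revisited with the decision-makers.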

Constructing the Group of Payoff Evaluation Matrix Utilizing Pythagorean Fuzzy Uncertain Linguistic Variables
Zadeh [115] introduced the concept of linguistic variables and their practical applications. Linguistic variables are linguistic expressions consisting of one or more words that convey the intrinsic value of a variable. This approach assists decision-makers in addressing ambiguities and uncertainties in data, particularly in complex decision-making scenarios where precise numerical values may be challenging to define [116,117]. Let β = {β_0, β_1, . . ., β_g} be a finite, totally ordered discrete linguistic term set with odd cardinality, in which β_i denotes a possible value for a linguistic term [69,118-120].
In 2013, Yager [121] introduced the concept of Pythagorean fuzzy sets (PFS), which satisfy the condition that the sum of the squares of the membership and non-membership degrees is less than or equal to one [122-125]. We now review the fundamental concepts, definitions, and subsequent advancements related to PFS.
Definition 2 [130]. Let $X$ be a discourse universe and $[\beta_{\theta(x)}, \beta_{\tau(x)}]$ an uncertain linguistic variable. A Pythagorean fuzzy uncertain linguistic variable $\tilde{P}$ in $X$ is defined as $\tilde{P} = \{ \langle x, [\beta_{\theta(x)}, \beta_{\tau(x)}], (\mu_{\tilde{P}}(x), \nu_{\tilde{P}}(x)) \rangle \mid x \in X \}$, where $(\mu_{\tilde{P}}(x), \nu_{\tilde{P}}(x))$ is a PFS denoting, respectively, the membership and non-membership degree of $x \in X$ with respect to $[\beta_{\theta(x)}, \beta_{\tau(x)}]$.
In addition, the indeterminacy degree of $x \in X$ is defined as $\pi_{\tilde{P}}(x) = \sqrt{1 - \mu_{\tilde{P}}^2(x) - \nu_{\tilde{P}}^2(x)}$. Definition 3 [130,131]. Let $\tilde{P}_1$ and $\tilde{P}_2$ be two different Pythagorean fuzzy uncertain linguistic variables. In such a case, several important operational laws (addition, multiplication, scalar multiplication, and exponentiation) are defined for Pythagorean fuzzy uncertain linguistic variables. Definition 4 [130,132]. Let $\beta = \{\beta_0, \beta_1, \ldots, \beta_g\}$ be a linguistic term set and $\tilde{P} = \langle [\beta_\theta, \beta_\tau], (\mu, \nu) \rangle$ a Pythagorean fuzzy uncertain linguistic variable; the score function and the accuracy function of $\tilde{P}$ are then determined accordingly. Definition 5 [130,132]. Let $\tilde{P}_1$ and $\tilde{P}_2$ be two different Pythagorean fuzzy uncertain linguistic variables.
In such a case, the "Hamming distance" between $\tilde{P}_1$ and $\tilde{P}_2$ can be determined accordingly. Definition 6 [132]. Let $\tilde{P}_j$, for $j = 1, 2, \ldots, n$, be a collection of Pythagorean fuzzy uncertain linguistic variables; the "Pythagorean fuzzy uncertain linguistic prioritized weighted averaging operator" aggregates them into a single Pythagorean fuzzy uncertain linguistic variable. Up to this point, the preliminaries of PFS and Pythagorean fuzzy uncertain linguistic variables have been explained. Next, the steps to construct the group payoff evaluation matrix are described as follows. Step 1: Normalizing the Pythagorean fuzzy uncertain linguistic payoff matrices. The payoff matrices based on Pythagorean fuzzy uncertain linguistic variables are first normalized. Step 2: Constructing the group Pythagorean fuzzy uncertain linguistic payoff matrix. The normalized payoff matrices $\tilde{P}^k$ ($k = 1, 2, \ldots, l$) are transformed into a single Pythagorean fuzzy uncertain linguistic payoff matrix $R = (r_{ij})_{m \times n}$ by utilizing the modified "Pythagorean fuzzy uncertain linguistic prioritized weighted averaging operator", where $T_{ij}^1 = 1$ and $w_k$ is the importance weight of the decision-makers. Step 3: Computing the importance-weighted group Pythagorean fuzzy uncertain linguistic payoff matrix. In this step, the group payoff matrix $R = (r_{ij})_{m \times n}$ is converted into the importance-weighted group payoff matrix $\acute{R} = (\acute{r}_{ij})_{m \times n}$, where $w_j^*$ ($j = 1, 2, \ldots, n$) are the importance weights obtained by utilizing BWM in Step 1.
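For intuition, the plain-PFS machinery underlying these definitions can be sketched as follows. Note that the score, accuracy, and distance forms below are the widely used plain-PFS versions ($s = \mu^2 - \nu^2$, $h = \mu^2 + \nu^2$, and a normalized Hamming distance over squared degrees); the exact PFULV variants of Definitions 4-6 in [130,132] additionally carry the linguistic interval $[\beta_\theta, \beta_\tau]$.

```python
import math
from dataclasses import dataclass

@dataclass(frozen=True)
class PFV:
    """A Pythagorean fuzzy value (mu, nu) satisfying mu^2 + nu^2 <= 1."""
    mu: float   # membership degree
    nu: float   # non-membership degree

    def __post_init__(self):
        if self.mu ** 2 + self.nu ** 2 > 1 + 1e-9:
            raise ValueError("Pythagorean condition mu^2 + nu^2 <= 1 violated")

    def indeterminacy(self) -> float:
        # pi = sqrt(1 - mu^2 - nu^2)
        return math.sqrt(max(0.0, 1 - self.mu ** 2 - self.nu ** 2))

    def score(self) -> float:
        # widely used PFS score function: s = mu^2 - nu^2
        return self.mu ** 2 - self.nu ** 2

    def accuracy(self) -> float:
        # accuracy function: h = mu^2 + nu^2
        return self.mu ** 2 + self.nu ** 2

def hamming_distance(a: PFV, b: PFV) -> float:
    """Normalized Hamming distance between two PFVs over squared degrees."""
    return 0.5 * (abs(a.mu ** 2 - b.mu ** 2)
                  + abs(a.nu ** 2 - b.nu ** 2)
                  + abs(a.indeterminacy() ** 2 - b.indeterminacy() ** 2))

p1 = PFV(0.8, 0.5)   # valid: 0.64 + 0.25 <= 1 (a PFS but not an intuitionistic FS)
p2 = PFV(0.6, 0.6)
print(round(p1.score(), 2), round(p2.score(), 2))  # 0.39 0.0
```

Note that (0.8, 0.5) is a valid Pythagorean fuzzy value even though 0.8 + 0.5 > 1, which is precisely the extra modelling room PFS offer over intuitionistic fuzzy sets.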

Determining the Risk Priority of Failure Modes Utilizing the Zero-Sum Game
Definition 7 [132]. A zero-sum game involves two decision-makers (players) and is formulated as a five-tuple comprising the mixed strategy sets of DM1 and DM2 and the payoff matrix $P$ of DM1. Note that DM1 and DM2 are the decision-maker and nature, respectively. Equation (16) can also be represented as $B = \{\text{Failure}, \text{Success}, FM, RF, P\}$, where $FM$ and $RF$ represent the set of failure modes and the corresponding risk factors, respectively, and $P$ denotes the payoff matrix for the failure modes.
Definition 8. The payoff matrix is defined based on [133]. Consider that $c_i$, for $i = 1, 2, \ldots, m$, is the "strategy of failure" and $\tilde{c}_j$ is the "strategy of success". For any pair of strategies $(c_i, \tilde{c}_j)$, assume that $P = (p_{ij})_{m \times n}$ denotes the payoff matrix under the "strategy of failure". The payoff matrix of the "strategy of success" is then equal to $-P$.
For a zero-sum game, a strategy pair $(c_i^*, \tilde{c}_j^*)$ is a Nash Equilibrium point of $B = \{\text{Failure}, \text{Success}, FM, RF, P\}$. For the "strategy of failure", a linear programming problem (Model 3) is constructed; for DM2, the corresponding dual problem (Model 4) is constructed. As mentioned earlier, the Nash Equilibrium strategies of the "strategy of failure" and the "strategy of success" are derived as $c_i^* = (c_1, \ldots, c_i, \ldots, c_m)$ and $\tilde{c}_j^* = (\tilde{c}_1, \ldots, \tilde{c}_j, \ldots, \tilde{c}_n)$, respectively. The modified expected values $G_i$ of the strategies $c_i$ ($i = 1, 2, \ldots, m$) are then determined. With respect to $G_i$ ($G_i \neq 0$), the best strategy $c_i$, corresponding to the maximum $G_i$, determines the top priority of the failure modes in the FMEA procedure. The row vectors associated with $G_i \neq 0$ are then removed from the payoff matrix, and Models 3 and 4 are applied again; the remaining strategies are treated in the same way in order to obtain the risk priorities of all failure modes.
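In the crisp case, Models 3 and 4 reduce to the standard linear programs for a two-player zero-sum game. The sketch below solves such a game with scipy; the payoff matrices are illustrative toy examples, not the study's data, and the full method would operate on the defuzzified weighted payoff matrix.

```python
import numpy as np
from scipy.optimize import linprog

def solve_zero_sum(P):
    """Game value and mixed equilibrium strategy of the row player.

    P[i, j] is the payoff to the row player (the "strategy of failure");
    the column player (the "strategy of success") receives -P[i, j].
    """
    P = np.asarray(P, dtype=float)
    shift = 1.0 - P.min()            # make all payoffs strictly positive
    A = P + shift
    m, n = A.shape
    # Row player: maximize v  <=>  minimize sum(x) s.t. A^T x >= 1, x >= 0
    res = linprog(c=np.ones(m), A_ub=-A.T, b_ub=-np.ones(n),
                  bounds=[(0, None)] * m, method="highs")
    x = res.x
    value = 1.0 / x.sum() - shift    # undo the shift
    strategy = x / x.sum()           # row player's equilibrium mix
    return value, strategy

# Matching pennies: game value 0 with uniform mixing
v, s = solve_zero_sum([[1, -1], [-1, 1]])
print(round(v, 6), np.round(s, 3))
```

The dual of this program yields the column player's strategy (Model 4); in the FMEA setting, failure modes with a nonzero weight in the equilibrium mix are the ones flagged for priority.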

Application of Study
The proposed model is implemented in a hypothetical healthcare facility in a metropolitan area. This facility is dedicated to treating patients affected by severe and diverse health issues, and it is assumed to face unique challenges due to a high incidence rate and a shortage of medical service staff. Additionally, the facility is assumed to experience a heavy daily patient flow and a growing number of severe cases requiring hospitalization, resulting in a shortage of available beds. The increased workload on the medical staff necessitates frequent processing of equipment and medical tools, which, if not managed effectively, could lead to an elevated number of confirmed cases and occupational accidents. Consequently, conducting a risk assessment for such a complex hypothetical healthcare facility is imperative.
The healthcare units considered in this hypothetical study play a crucial role in healthcare settings, as they eliminate all microorganisms from equipment and medical tools. This process consists of seven steps: (i) decontamination, (ii) preparation, (iii) packaging, (iv) sterilization, (v) quality control, (vi) storage, and (vii) distribution. The process's unpredictability and lack of structure stem from its reliance on patient feedback. The process renders various instruments free of microorganisms within the units.
The significance of the problem at hand can be summarized as follows. This study represents the second iteration in developing the classical FMEA method for evaluating a complex healthcare unit within this hypothetical healthcare system. Managing such teams is challenging due to the risk of contagion and the unit's high-risk nature. Any risk factor that emerges within this unit is of the utmost importance, as it can impact all other departments.
For instance, lapses in infection control, such as the spread of infection through equipment due to an employee's injury or a dry cough, can result in pathogens persisting on medical tools, patients, and other healthcare staff. This transmission of infection has several negative consequences, including prolonging the duration of patient treatment, exposing patients to new risks, and increasing the number of confirmed cases, ultimately driving up healthcare costs. In the hypothetical hospital considered in this study, all materials and equipment are assumed to be sterilized after each intensive care operation for patients with coronavirus disease.
Drawing on the relevant literature [134,135], the authors' experience, and the endorsement of the healthcare system's decision-makers, a set of 23 failure modes tailored to this case study was identified. In line with the first phase of the proposed model, all pertinent information about the healthcare unit under study was collected, including the control activities, causes, and projected consequences associated with each identified failure mode; these are presented in Table 1.

To manage the identified failure modes, a heterogeneous group of experts, consisting of four decision-makers with relevant experience and expertise in the healthcare system and in healthcare units, was assembled.
The four decision-makers DM = {DM1, DM2, DM3, DM4} were invited to evaluate the Severity, Occurrence, and Detection of the identified failure modes. To achieve more realistic results, the importance weight of each decision-maker is obtained from his or her individual quality profile [136-138]. A decision-maker's weight indicates how closely the final decision reflects his or her opinions. For this case, the importance weights of the four decision-makers are 0.250, 0.275, 0.325, and 0.150, respectively.
In the first step, the importance weights of the risk factors are obtained using BWM. The results based on the decision-makers' evaluations are provided in Table 2. As an example, DM2 evaluates Severity and Detection as the best and worst risk factors, respectively. Using the nine-point scale, DM2 expresses the preference of the best risk factor, Severity, over Occurrence and Detection in a best-to-others vector via Equation (1) as $RF_{BO}^{DM2} = (1.000, 4.000, 8.000)$. In addition, DM2 expresses the preference of Severity and Occurrence over the worst risk factor, Detection, in an others-to-worst vector via Equation (3) as $RF_{OW}^{DM2} = (8.000, 4.000, 1.000)$. The optimal importance weights of the risk factors are then determined by solving Model 2, giving $w^{*DM2} = (0.718, 0.205, 0.077)$. Using Equation (5), the CR value is obtained as 0.10, meaning that the results have satisfactory consistency. To evaluate the risk factors Severity and Occurrence of the 23 failure modes, the four decision-makers used the set of linguistic terms $\Phi = \{\varphi_A = \text{Very poor}, \varphi_B = \text{Poor}, \varphi_C = \text{Slightly poor}, \varphi_D = \text{Fair}, \varphi_E = \text{Slightly good}, \varphi_F = \text{Good}, \varphi_G = \text{Very good}\}$, together with a corresponding linguistic term set for the risk factor Detection. The Pythagorean fuzzy uncertain linguistic payoff matrices ($k$ = 1, 2, 3, and 4) provided by the four decision-makers are shown in Table A1 in Appendix A. In the first step, the Pythagorean fuzzy uncertain linguistic payoff matrices are normalized using Equation (12) for $k = 1, 2, \ldots, 4$. In the second step, the normalized payoff matrices $\tilde{P}^k$ are aggregated into a single payoff matrix $R = (r_{ij})_{23 \times 3}$, as presented in Table 3.
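DM2's worked example can be reproduced with a small linear program implementing Model 2; this is a sketch under the linear BWM formulation, with variable names of our own choosing.

```python
import numpy as np
from scipy.optimize import linprog

# DM2's judgements: best = Severity (index 0), worst = Detection (index 2)
a_BO = [1.0, 4.0, 8.0]   # best-to-others vector
a_OW = [8.0, 4.0, 1.0]   # others-to-worst vector
n, best, worst = 3, 0, 2

# Variables: [w1, w2, w3, xi].  Linear BWM (Model 2): minimize xi subject to
# |w_best - a_Bj * w_j| <= xi,  |w_j - a_jW * w_worst| <= xi,  sum(w) = 1.
c = [0.0] * n + [1.0]
A_ub, b_ub = [], []
for j in range(n):
    for sign in (1.0, -1.0):
        row = [0.0] * (n + 1)          # +/-(w_best - a_Bj * w_j) - xi <= 0
        row[best] += sign
        row[j] -= sign * a_BO[j]
        row[n] = -1.0
        A_ub.append(row); b_ub.append(0.0)

        row = [0.0] * (n + 1)          # +/-(w_j - a_jW * w_worst) - xi <= 0
        row[j] += sign
        row[worst] -= sign * a_OW[j]
        row[n] = -1.0
        A_ub.append(row); b_ub.append(0.0)

A_eq = [[1.0] * n + [0.0]]             # weights sum to one
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
              bounds=[(0, None)] * (n + 1), method="highs")
w, xi = res.x[:n], res.x[n]
print(np.round(w, 3), round(xi, 3))    # approx [0.718 0.205 0.077], xi ~ 0.103
```

The optimal weights match the paper's $w^{*DM2} = (0.718, 0.205, 0.077)$, and $\xi^* \approx 0.103$ is consistent with the reported CR of 0.10, well under the 0.2 acceptability threshold.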
To effectively prevent infection, it is crucial to follow recommended practices such as regular and thorough handwashing, avoiding touching one's face, practicing appropriate respiratory etiquette, and maintaining physical distancing [132].It is important to acknowledge that complete risk elimination may not be feasible for all individuals.
To bolster these primary measures, adopting a multi-pronged approach is beneficial in enhancing safety and reducing the likelihood of transmission:
• Elevating cleaning routines, with a special focus on surfaces and tools that undergo frequent handling.
• Discouraging communal usage of equipment and supplies, thereby diminishing potential sources of contamination.
• Designing a dynamic communication blueprint that adjusts to different risk thresholds, ensuring every employee is adequately informed and aligned with the latest safety protocols.
• Curating a dedicated mental health support system, addressing the unique stresses and anxieties that may arise during such challenging times.
• Amplifying environmental sanitation measures, emphasizing the disinfection of objects and surfaces that are in regular use.
• Introducing protective installations, like clear Plexiglas barriers, at interaction points to reduce direct contact and safeguard both employees and visitors.
• Optimally selecting and distributing personal protective equipment (PPE) after a meticulous risk evaluation, ensuring it is utilized effectively and safely.
• Holding regular training workshops to impart knowledge about the correct methodologies for wearing and removing PPE without risking contamination.
• Enhancing on-site surveillance and audit mechanisms to ascertain strict adherence to all safety guidelines.
• Incorporating systematic temperature screenings and health evaluations at facility entrances, serving as preliminary checkpoints.
• Promoting the use of touchless technologies where possible, such as automatic doors and touch-free payment systems.
• Regularly updating and reviewing emergency response plans to address potential outbreak scenarios.
• Encouraging telecommuting and remote work options to reduce the density of people in a confined space.
• Facilitating virtual meetings and conferences as alternatives to in-person gatherings.
• Providing well-ventilated spaces and considering upgrading air filtration systems to capture potential viral particles.
• Educating and encouraging employees to stay home if they feel unwell or exhibit any symptoms.
These measures collectively contribute to a comprehensive approach to infection prevention and control.
Results from real-world evaluations in hospital healthcare units indicate that the prevailing risk assessment methods fall short, resulting in a notable number of accidents and mishaps. Many intervention strategies have primarily targeted failure modes with less critical risk priorities. However, the introduced method proves adept at pinpointing the truly vital failure modes, affirming its foundational logic. Notably, the insights gained from this method are not confined to the healthcare units examined; they hold potential for broad application, enhancing safety in the many healthcare environments grappling with similar issues. By adopting the suggested corrective actions stemming from these findings, healthcare units can progressively lower their risk to a universally accepted or "As Low As Reasonably Practicable" (ALARP) level. This methodology paves the way for fortified safety protocols in healthcare and comparable sectors.
The Nash Equilibrium strategy in our models, characterized by the state in which each player's strategy is optimally chosen against the other players' strategies, prioritizes specific failure modes. In particular, failure modes F7, F11, F23, F17, and F14 are identified as critical intervention points, as denoted by the nonzero values in the Nash Equilibrium strategy profile. These failure modes, which cover diverse risks ranging from 'increased levels of burning' to 'escalating transmission of infections', are highlighted for immediate attention due to their high impact on system safety and operational continuity.
Compared with the major models in the existing literature, our proposed model delineates a more granular and nuanced ranking of failure modes. For instance, while traditional FMEA prioritizes failure modes based solely on RPNs, our model integrates the Nash Equilibrium concept to refine this prioritization, considering the interplay of multiple decision-makers and their strategies. This allows for a dynamic assessment that can adapt to changing risk scenarios, a feature that is particularly pertinent in the wake of global health events such as the pandemic.
Moreover, the application of our model extends beyond mere ranking, offering strategic intervention points. This is exemplified by the improved ranking of F13, which correlates with the enhanced safety protocols in medical guidelines, particularly regarding personal protective equipment (PPE). This response has been critical in managing infection transmission during the pandemic.
In practical terms, our model underscores the importance of proactive and preventive measures, as evidenced by the detailed, actionable strategies presented above. These strategies range from heightened cleaning routines to the implementation of advanced surveillance mechanisms. Incorporating such comprehensive prevention and mitigation strategies reflects a significant leap from the detectability-focused assessments of traditional models.
Empirical evidence from the application of our model in hospital healthcare units reveals that, while traditional risk assessments have led to interventions focused on less critical failure modes, our model adeptly identifies those failure modes that, if unaddressed, could result in severe consequences. This identification aligns with the objective of reducing risks to an ALARP level, ensuring that the safety measures implemented are both theoretically sound and practically viable.
In contrasting our results with those derived from prevalent models, it is evident that our methodology offers a significant enhancement in pinpointing the failure modes that necessitate immediate and focused intervention. This comparative advantage applies to healthcare units and can be extrapolated to other domains where safety and risk management are paramount. The practical implications are profound: adopting our model can lead to a tangible improvement in safety outcomes and a more rational allocation of resources towards mitigating the most significant risks. This discussion provides a critical evaluation of the proposed model against the backdrop of existing methodologies, thereby elucidating its scientific contribution to the field of risk management.

Methodology Validations
Following the study of [139], the three assessments below are considered in the present study to partially validate the introduced decision-making approach:
• Assessment 1: To ensure the dependability of a decision-making tool, the tool must consistently uphold the superiority of the best alternative. It should never replace the top-ranked alternative with a lower-ranked one, unless that substitution is made while considering the relative importance of each criterion's variation. In other words, the tool should retain the best option unless there is a compelling reason, based on the specific criteria and their importance, to choose an alternative that is not ranked highest overall.
• Assessment 2: A reliable decision-making tool must respect the transitivity property, which ensures logical consistency in the decision-making process. If alternative A is preferred over alternative B, and alternative B is preferred over alternative C, then the tool should logically conclude that alternative A is preferred over alternative C.
• Assessment 3: When a complex decision problem is dissected into smaller components and the same tool is used for alternative prioritization in each component, the combined prioritization of the alternatives at the component level must align with the original prioritization of the undivided decision problem; breaking the decision into smaller parts should not introduce inconsistencies or contradictions in the overall decision. In our approach, which involves risk assessments for failure modes, these assessments are interdependent; Assessment 3 should therefore be conducted exclusively with the introduced risk-factor evaluation approach, to maintain the integrity and consistency of the decision-making process.
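Assessment 2 amounts to checking the transitivity of the pairwise preference relation produced by the tool, which can be sketched as follows; the failure-mode pairs used here are illustrative, not the study's actual preference data.

```python
from itertools import permutations

def is_transitive(prefers):
    """Check Assessment 2: if A > B and B > C, then A > C must also hold.

    `prefers` is a set of (better, worse) pairs over the alternatives.
    """
    items = {x for pair in prefers for x in pair}
    for a, b, c in permutations(items, 3):
        if (a, b) in prefers and (b, c) in prefers and (a, c) not in prefers:
            return False
    return True

# A ranking produced by a consistent tool is transitive...
ranked = {("F7", "F11"), ("F11", "F23"), ("F7", "F23")}
print(is_transitive(ranked))   # True
# ...while a preference cycle violates Assessment 2
cycle = {("F7", "F11"), ("F11", "F23"), ("F23", "F7")}
print(is_transitive(cycle))    # False
```

A complete ranking such as F7 > F11 > F23 induces a transitive relation by construction, so a tool that outputs total orders passes Assessment 2 automatically; the check matters when preferences are elicited or aggregated pairwise.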

Validity Examination of the Proposed Approach Using Assessment 1
To validate the effectiveness of the proposed approach, which combines Game Theory and FMEA within an advanced fuzzy environment, we begin with the first assessment. In this assessment, a non-optimal failure mode (FM.12) is replaced by a worst-case version, labeled FM.12*. Assume that the group of decision-makers involved in this study provides their input on FM.12*, taking into account the risk factors (S, O, and D). Applying the same computational process outlined in this study, the resulting prioritization remains consistent, F7 > F11 > F23 > F17, with F7 retaining the highest priority among the failure modes. This outcome underscores that the proposed approach does not alter the selection of the optimal failure mode when a non-optimal one is substituted with its worst-case version. Consequently, the validity of the proposed approach is affirmed under the first assessment. These findings extend to other non-optimal failure modes, such as F16, F15, F2, and F22, reinforcing the approach's reliability in those cases as well.

Validity Examination of the Proposed Approach Using Assessments 2 and 3
To validate the introduced methodology under the second and third assessments, the original set of failure modes in the FMEA was divided into four smaller decision-making problems:
• {F7, F23, F14, F11, F16, F17}
• {F1, F12, F15, F4, F3}
• {F9, F22, F21, F10, F5}
• {F5, F13, F19, F20, F22, F8}
Following the same computational process outlined in the methodology, the corresponding RPNs of the failure modes were determined by aggregating the prioritizations from the sub-problems. The resulting overall priority ranking aligns perfectly with the original prioritization of the undivided set of failure modes, which demonstrates the transitivity of the decision-making problem. As a result, the introduced methodology is validated and remains consistent under both the second and third assessments.

Sensitivity Analysis
In this section, a sensitivity analysis and a comparative study are conducted to demonstrate the effectiveness of the proposed model. Specifically, in our study, the risk factor "Detection" is selected as the expected value by the optimizer. To perform the sensitivity analysis, the other risk factors, "Severity" and "Occurrence", are also considered as expected values. The ranking of failure modes is determined based on the RPN using Pythagorean fuzzy uncertain linguistic variables, as outlined in Step 2 of the hybrid model. Table 4 presents the rankings of the failure modes under each of these strategies. Figure 2 illustrates that certain failure modes, such as F14, F11, and F7, exhibit similar ranking patterns under the Severity-based and Occurrence-based rankings; however, there is a notable reversal in the ranking pattern under the Linguistic-based ranking.
The network representation in Figure 2 can be read as follows:
1. Nodes: There are 23 nodes, labeled F1 to F23, representing the failure modes. The central positioning of some nodes (such as F1, F2, and F3) may suggest their importance or centrality in the network.
2. Connection types: Severity-based connections (orange) are the most prominent in the figure; notably, F2 appears to have the most Severity-based connections. Occurrence-based connections (gray) are less prevalent but still significant; F4 and F5, for instance, have multiple Occurrence-based connections. Linguistic-based connections (red) are the least common and primarily involve nodes such as F6, F7, and F8.
3. Clusters and sub-networks: The nodes and their connections can be divided into distinct clusters or sub-networks. For example, F6 to F9 form a cluster primarily connected by Linguistic-based relations, while nodes F10 to F15 are interconnected primarily by Severity-based connections.
4. Central nodes: F1, F2, and F3 appear to be central nodes, given their location and number of connections, which may indicate their role as primary or overarching factors.
5. Diversity of relations: The multiplicity of connection types suggests that the network examines the relationships between nodes from different perspectives or criteria. The preponderance of Severity-based connections may indicate that severity plays a dominant role in this context.
6. Peripheral nodes: Nodes F16 to F23 lie on the periphery, with fewer connections, which may mean they are secondary or less influential factors in this network.
7. Potential hierarchies: The central nodes' connections to the peripheral nodes may suggest a flow of influence or a hierarchical structure; for example, F2's connections may indicate its influence over multiple other factors.
8. Linguistic relations: The presence of Linguistic-based connections, especially around F6 to F9, may imply a subset of factors related through language, semantics, or terminology.
Furthermore, the "Spearman rank correlation coefficient" is calculated between each pair of the employed techniques, as shown in Table 5. This coefficient reflects the level of conformity between rankings: a higher Spearman correlation coefficient indicates a stronger alignment between the ranking methods, highlighting the consistency or divergence in their assessments. As seen in Table 5, the rankings of the proposed Game-Theory-based approach show a higher degree of alignment with the Severity-based and Occurrence-based methods. When the Game-Theory-based rankings are compared with the other three methods, it becomes evident that the proposed hybrid model performs comparably to the Severity-based and Occurrence-based methods in terms of ranking conformity among failure modes. However, notable disparities arise with the Linguistic-based method: because the Linguistic-based approach utilizes raw and unprocessed data, its rankings differ substantially from those of the other methods.
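The ranking-conformity computation can be sketched with scipy's `spearmanr`; the two rank vectors below are illustrative, not the values behind Table 5.

```python
from scipy.stats import spearmanr

# Illustrative priority ranks of five failure modes under two methods
game_theory_rank = [1, 2, 3, 4, 5]   # e.g., F7, F11, F23, F17, F14
severity_rank    = [1, 2, 3, 5, 4]   # the same modes ranked by Severity alone

# rho = 1 - 6 * sum(d_i^2) / (n * (n^2 - 1)) for untied ranks
rho, _ = spearmanr(game_theory_rank, severity_rank)
print(round(rho, 3))   # 0.9 -> strong ranking conformity
```

Here only the last two modes swap places, giving $\rho = 1 - 6 \cdot 2 / (5 \cdot 24) = 0.9$; values near 1 indicate strong agreement between two ranking methods, while values near 0 indicate divergence.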
The "Spearman correlation coefficient" provides an overall assessment of the rankings generated by all methods, and its importance weighting is visualized in Figure 3. As depicted in the figure, compared with the Linguistic-based method, the proposed Game-Theory-based approach emerges as notably more reliable and applicable, underscored by its enhanced capability to identify failure modes within healthcare units.
In the graphical representation of this analysis, the y-axis serves as a quantitative scale, representing the aggregation of the importance weights assigned to the various criteria or elements under consideration; each weight reflects the relative significance or priority of the element it corresponds to within the overall system or model being assessed. The x-axis delineates the method of assessment, denoting the range of evaluative techniques employed to gauge the performance or impact of these elements. Together, the two axes form a coordinate system that allows a visual interpretation of how the different assessment methods correlate with the weighted importance of the system's components, facilitating a clearer understanding of where to focus strategic efforts for optimization or improvement.

Conclusions
The conclusion of this study illuminates the profound challenges faced by healthcare workers and systems amid the pandemic, underscoring the criticality of adept risk assessment and management within healthcare units.To bolster the responsiveness of healthcare systems during such emergencies, our research presents an advanced, Game-Theory-based adaptation of the conventional Failure Mode and Effect Analysis (FMEA) method.
This pioneering hybrid model merges Game Theory with the BWM and enriches it with Pythagorean fuzzy uncertain linguistic variables, thereby overcoming some of the limitations inherent in traditional FMEA.The practical outcomes of our research are significant, exhibiting the model's enhanced capacity to streamline decision-making, furnish reliable risk rankings via optimization algorithms, and offer versatility in various healthcare contexts.As a result, the model contributes to the resilience of healthcare systems, enabling more decisive and accurate strategic choices that directly affect patient care and resource management.
While our Game-Theory-based FMEA stands as both a theoretical construct and a tangible tool for improved decision-making, we must acknowledge certain limitations and the simplifications or assumptions made within our study:
• Assumption of Rationality: The model assumes that all decision-makers behave rationally and that their judgements are consistent. This may not always hold in real-world scenarios, owing to cognitive biases and emotional factors.
• Complexity and Comprehensibility: The integration of Game Theory and Pythagorean fuzzy logic increases the complexity of the FMEA process, which may require additional training for stakeholders to utilize the model effectively.
• Data Dependence: The model's effectiveness is highly dependent on the accuracy and completeness of the input data. Any gaps or inaccuracies in the initial data can significantly affect the reliability of the risk assessment outcomes.
• Static Nature of Analysis: While the model excels at capturing a snapshot of risk factors and their interactions, it may not fully account for the dynamic nature of healthcare systems, where risks can evolve rapidly.
• Scope of Application: The current implementation of the model is tailored to healthcare systems and may require modifications to be effective in other industries or contexts.
• Consensus Building: The model presumes a consensus among decision-makers when determining the weights of the risk factors, which can be challenging to achieve in practice.
• Resource Limitations: The application of this advanced FMEA framework demands computational resources and expertise that may not be readily available in all healthcare settings.
Future research should build on these limitations, exploring the flexibility of different classes of games for failure mode ranking and undertaking a more diverse comparative analysis of linguistic variables to reflect the nuances of human judgement more faithfully. Moreover, introducing imaginary RPNs could present a more robust approach to evaluating risk levels, potentially transforming the traditional FMEA process.
In conclusion, our enhanced FMEA framework marks a significant stride towards refining healthcare delivery and patient care in the face of pandemics. It equips healthcare establishments to manage the immediate strain and to strengthen their preparedness for future adversities. Subsequent research, mindful of the limitations and assumptions of the current model, can further refine this approach, thereby exerting a substantial and lasting influence on the healthcare sector by evolving strategic risk management into a pivotal instrument for safeguarding lives and enhancing the quality of healthcare services globally.

Safety 2024, 10, 4

Figure 1. The proposed framework for the developed FMEA (referred to as PFULVs, which stands for 'Pythagorean fuzzy uncertain linguistic variables').


Figure 2. Ranking conformity of failure modes based on different scenarios.

Table 5. Spearman rank correlation coefficients between the employed ranking methods.

Figure 3. The importance ranking of all different methods.


Table 1. The developed FMEA of a complex healthcare unit under the emergency condition.

Table 2. The risk factors' importance weights.

Table A3. The crisp single-weighted payoff matrix.