The Impact of Patient Infection Rate on Emergency Department Patient Flow: Hybrid Simulation Study in a Norwegian Case

The COVID-19 pandemic put emergency departments all over the world under severe and unprecedented distress. Previous methods of evaluating patient flow impact, such as in-situ simulation and tabletop studies, are impractical, time-consuming, costly, and inflexible in a rapidly evolving pandemic. For instance, it is challenging to study patient flow in the emergency department under different infection rates and gain insights using in-situ simulation and tabletop studies. Despite circumventing many of these challenges, the simulation modeling approach, and hybrid agent-based modeling in particular, stands underutilized. This study investigates the impact of increased patient infection rate on emergency department patient flow using a developed hybrid agent-based simulation model and reports findings on the patient infection rate in different emergency department patient flow configurations. The results quantify and demonstrate that an increase in patient infection rate leads to an incremental deterioration of the patient flow metrics of average length of stay and crowding within the emergency department, especially if waiting functions are introduced. Along with other findings, it is concluded that waiting functions, including the waiting zone, make the single average length of stay an ineffective measure, as it creates a multimodal distribution with several central tendencies.


Introduction
The COVID-19 pandemic has placed a significant burden on the healthcare system since early 2020 and was declared by the World Health Organization (WHO) to constitute a public health emergency of international concern [1]. Emergency departments had to amend their patient flow policies to accord with the risk of infection and the ramifications associated with contracting the virus [2,3]. Emergency department managers had to modify their patient flow policies to deal with the threat posed by people who may have contracted SARS-CoV-2, a virus known for its high infection rate. In this unprecedented and challenging time, emergency department managers had to change patient flow operations to adhere to governmental regulations such as social distancing, minimizing personal contact, utilizing personal protective equipment, and extra sanitary measures, as well as other organization-specific guidelines [4,5].
The COVID-19 virus, recognized for its rapid mutation rate [6], high infection rate [7], and potential severity in immunocompromised patients [8], has highlighted the importance of proactive threat assessment and analysis, ideally performed with maximum flexibility, minimal risk, and cost-effectiveness. The COVID-19 pandemic has renewed focus on patient flow within emergency departments, requiring a reconsideration of clinical patient management strategies [8][9][10].
Hospital emergency departments represent multifaceted systems characterized by numerous dynamic parameters, which make the task of assessing the comprehensive and direct impacts of patient flow modifications quite challenging [11][12][13]. As a result, the application of low-risk, cost-effective strategies, such as computerized simulation modeling, becomes advantageous to assess the potential effects of interventions aimed at mitigating the repercussions of health crises, including the COVID-19 pandemic. The research literature points to evidence that suggests that computerized simulation models provide a resource-efficient approach to evaluating and predicting the consequences of various interventions and modifications on patient flow within healthcare settings [14][15][16][17][18].
The emergency department patient flow issue has previously been investigated through various approaches in the research literature, such as field experiments, surveys, and simulations. An approach to simulation known as "in-situ" implies healthcare workers physically taking part in the simulation [19]. This methodology thus places considerable demands on both time and resources, which are increasingly scarce in the healthcare sector. As a more cost-effective and low-risk approach, computer-based simulation modeling has become a more prevalent and attractive alternative [19,20]. The research disclosed in this paper is of the latter category. To improve overall performance, simulation modeling is increasingly necessary for emergency departments to predict what effects changes such as new layouts, new policies, and new technologies will have without actually having to implement them [21].
Salmon et al. examined the computerized simulation models applied to emergency department concerns and identified the typical patterns and evolving trends. The review determined that the discrete-event approach was most prominent when examining the performance of processes, resource capacity, and workforce planning within emergency departments [22]. In addition, this finding was corroborated by Hamza et al. [23], who discovered that few simulation models have probed operational patient throughput. Salmon et al. further highlighted that hybrid simulation modeling, i.e., using a combination of modeling approaches (see Section 2.1, p. 3, for a definition of this term), is limited and takes on a more strategic perspective [20]. Nevertheless, hybrid modeling has tremendous potential regarding operational aspects, as demonstrated by Hamza et al., who combined discrete-event and agent-based modeling to simulate operational patient throughput [22]. On the same note, Friesen et al. have identified agent-based modeling as a simulation modeling paradigm with 'tremendous potential' while also pointing out the necessity of combining it with other paradigms to construct an accurate simulation model [24].
Previous studies investigated the effect of different intervention policies, e.g., the number of treatment rooms and waiting zones, on several performance indicators (average length of stay, crowding, treatment time) at a specific influx rate of patients that have contracted SARS-CoV-2. Therefore, the purpose of this study is to investigate the effect of patient infection rate on the patient flow by seeking answers to the following two research questions (RQs):

1. What is the likely impact of increased patient infection rate on patient flow parameters in an emergency department?
2. What is the disparate effect of the increased patient infection rate on different COVID-19 intervention configurations in the emergency department?

Figure 1. This study's conceptual scope in relation to Randers' four-step simulation methodology [26][27][28]. Figure adapted and reused from previous work [25,29].
The model that has undergone the above-introduced conceptualization and formulation stage is a hybrid simulation model that combines the two paradigms of discrete-event simulation modeling (often referred to as DES) and agent-based simulation modeling (often referred to as ABM). A hybrid simulation model, also called a multi-method model, in this work refers to the combination of two or more computer simulation paradigms, as detailed in the works of Borshchev [30] as well as the more recent works of Ören et al. [31]. This particular combination of simulation modeling paradigms, DES and ABM, was required to fully capture the logic of the pandemic-induced interventions on the patient flow within the case organization. This combination of paradigms was shown to provide sufficient versatility to reproduce the patient flow logic of the case emergency department's operation, especially regarding the prioritization of patient agent groups, which will be expounded upon in subsequent subsections [32].

Types of Data Utilized in This Study
The present research disclosed in this paper is based on a computerized simulation model constructed and developed upon three major portions of data as systemized in Figure 2. This figure is reused from previous work [25,29]. The following paragraphs will detail how these data portions were collected, their overall qualities, and how they were implemented to inform this computerized simulation model study.

Patient Influx Data
The simulation model input uses data from a local database called Meona at the case organization, which, among other types of data not used in this study, keeps a record of the number of patients admitted into the emergency department. These Meona data, of which an illustrated excerpt is shown in Figure 3, are numerical timestamps of patient arrivals; this data is from previous work [29]. Each timestamp represents a patient agent to be inserted into the computerized simulation model. The dataset comes from two selected working days at the case emergency department, some of whose characteristics will be expounded upon in the later subsection, Section 2.5, Experimental Design.

Figure 3. Excerpt from a portion of the numerical data in a spreadsheet used for this study. This data is reused from previous work [29].
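As a minimal sketch of how such timestamp data could feed the model, the snippet below converts clock-time arrivals into seconds-from-midnight offsets at which one patient agent would be injected. The sample timestamps and the function name are illustrative assumptions, not the actual Meona data or the study's implementation.

```python
from datetime import datetime

def to_injection_offsets(timestamps, day_start="00:00:00"):
    """Convert HH:MM:SS arrival timestamps into seconds-from-midnight
    offsets at which one patient agent is injected into the model."""
    t0 = datetime.strptime(day_start, "%H:%M:%S")
    return [int((datetime.strptime(ts, "%H:%M:%S") - t0).total_seconds())
            for ts in timestamps]

# Illustrative arrival timestamps (not the actual dataset).
offsets = to_injection_offsets(["07:42:10", "08:05:33", "08:05:33", "09:17:01"])
```

One agent per timestamp mirrors the per-patient granularity described above; duplicate timestamps simply yield two agents injected at the same second.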

Qualitative Data
The simulation model itself has been constructed on the basis of knowledge and insights retrieved from expert stakeholders working daily at the case hospital in which this research is concerned. This means that the qualitative data portion mostly lies within the logic and layout of the computerized model itself and has served as the basis for the simulation model conceptualization and formulation. The qualitative data have been gathered through several interviews, meeting discussions, and by walking through the

Patient Influx Data
The simulation model input uses data from a local database called Meona at the case organization, which, among other types of data not used in this study, keeps a record of the number of patients that have been admitted into the emergency department. These Meona data, of which an illustrated excerpt is shown in Figure 3, are numerical data representing timestamps of arrival for patients; this data is from previous work [29]. Each timestamp represents a patient agent to be inserted into the computerized simulation model. The dataset comes from two selected working days at the case emergency department, which will expound upon some of their characteristics in the later subsection, Section 2.5-Experimental Design. Excerpt from a portion of the numerical data in a spreadsheet used for this study. This data is reused from previous work [29].

Qualitative Data
The simulation model itself has been constructed on the basis of knowledge and insights retrieved from expert stakeholders working daily at the case hospital with which this research is concerned. This means that the qualitative data portion mostly lies within the logic and layout of the computerized model itself and has served as the basis for the simulation model conceptualization and formulation. The qualitative data have been gathered through several interviews, meeting discussions, and walkthroughs of the patient flow pathway and logic in the case emergency department's localities. The model development process, with the specific tools used, is disclosed, explained, and related to the theoretical underpinnings of the simulation modeling method in previous works [25,32].

Crowding Data
Computational simulation model output, prior to any of the experimental runs, has been cross-checked against actual crowding curves and with system experts for model verification and model validation. This verification is presented in previous work [32] and also served the purpose of gaining stakeholder trust and confidence in the model, as it indicated the model's performance relative to the real system.

Case Emergency Department, Interventions, and Resources
The case emergency department belongs to a large hospital in the southwest part of Norway. Stavanger University Hospital (SUS) covers the specialized healthcare needs of the public in an area encompassing a population of 369,000 [33]. The hospital has 7800 employees in total. Like most hospitals in Norway, SUS is a publicly funded hospital that is steadily and increasingly subject to financial cuts, fiscal tightening of healthcare budgets, and limited availability of scarce resources. The hospital's emergency department reports admitting approximately 35,000 patients annually [34].
The COVID-19 pandemic in Norway led the case emergency department managers to introduce a package of interventions affecting the patient flow process in the department. These interventions were introduced to lower the risk of cross-infection between patients residing within the emergency department. Mere policy changes enabled some of the interventions, while others were enabled by adding physical resources to the emergency department. Table 1 lists and describes the different resources added to the emergency department to target a lower risk of intradepartmental cross-infection. Figure 4 depicts the case emergency department layout and highlights the different resources with colors; this data is from previous work [25,29]. The resources with dotted lines are those implemented due to the onset of the COVID-19 pandemic and are used to allow for policies likely to reduce the risk of intradepartmental cross-infection.
The Case Emergency Department's Patient Flow: Under ordinary circumstances, patients typically enter through the entrance of the emergency department and, once inside, find themselves in a pre-triage. The pre-triage was introduced at the case organization as an outdoor installation to the emergency department at the onset of the pandemic. Here, the patient undergoes a patient classification, a specific screening consisting of a questionnaire and an antigen test to assess whether the incoming patient should be regarded as infected by COVID-19 (hereinafter denoted COVID+) or not infected by COVID-19 (hereinafter denoted COVID−).

Table 1. Tabulated summary of resources aimed at reducing the risk of intradepartmental cross-infection.

Abbreviation
Resource Description

PT
Pre-Triage: The pre-triage is a physical location introduced outside of the emergency department. Admitted patients enter here for the healthcare workers to assess whether the incoming patient should be regarded as infected by COVID-19 (COVID+) or not infected by COVID-19 (COVID−). Depending on the status, the patient will, in the case of COVID−, be directed along the regular route via the emergency department reception or, in the case of COVID+, be fast-tracked directly to an available treatment room for proper isolation and treatment of the emergent condition the patient suffers from.

E.TR.
Extra Treatment Rooms: Because of the COVID-19 pandemic, extra treatment rooms were made available to the emergency department. These additional treatment rooms were introduced to compensate for the higher occupancy of patients in the emergency department at the onset of the pandemic. The case emergency department introduced four extra treatment rooms.

WZ
Waiting Zone: A designated area allocated for the precise purpose of enabling the fast-tracking of patients classified as COVID+. In the instance where a patient who is COVID− has completed their treatment and requires either restitution or non-intensive follow-up, the treatment room may be deemed unnecessary for their further stay in the emergency department. Suppose no further treatment rooms are available for incoming COVID+ patients entering the emergency department. In that case, the COVID− patient may vacate the treatment room to allow an incoming COVID+ patient to expeditiously receive proper isolation, thus minimizing cross-infection among other patients and healthcare workers within the emergency department.
From this point, the patient flow pathway forward depends on the above-mentioned patient classification, i.e., whether the individual patient is regarded as COVID+ or COVID−. A patient found to be COVID− will enter the waiting room as their second locale. Once the patient is registered as having arrived in the waiting room, the patient can, if sufficient resources are available, either get a treatment room, if one is available, or go to the triage if no treatment rooms are available and continue their waiting time in the triage before entering a treatment room where their treatment can finally start. Patients found in the pre-triage to be COVID+ are immediately fast-tracked to a treatment room. This fast-tracking serves to reduce the probability of infecting other patients residing within the emergency department. From here, their treatment will start once the proper resources are ready.
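The routing described above can be sketched, in simplified form, as a single decision function. The function and its return labels are illustrative assumptions, not the model's actual implementation.

```python
def route_patient(is_covid_positive, treatment_room_free, triage_free=True):
    """Simplified sketch of the post-pre-triage routing decision."""
    if is_covid_positive:
        # COVID+ patients are fast-tracked straight to a treatment room;
        # if none is free, the fast-track is blocked (a tracked event).
        return "treatment_room" if treatment_room_free else "blocked"
    # COVID- patients go via the waiting room, then to a treatment room
    # if one is free, otherwise they continue waiting in the triage.
    if treatment_room_free:
        return "treatment_room"
    return "triage" if triage_free else "waiting_room"
```

The "blocked" branch corresponds to the 'Times TR blocked for COVID+' indicator described later in this section.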
Depending on how long patients have been receiving treatment, a COVID− patient may have to leave the treatment room to give it up for a patient categorized as COVID+ who needs the room, for the above-mentioned reason, to obtain prompt isolation. This happens if and only if all of the following four conditions are simultaneously true: (1) all other treatment rooms are occupied, (2) the patient has received treatment for at least one hour, (3) the patient in question is the patient who has received treatment for the longest, and (4) the patient in question is classified as COVID−. In such a special case, the patient in question will have to leave the treatment room and continue their stay in the Waiting Zone. Here, the patient has to wait until a treatment room is available again in order to proceed with the necessary treatment before proceeding to discharge and thereby being able to exit the emergency department.
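A minimal sketch of this four-condition check, assuming hypothetical patient records with `treatment_start` (seconds) and `covid_positive` fields (the field names and data layout are assumptions for illustration):

```python
def must_vacate(patient, patients_in_rooms, all_rooms_occupied, now):
    """Return True iff the four simultaneous conditions hold under which a
    COVID- patient yields a treatment room to an incoming COVID+ patient."""
    longest_treated = max(patients_in_rooms,
                          key=lambda p: now - p["treatment_start"])
    return (all_rooms_occupied                              # (1) no free rooms
            and now - patient["treatment_start"] >= 3600    # (2) >= 1 h treated
            and patient is longest_treated                  # (3) treated longest
            and not patient["covid_positive"])              # (4) COVID-
```

Because all four conditions are conjoined, relaxing any single one (e.g., a free room elsewhere) keeps the COVID− patient in place.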
In the opposite case, if the patient is categorized as COVID+, the patient will go from the pre-triage and be fast-tracked to a treatment room for immediate isolation. Upon arrival at the treatment room, the patient will start receiving treatment while healthcare workers follow the strict and appropriate regimen to avoid cross-infection between persons. After receiving the treatment, the COVID+ patient will exit the emergency department and be discharged from the hospital.
Three main resources were introduced to the case emergency department. Table 1, adapted from previous work [29], describes and summarizes these three resources under focus. These resources were introduced as intervening measures to combat the adverse risks associated with the pandemic situation and have been included in the computerized simulation model developed for the present study.

Patient Flow Performance Indicators
We incorporated several performance indicators into the model to benchmark the performance of the different scenarios in the study. The majority of them are indicators generally used in emergency department patient flow studies, while others were introduced in this study to capture the specific effects associated with the introduction of the pandemic interventions. Table 2 below lists the performance indicators and briefly describes them. The following paragraphs will provide an in-depth explanation of the patient flow performance indicators 'average length of stay' (ALOS) and 'crowding', which are of main interest for the results of this paper.

Patient Flow Performance Indicators at Focus
In this present study, two patient flow performance indicators have been allotted special attention for investigating the effect of increased patient infection rate on emergency department patient flow: the average length of stay (ALOS) and crowding.

Table 2. Overview of the selected patient flow performance indicators used in this study.

TTT
Time to Treatment: This performance indicator calculates the patient population's average time within the emergency department before treatment starts in their assigned treatment room. In this simulation study, the value is given on a per-patient basis, i.e., the number of hours of waiting per patient. A lower value is better, all else equal, as less waiting means each patient receives health care sooner for the condition they were admitted for. This performance indicator is reported for the aggregate total group of patient agents (Tot.), the patients classified with the COVID+ status, and the patients classified with the COVID− status.

Hours per patient
[h/pt]

ALOS
The average length of stay: This performance indicator shows the total amount of time spent in the emergency department on a per-patient basis. It tracks the time from entering the emergency department, the point of admission, until the patient has been discharged from the emergency department. The indicator is measured as an average over all the patient agents, meaning the value gives the average hours each patient spent in the emergency department. Similar to the previous patient flow performance indicator, TTT, this performance indicator is also reported for the aggregate total group of patient agents (Tot.), the patients classified with the COVID+ status, and the patients classified with the COVID− status.

Crowding
The patient flow performance indicator called 'crowding' calculates at every moment how many patients are present in the emergency department. This indicator, including its naming, comes from the case emergency department's patient flow improvement project. In the implementation of this metric, we show how much of the total simulated time the crowding has been beyond three different thresholds of interest, as defined by the case emergency department in its internal work on improving patient flow conditions and responding to overcrowding problems. The three threshold levels are set at 15, 25, and 30 patients in the emergency department.

Peak Crowding
Peak crowding, as its name suggests, concerns the peak of crowding, the previous performance indicator. The given number (#) tells how many patients were residing in the emergency department at the point in time when the number was at its daily maximum. In addition, the metric tracks the time of day the peak occurs. Thus, the first portion states the number of patients, while the second portion states the clock time at that given point.
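As an illustration of how the peak and its time of occurrence could be tracked, assuming a per-second series of concurrent patient counts (a sketch, not the model's actual code):

```python
def peak_crowding(concurrent):
    """Return (peak count, clock time of first occurrence) from a list with
    one concurrent-patient sample per second, starting at 00:00:00."""
    peak = max(concurrent)
    idx = concurrent.index(peak)       # first second at which the peak occurs
    h, rem = divmod(idx, 3600)
    m, s = divmod(rem, 60)
    return peak, f"{h:02d}:{m:02d}:{s:02d}"
```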

Time Start Use/Time in Use
This patient flow performance indicator states the time at which different resources of particular interest start being used during the day. In this performance measure's implementation, it has been chosen to track when the extra treatment rooms (E.TR.), triage (Tri.), and waiting zone (WZ) are used. Secondly, the performance indicator calculates how much the given resource is used in proportion to the total elapsed time of that day, e.g., a reading in the Tri. column of 11:00 means that the triage was not used that day until 11:00. The second value shows the percentage of that day the resource has been used by at least one patient.

Time Full
This performance measure, as the name suggests, keeps track of when particular resources of interest, the extra treatment rooms (E.TR.), triage (Tri.), and waiting zone (WZ), are full, meaning they cannot take any more patients. Similar to the previous performance indicator, this one also has dual tracking: first, the time when the given resource becomes full; second, the percentage of the elapsed time the resource has been full.

Time in WR
This performance measure calculates how long patients, on average, spend in the emergency department waiting room. The indicator is calculated as an average across all the patients, given in hours [h].

Times TR blocked for COVID+.
This performance indicator tracks the total number of times (#) a patient agent classified as COVID+ is blocked from entering a treatment room after pre-triage. This is a critical point for the emergency department, as it indicates the department is so heavily loaded with patients that it cannot immediately fast-track a patient to an isolated room.

Times TR Assigned
This performance measure tracks the number of times (#) a treatment room (TR) has been assigned to a patient. This refers to the term 'seize' used in the simulation modeling discipline: how many times a certain resource is utilized for the purpose of treating a patient. Each time a patient is assigned to a treatment room is resource-consuming, especially considering the sanitization of the rooms.

The average length of stay is perhaps the most widely known and common patient flow performance indicator in the overall literature of patient flow studies. The following equation (Equation (1)), brought from previous work [32], shows the mathematical logic behind the calculation of ALOS within the computerized simulation model, where the patient flow performance indicator is tracked at every timestamp t in the model run time. ALOS in this work is given in units of [h/pt], meaning hours per patient. As the equation shows, at every instance a patient gets discharged from the emergency department, a new patient agent is added to the summation of the equation. ALOS(t) sums all the patient agents' staying times (i.e., P_Time leaving ED(n) − P_Time entering ED(n)) and multiplies by 1/N_out(t) to get the average across the given patient agent population.
$$\mathrm{ALOS}(t) = \frac{1}{N_{\mathrm{out}}(t)} \sum_{n=1}^{N_{\mathrm{out}}(t)} \left( P_{\text{Time leaving ED}}(n) - P_{\text{Time entering ED}}(n) \right), \quad \forall\, P \in [P(0), \ldots, P(N_{\mathrm{out}}(t))] \tag{1}$$

where P(n) is the nth patient agent, P_x(n) is parameter x of the nth patient agent, N_in(t) is the number of patients who have entered the ED at time t, N_out(t) is the number of patients who have left the ED at time t, and t is the independent discrete time variable going from 00:00:00 to 23:59:59, running in increments of one second.
The average length of stay is calculated for the aggregate total group of patient agents (Tot.), the patients classified with the COVID+ status, and the patients classified with the COVID− status. Hence, the equation within the computerized simulation model is realized to serve the measurement of ALOS for the three different patient agent populations.
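A minimal sketch of this ALOS calculation, assuming hypothetical discharged-patient records of the form (entry time, exit time, status) with times in seconds (the record layout is an assumption for illustration):

```python
def alos(discharged, status=None):
    """Average length of stay in hours per patient [h/pt], per Equation (1):
    the sum of (time leaving ED - time entering ED) over discharged patients,
    divided by the number of discharged patients N_out(t)."""
    pts = [p for p in discharged if status is None or p[2] == status]
    if not pts:
        return 0.0
    return sum(t_out - t_in for t_in, t_out, _ in pts) / len(pts) / 3600.0

# Three hypothetical discharged patients (not study data).
patients = [(0, 7200, "COVID-"), (3600, 10800, "COVID+"), (0, 3600, "COVID-")]
```

Calling `alos(patients)` averages over all agents, while `alos(patients, "COVID+")` and `alos(patients, "COVID-")` restrict the measure to the two classified populations, mirroring the three reported groups.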

Explanation and Model Implementation of Crowding
In this study, the crowding measure was designed to reflect a hospital-wide patient flow framework for high patient prevalence that the case organization implemented under the name 'plan for high activity.' Thus, the operationalization and naming come from the case organization itself. The mentioned patient flow improvement framework points out three crucial levels of patient prevalence: 15, 25, and 30. Each of these levels has its specified package of initiatives to alleviate the higher crowding of patients.
In the computerized simulation model, the crowding performance indicator takes its outset in the number of concurrent patients, described as the number of patients that have been admitted less the number of patients that have been discharged, giving the following difference at any time t: N_in(t) − N_out(t). A simple if-statement keeps track of this difference and yields one if the difference is higher than one of the three levels mentioned above of 15, 25, and 30 patient agents. Based on this indicator function, denoted u(t), each timestamp of the model run-time is summed towards the current value in run-time. The sum is divided by the total run-time and thus yields a number between 0 and 1, which in turn is scaled to a 0-100% scale for presentational purposes.
\mathrm{crowding}_L = \frac{100\%}{T} \sum_{t=0}^{T} u_L(t), \quad u_L(t) = \begin{cases} 1, & N_{in}(t) - N_{out}(t) > L \\ 0, & \text{otherwise} \end{cases}, \quad L \in \{15, 25, 30\} \qquad (2)
where P(n) is the nth patient agent, P_x(n) is parameter x of the nth patient agent, N_in(t) is the number of patients who have entered the ED at time t, N_out(t) is the number of patients who have left the ED at time t, T is the total run-time, and t is the independent discrete time variable going from 00:00:00 to 23:59:59 in increments of seconds.
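The threshold-based time-share just described can be sketched as follows. This is a minimal Python illustration with a hypothetical helper name, not the study's model code; for brevity it takes a pre-computed series of concurrent-patient counts rather than the second-by-second run-time tracking.

```python
# Sketch of the crowding indicator of Equation (2): the share of timestamps
# at which the number of concurrent patients, N_in(t) - N_out(t), exceeds a
# threshold L (15, 25, or 30), scaled to a 0-100% range.

def crowding_percent(concurrent: list[int], level: int) -> float:
    """Percentage of timestamps whose concurrent-patient count exceeds `level`."""
    over = sum(1 for n in concurrent if n > level)  # sum of u_L(t) over the run
    return 100.0 * over / len(concurrent)           # scaled to 0-100%

# Usage: a toy run of 10 timestamps with the concurrent-patient count at each.
concurrent = [10, 14, 16, 20, 26, 31, 28, 24, 18, 12]
for level in (15, 25, 30):
    print(level, crowding_percent(concurrent, level))  # 70.0, 30.0, 10.0
```

In the model itself, the same running sum would be updated at every one-second timestamp and divided by the elapsed run-time.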

Experimental Design and Simulation Model Scenarios
This section presents the experimental decisions made during the development of the computerized simulation model utilized in this study. Overall, the present study's experimental design closely follows that of a previously published article within this research endeavor [29], except that this study focuses on the patient flow performance with increasing rates of infection among the admitted patients. This means that the different simulation runs vary in one parameter of particular interest: the patient infection rate (PIR). The reported results show the performance of the emergency department patient flow performance indicators with increasing PIR. In contrast to the performance indicators presented in the previous subsection, Section 2.4, the PIR is a constant, meaning it is time-invariant within each simulation run of the model, although it is incremented, in the range from 0% to 50%, from one simulation to the next. Below is Equation (3), describing the operationalization of the PIR in the simulation model of this study.
\mathrm{PIR} = \frac{N_{in}^{\mathrm{COVID+}}}{N_{in}^{\mathrm{Tot.}}} \cdot 100\% \qquad (3)
where PIR is the patient infection rate [%], N_in^COVID+ is the number of patient agents inserted into the model assigned the COVID-19+ status, and N_in^Tot. is the total number of patient agents inserted into the model during the total runtime of the simulation. PIR ∈ {0, 5, ..., 50}% denotes the array of values taken by the PIR, ranging from 0% to 50% in 5-percentage-point increments and yielding 11 simulation runs for each scenario.
This study aims to evaluate the differential outcomes of the patient flow metrics under increasing PIR. As we are studying the impact of different PIRs on the patient flow in the ED, the PIR parameter plays a central role in this study; we will therefore go into detail on the underlying mechanism of this parameter in the computerized simulation model. Within the simulation model, the PIR is the parameter dictating which proportion of patients are found to be COVID+ in the pre-triage. During the run-time of the simulation, patient agents arrive according to the list of patients described in the previous subsection, Section 2.2-Types of Data Utilized in Study. In the pre-triage, patient agents are categorized as COVID+ at a rate of PIR and, conversely, as COVID− at a rate of 1 − PIR. The PIR thus represents the probability for each patient to be categorized as COVID+.
To accomplish the experimental needs of this study, we had to ensure that each scenario ran with the intended PIR. This was achieved by making the patient agent classification a self-regulating process depending on the scenario's predetermined level of patient infection rate. If the realized patient infection rate is less than the nominal patient infection rate, the next patient agent entering the model will be classified as COVID+, and vice versa. This control mechanism thus makes the patient infection rate non-stochastic. Equation (4) illustrates how the logic is programmatically carried out by a simple if-statement applied to each newly injected patient agent in the simulation model.
\mathrm{class}(P(n)) = \begin{cases} \mathrm{COVID+}, & N_{in}^{\mathrm{COVID+}}(t) / N_{in}^{\mathrm{Tot.}}(t) < \mathrm{PIR} \\ \mathrm{COVID-}, & \text{otherwise} \end{cases} \qquad (4)
with the same notation as in Equation (3), here evaluated on the patient agents inserted into the model up to time t.
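The self-regulating, non-stochastic classification described above can be sketched as follows. This is an illustrative Python sketch of the stated logic, not the study's model code; the function and variable names are hypothetical.

```python
# Sketch of the self-regulating COVID+/COVID- classification: each newly
# injected patient agent is classified COVID+ only while the realized
# infection rate among the agents inserted so far is below the scenario's
# nominal PIR; otherwise it is classified COVID-.

def classify_patients(n_patients: int, pir: float) -> list[str]:
    labels = []
    n_covid_pos = 0                              # N_in^COVID+(t)
    for n_in_tot in range(1, n_patients + 1):    # N_in^Tot.(t), including this agent
        # Realized rate below nominal PIR -> classify this agent COVID+.
        if n_covid_pos < pir * n_in_tot:
            labels.append("COVID+")
            n_covid_pos += 1
        else:
            labels.append("COVID-")
    return labels

# Usage: 20 patients at PIR = 25% yields exactly 5 COVID+ agents.
labels = classify_patients(20, 0.25)
print(labels.count("COVID+"))  # 5
```

Because the classification is deterministic rather than sampled, every simulation run hits the scenario's nominal PIR exactly, as the control mechanism in the text requires.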
Thus, as will be presented in the results, Section 3-Results and Discussion, some of the mentioned patient flow performance indicators will be given distinctly for the group of patient agents categorized as infected by the SARS-CoV-2 virus (COVID+), for those not categorized as infected (COVID−), and lastly, in order to track the overall patient flow performance, for a combined category covering the whole patient population, i.e., both of the patient groups. Table 3 summarizes these patient agent groups. Table 3. Overview of the different classes of patient agents in the computerized simulation model.

Abbreviation and description of model inputs:

Tot. - Total patient population: denotes both patient groups, those classified as COVID+ and those classified as COVID−. This category was made to show the combined effects of the total patient group.

COVID− - The population of patient agents deemed in the pre-triage not to be infected by COVID-19. After the pre-triage control, these patients progress through the emergency department in the usual manner.

COVID+ - The portion of patient agents found in the pre-triage to be infected by COVID-19. As these patient agents pose an increased risk associated with the infection status, they are fast-tracked to a treatment room.

PIR - Patient infection rate: the variable determining the proportion of COVID+ patients inserted in the different simulation runs. In the present study, the PIR is varied between 0% and 50%, with a 5-percentage-point increment between each run.
The next issue of importance with regard to the experimental design of the conducted study is the sampled days from which the patient influx data are gathered. Two days are represented in the study. We look at the individual days rather than their average; this was an explicit choice made to ensure that we could see the patient flow characteristics of two representative days and not a mere average that could mask much of the insightful information particular to those days. For example, depending on the influx profile, a parameter such as peak crowding (explained in Table 2) would be reduced by averaging and thus not give the full picture of how a high-demand day takes a toll on the emergency department resources.
Two representative days were chosen in this study: Day 1, 'Average patient influx day', and Day 2, 'High patient influx day'. The first day, Day 1, is overall a day where the demand is at a high pace, though not exceeding what is regarded as normal, leading the emergency department to pass the 30-patient crowding threshold, spiking around 2 o'clock with a patient crowding of 37 patients. Although the emergency department operates with a generally high crowding this day, the day is not unusual for the case emergency department. The total number of patients admitted to the emergency department on this day comes to 104, which is reasonably close to the daily average of 100 patients for this case emergency department. The second day (Day 2) is a day of an even higher pace of patient influx. As shown in the results section of this paper, Section 3-Results and Discussion, there is a mismatch between the capacity and the demand on this day, yielding a higher patient prevalence throughout the day. Patients admitted this day total 121 which, in contrast to Day 1, is substantially above the daily average of 100 admitted patients a day. Figure 5 shows graphical data from previous work [25,29] on the two days for the case emergency department's internal performance metric 'crowding', a patient flow performance indicator previously defined in Table 2.
Figure 5. Graphs of the patient crowding at the emergency department for the two days of this study (panels (a) and (b)). The coloring in the graphs represents the three different crowding levels as presented in Equation (2).
The last major point of the experimental design of this study is the model resource configurations. At the onset of the pandemic, the emergency department managers added a set of rules for the patient flow as well as two additional physical resources to the emergency department: a dedicated waiting zone (WZ) and a set of extra treatment rooms (E.TR.). We have divided the configurations into four different scenarios, where the first scenario (Sc. 1) constitutes the emergency department resources as they were prior to the pandemic situation. The second scenario (Sc. 2) adds the waiting zone (WZ) to the emergency department (please refer to the illustrated resources in the blueprint of Figure 4). Scenario 3 (Sc. 3) does not include the waiting zone but introduces the extra treatment rooms (E.TR.). Lastly, the final scenario (Sc. 4) combines the resources of Sc. 2 and Sc. 3, running with both the waiting zone and the extra treatment rooms (WZ and E.TR.) included in the simulation model. To address the research aim of studying the impact of increased patient infection rate, all the aforementioned scenarios are run in series where the patient infection rate (PIR) starts at 0% and increases in 5-percentage-point increments up to the maximum patient infection rate of 50%. Table 4 lists and summarizes the four scenarios conducted in the present study.

Table 4 is organized with the column headings 'Model and Resource Configuration', 'WZ', 'E.TR.', and 'PIR [%]', where the WZ and E.TR. columns mark which of the added resources are enabled in each scenario series. As an example of an entry, the series adding the waiting zone (WZ) is described as a simulation series run with a single parameter variation: peri-pandemic operation with differing proportions of patient infection rates, 0-50%, with 5-percentage-point increments between each simulation run.
This series of scenario runs has the waiting zone (WZ) enabled for utilization when the circumstances are as described in Section 2.4-'Case Emergency Department, Interventions, and Resources' and listed in Table 1. These scenarios were run with a PIR going from 0% to 50% in 5-percentage-point increments, yielding 11 simulation runs for each of the seven configurations tabulated in Table 4, across the two different days, Day 1-'Average patient influx day' and Day 2-'High patient influx day', meaning a total of 154 simulation runs were performed for the results reported in this study. The top PIR limit of 50% was set as a theoretical maximum, as any higher patient infection rate would very likely fully alter the approach of the emergency department leaders since, in practice, it would negate most of the utility of segregating the COVID+ and COVID− patients.
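The run grid described above (11 PIR values per series, the seven configurations of Table 4, and the two sampled days) can be enumerated as in the following sketch. The configuration labels are hypothetical placeholders, not the study's code.

```python
# Sketch of the experimental grid: a PIR sweep from 0% to 50% in
# 5-percentage-point steps (11 values), crossed with the seven resource
# configurations of Table 4 and the two sampled days, reproducing the
# reported total of 154 simulation runs.

from itertools import product

pir_values = list(range(0, 55, 5))                     # 0, 5, ..., 50 -> 11 values
configurations = [f"Config {i}" for i in range(1, 8)]  # 7 rows of Table 4 (assumed labels)
days = ["Day 1 - average influx", "Day 2 - high influx"]

runs = list(product(configurations, days, pir_values))
print(len(pir_values), len(runs))  # 11 154
```

Enumerating the grid up front makes the arithmetic explicit: 7 configurations x 2 days x 11 PIR values = 154 simulation runs.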

Results and Discussion
Due to the inseparable nature of the results and their discussion in the present research, this section combines both elements. It reports on three groups of results and discusses each alongside its presentation. First, the average length of stay (ALOS) results for the four scenarios (Sc. 1-Sc. 4, as described in Section 2.5-'Experimental Design and Simulation Model Scenarios' and Table 4) over several PIR levels are presented for a day of average patient influx (hereinafter denoted 'Day 1') and for a day of high patient influx (hereinafter denoted 'Day 2'), followed by histogram charts that illustrate the variation in patient agent length of stay (LOS) to explain the fluctuation phenomenon observed in some of the resulting curves. Second, the crowding results for the four scenarios over several PIR levels under the same two influx conditions are presented. This results and discussion section limits its presentation to two of the patient flow performance indicators, which have been detailed in Section 2.4. The full simulation output across all the scenarios, showing every patient flow performance indicator implemented in the computerized simulation model, is disclosed in the appendix section, Appendix A, of the present manuscript, Tables A1-A4. The graphs presented hereunder are created from the numerical output in the mentioned tables. The following paragraphs provide this output as well as a thorough analysis of the computerized simulation output yielded from running Scenarios Sc. 1 to Sc. 4.

Results from Running Scenario 1-No Added Resources
Per the experimental design detailed in Section 2.5-'Experimental Design and Simulation Model Scenarios', Scenario 1 (Sc. 1) constitutes the scenario configuration where no resources are added to the emergency department and no changes are made other than the introduced patient flow intervention of the prioritization policy, i.e., fast-tracking and isolation of the COVID+ patient agents in order to reduce the risk of intradepartmental cross-infection. This means there is no use of the extra treatment rooms or the waiting zone, which will be tested in the subsequent scenarios.
An initial observation of the scenario output graphs in Figure 6, which depict the differential outcome for the two patient agent groups, is that the average length of stay for COVID+ patients (see Figure 6, cell 6.1.1, ALOS-COVID+) is consistently much lower than that of the COVID− patient agents at all levels of PIR. The difference between the two patient agent groups increases along with the increased level of PIR. The ALOS for COVID+ patients does not increase dramatically with increasing PIR, but the ALOS for COVID− patients does increase sharply. By closely examining the differences between consecutive values for the ALOS-COVID− patient agents, we can observe that the values generally increase as the PIR increases. However, the rate of increase is not consistent throughout the development of PIR from 0% to 50%. Initially, in the portion where PIR ranges from 0 to 25%, the development takes on a generally convex pattern, meaning that each increment yields an acceleration of ALOS. Towards the end of the curve, at PIR of 30% and above, the increase becomes relatively smaller for each increment, and the curve takes on a concave form. The portion from PIR 25% to 30% appears to be a breaking point where the system response shifts from a convex to a concave pattern, yielding a slightly unexpected indent in the graph.
Examining the values for the COVID+ patient agents (see Figure 6, cell 6.1.1, ALOS-COVID+), contrary to the COVID− patient agents, the data here do not show a clear trend or consistent pattern of increase or decrease along the increments of PIR from 0 to 50%. Instead, the output values for ALOS-COVID+ vary within a relatively narrow range. There are portions where the graph both increases and decreases, indicating no strong directional movement. By implication, for Scenario 1 on Day 1, the throughput for COVID+ patients is consistent, as these patient agents are not stalled by the fast-tracking process.
Looking towards the other patient flow performance indicator, crowding (Figure 6, cell 6.1.2), the results reflect a similar trend to that of ALOS-COVID−: crowding also increases along with an increased level of PIR. Due to the lower patient influx, the 30-patient threshold, crowding 30, is not crossed until the patient infection rate hits 40%, and from that point, it registers only very moderate percentages, reaching 2.429% at PIR = 50%. Although crowding 30 represents a critical level of crowding, the output values suggest that this threshold is reached only at very brief moments during the simulated day. Crowding 25 goes from 20.175% at PIR = 0% to 40.688% at PIR = 50%, with no particular pattern other than an approximately linear development. The gradual increase corresponds to a 2.05-percentage-point increase in crowding 25 for each 5-percentage-point increment of PIR.
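The quoted average increment follows directly from the two endpoint values, spread over the ten 5-percentage-point steps between PIR = 0% and PIR = 50%:

```latex
\frac{40.688\% - 20.175\%}{10} \approx 2.05 \text{ percentage points per 5-pp step of PIR}
```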

The graphs in the second row of Figure 6 show the output from the second day, Day 2, which is, as previously presented, a day of higher patient agent influx to the emergency department. It is apparent that the emergency department is at a far higher occupancy and utilization than on the first day, Day 1. Overall, the graphs deviate more from a linear development than those of Day 1, which makes sense in that days of higher patient influx give higher variability in patient flow performance.
Looking closer at the data, the average length of stay for the COVID− patient population (Figure 6, cell 6.2.1, ALOS-COVID−) ranges from 3.104 h at a patient infection rate of 0% to 4.465 h at the other extreme of 50%, which means that most patients spend considerably more time within the treatment room at the highest patient infection rate. The convex trajectory observed on Day 1 is not present in the graph for Day 2; the output is overall more volatile and unpredictable.
The second graph for Day 2 (Figure 6, cell 6.2.2) shows the crowding (see Section 2.4 for the definition) for the day of high patient influx. In contrast to Day 1, a far more high-pressure situation occurs in the emergency department. At the onset, with a 0% patient infection rate, crowding 30 is already at 27%, a high-pressure situation straining the emergency department. With increasing patient infection rate, crowding 30 approximately plateaus at a PIR of 30%, which appears to be a saturation point of the patient infection rate's effect on the crowding outcome, hitting 37-38% from this point onwards.
What is striking in a closer comparison of the graphs from Day 1 with those from Day 2 (the first and second rows of Figure 6, respectively) are the deviations from a linear development in the output. For Day 1, the deviations are so small that they could, under most circumstances, be written off as being within a reasonable margin of error. The deviations on Day 2 are far more pronounced. This could point towards a tendency for the patient flow process in the emergency department to turn more non-linear, with reduced predictability, when the patient influx and PIR increase. All in all, by running Sc. 1, we see the difference between the average patient influx day and the high patient influx day, and that increased PIR amplifies this difference, especially when the PIR is at the higher end of the scale. Scenario 1 showed that during a day with a moderate influx of patients, it is only at a 45% patient infection rate that the emergency department reaches a critical crowding level requiring the highest level of attention. It is worth noting that Scenario 1 is the scenario where no resourcing initiatives are taken to alleviate the situation; those are investigated in the subsequent scenarios, Sc. 2-Sc. 4, presented in the following subsections.

Results from Running Scenario 2-Added Waiting Zone
Scenario 2 is the series of scenarios where a waiting zone (WZ) is added to the emergency department. In the patient flow operation, the waiting zone is used under certain circumstances: all the treatment rooms must be occupied, and another COVID+ patient agent must arrive at the emergency department. In this case, the COVID− patient that has spent the most time in a treatment room, though no less than 60 min, is released from the treatment room and goes to the waiting zone. In turn, the incoming COVID+ patient, screened at the pre-triage, is fast-tracked to the treatment room in order to avoid or minimize the risk of cross-infection with other patients or healthcare workers within the emergency department.
Looking at the output graph showing the ALOS for Scenario 2, Day 1 (Figure 7, cell 7.1.1), we observe that the ALOS-COVID− curve has the same outset at a 0% patient infection rate as that of Scenario 1 (Figure 6, cell 6.1.1), both starting at exactly 2.7 h. However, the curve in Scenario 2 is steeper, meaning that for each increment (5-percentage-point increase in patient infection rate), the average length of stay for the COVID− patient agents (ALOS-COVID−) increases more. This trajectory indicates that the setup constituted by the resource configuration in Scenario 2 is more sensitive to the increased patient infection rate. Overall, the average length of stay for COVID+ patient agents is still much lower than for COVID− patient agents and is also lower than the ALOS for COVID+ patients in Scenario 1, Day 1 (Figure 6, cell 6.1.1). However, the improvement for COVID+ patients comes at the expense of COVID− patients: their ALOS is higher than in Scenario 1, Day 1, and the disadvantage for COVID− patients relative to COVID+ patients increases with increasing PIR.

When observing the outcome of the crowding performance indicator for Scenario 2, Day 1 (Figure 7, cell 7.1.2), the crowding starts at the same point as in the previous scenario, Scenario 1: 0%, 20.2%, and 42.5% for crowding 30, crowding 25, and crowding 15, respectively, at a 0% patient infection rate. This has a simple explanation: when the PIR is 0%, no patients entering the emergency department qualify for the waiting zone to be utilized; thus, in the case of 0% PIR, the two scenarios have the same outcome. However, taking a closer look at the development with advancing patient infection rate, the curves, similarly to the ALOS in Sc. 1 vs. Sc. 2, are steeper, although we do not see the performance indicator plateauing here. In fact, crowding 15 and crowding 25 increase along with the PIR for all the increments from 0% to 50% patient infection rate. Here, we also see, as in Scenario 1 for Day 1, that crowding 30 does not appear before the patient infection rate reaches 40%.
Turning towards the second row of Figure 7 and analyzing the outcome of the high patient influx of Day 2, we unsurprisingly see more stress in the emergency department overall. The average length of stay (Figure 7, cell 7.2.1) is in this scenario more unstable, as it deviates, even more than in Scenario 1, from a linear development across the 0% to 50% PIR range. In fact, the scale of this graph had to be changed in order to capture the final ALOS for COVID− patients, which reaches 5.25 h at the 50% patient infection rate, illustrating the increased burden on the COVID− patients, whereas the COVID+ patients have a very stable trajectory throughout the increase of PIR from 0% to 50%.
Looking at the crowding patient flow indicator for Day 2, the difference is pronounced when compared to the graph for Day 1. More interestingly, however, the impact is not as drastic as on Day 2 of Scenario 1: Scenario 2 has a peak in crowding 30 of 39.4%, while the same peak in Scenario 1 is 37.9%, attesting to a lesser degree of impact from the added waiting zone on crowding than on the average length of stay.
In summary, running Scenario 2 has shown the configuration of resources with the added waiting zone. Judging by these two selected patient flow performance indicators, the waiting zone may appear as a net burden for the emergency department patient flow, seriously penalizing the COVID− patients with a drastically increased average length of stay. Much of this increase is naturally explained by the time COVID− patients had to wait in the waiting zone, adding to their overall stay time. However, even though this initiative yields a net deterioration of the patient flow performance indicators, the aim of implementing the waiting zone is to accomplish a strictly needed service: enabling swift fast-tracking of COVID+ patients. It is quite interesting to observe the disparate sensitivity of the two indicators, ALOS and crowding, where crowding is less affected by the new configuration constituted by Scenario 2. The average length of stay is impacted much more, pointing out that the impact on individual patients caused by the change is much larger than what is seen when looking at the department as a whole.

Results from Running Scenario 3-Added Extra Treatment Rooms
Scenario 3 corresponds to the resource configuration where extra treatment rooms are added to the emergency department. The patient flow logic matches that of Scenario 1, as no changes are made to the patient flow logic; the emergency department simply contains four more treatment rooms, which are put into use once the regular treatment rooms are fully utilized. Thus, the total number of treatment rooms in this series of simulation runs is 17.
When looking at Day 1 for ALOS (Figure 8, cell 8.1.1), the results in this scenario are very favorable for both the COVID− and COVID+ patient groups. The ALOS for COVID− patients registers even lower than for COVID+ patients across the full scale of patient infection rates from 5% to 50%. This is contrary to the results yielded in both Scenario 1 and Scenario 2. The most remarkable feature of this scenario is the flatness of the curves: the ALOS for both patient groups remains at a stable, low level, even at increasing levels of PIR. Hence, this intervention seems effective in that the ALOS is far less sensitive to increases in patient infection rates.
The same can be said for the crowding indicator, which is also much lower in this scenario than in the two previous ones, Scenario 1 and Scenario 2. The sensitivity towards the patient infection rate is lower for crowding 15 and crowding 25. Even more notably, the patient flow indicator crowding 30 is here completely flat at 0% for every patient infection rate from 0% to 50%, meaning a much more favorable situation for the emergency department.

Results from Running Scenario 4-Added Waiting Zone and Extra Treatment Rooms
Scenario 4 denotes the series of simulation runs, which combines the efforts from Scenario 2 with the introduced waiting zone and Scenario 3 with the introduced extra treatment rooms in order to measure the combined effect of those two interventions. Thus, the patient flow logic is a combination of that of Scenario 1 and Scenario 2.
By running this series of simulation runs, we see that the average length of stay during the day of moderate patient influx (Figure 9, cell 9.1.1) is, for the COVID+ patients, even more stabilized but overall shows only minor changes from that of Scenarios 2 and 3. For COVID− patients, the average length of stay shows a minor increase compared to Scenario 3. However, the change is only visually recognizable at the higher patient infection rates of 30% and above. This is, of course, explained by the fact that the waiting zone constitutes an additional delay point for certain subsets of COVID− patients.
Referring to the simulation output for crowding in for Day 1 in Scenario 4 ( Figure 9, cell 9.1.2), we see a graph that, for all intents and purposes, is identical to that for Day 1 Scenario 3 (Figure 8, cell 8.1.2). Thus, by implication, compared to Scenario 2 (Figure 7,  cell 7.1.2), we see the same overall improvements compared to the base case Scenario 1.
Another observation to point out in Scenario 3, Day 2 (Figure 8, cell 8.2.1) is that ALOS for COVID+ is remarkably stable at a fairly low level even with an increase of PIR (as on Day 1, the average patient influx day). For COVID− patients, however, ALOS is higher than on the day of average patient influx (Figure 8, cell 8.2.1) and upholds the regular pattern found in the other outputs in that it rises along with increasing patient infection rate. Still, ALOS, even for COVID− patients, remains lower than in Scenarios 1 and 2 for the higher patient influx day (Day 2).
Crowding is also higher on the day of high patient influx for Scenario 3 (Figure 8, cell 8.2.2) compared to Day 1. We see here that patient infection rates over 5% make the emergency department reach the critical crowding 30 threshold, and at higher patient infection rates it increases further, reaching a final value of 17.5% at a 50% patient infection rate. This characterizes a high-intensity day, but at a far lower level than observed in the previous scenario.
A possible explanation for this observed difference between Day 1 and Day 2 might be found in the fact that the data representing the influx of patients on Day 2 is higher (20%) than on Day 1. A way to understand the results can be that if the influx (as in Day 1) is within the available capacity of the emergency department that day (including the added four extra treatment rooms), then the ED is able to manage the patient flow well with low levels of crowding (as seen in Figure 5, crowding in Sc. 1, Day 1), even at high levels of PIR. However, if the level of patient influx (as on Day 2) goes over the capacity of the emergency department, it will lead to more chaotic, non-linear behavior, higher levels of crowding, and a higher level of ALOS for COVID− patients, rising with increasing PIR. However, even on the day of high patient influx (Day 2), in Scenario 3 (introducing extra waiting rooms), ALOS levels for COVID− patients are still lower than in Scenarios 1 (no added resources) or Scenario 2 (waiting zone) at comparable PIR levels. In addition, even with the increased patient influx as on Day 2, ALOS for COVID+ patients remains at a fairly stable level even with increasing PIR.
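The crowding indicators discussed above can be illustrated with a minimal sketch. This is our own simplified reconstruction, not the model's actual implementation: we assume each indicator reports the share of the day during which ED occupancy exceeds a threshold of 15, 20, or 30 patients.

```python
def crowding_share(occupancy, threshold):
    """Share of sampled time points at which ED occupancy exceeds a threshold.

    `occupancy` is a list of patient counts sampled at regular intervals
    over one simulated day. Simplified illustration of the crowding 15 /
    crowding 20 / crowding 30 indicators, not the study's actual code.
    """
    if not occupancy:
        return 0.0
    over = sum(1 for n in occupancy if n > threshold)
    return over / len(occupancy)

# Hypothetical occupancy samples for one simulated day (one sample per time slot).
day = [8, 12, 14, 16, 18, 22, 25, 21, 17, 13, 9, 7]

crowding_15 = crowding_share(day, 15)  # exceeded in 6 of 12 samples -> 0.5
crowding_20 = crowding_share(day, 20)  # exceeded in 3 of 12 samples -> 0.25
crowding_30 = crowding_share(day, 30)  # never exceeded -> 0.0, the "flat at 0%" case
```

Under this reading, a day whose occupancy never reaches the highest threshold yields a crowding 30 of exactly 0%, matching the flat curve described for Scenario 3, Day 1.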

Results from Running Scenario 4-Added Waiting Zone and Extra Treatment
Scenario 4 denotes the series of simulation runs that combines the waiting zone introduced in Scenario 2 with the extra treatment rooms introduced in Scenario 3 in order to measure the combined effect of those two interventions. Thus, the patient flow logic is a combination of that of Scenario 2 and Scenario 3.
By running the series of simulation runs, we see that the average length of stay during the day of moderate patient influx (Figure 9, cell 9.1.1), for the COVID+ patients, is even more stabilized but overall exhibits only minor changes from that of Scenarios 2 and 3. For COVID− patients, the average length of stay exhibits a minor increase compared to Scenario 3. However, the change is only visually recognizable at the higher patient infection rates of 30% and above. This is explained by the fact that the waiting zone constitutes an additional delay point for certain subsets of COVID− patients.
Referring to the simulation output for crowding for Day 1 in Scenario 4 (Figure 9, cell 9.1.2), we see a graph that, for all intents and purposes, is identical to that for Day 1 of Scenario 3 (Figure 8, cell 8.1.2). Thus, by implication, compared to Scenario 2 (Figure 7, cell 7.1.2), we see the same overall improvements relative to the base case, Scenario 1.
More interesting is to observe the effect of the combined contribution of the waiting zone and the extra treatment rooms on the higher-intensity Day 2 (Figure 9, cell 9.2.1). Looking at the average length of stay, we see that compared to Scenario 3 (Figure 8, cell 8.2.1), ALOS for COVID+ patients has stabilized at its theoretical minimum. This means that no COVID+ patients had to wait before getting fast-tracked to an available treatment room. The other side of this coin, however, is that this comes at the cost of an increased average length of stay for the COVID− patients, which follows a steep trajectory across the increasing patient infection rates.
Throughout the runs of Scenario 4 on Day 2, we again see that the crowding holds equal to that of Scenario 3. Considering that the emergency department can overall hold six more patients when taking into account the full capacity of the waiting zone, a slight increase compared to Scenario 3 would in fact be expected.
Comparing the result from Scenario 4 to the previous result from Scenario 3, one may, by looking at Day 1 in isolation, conclude that the two patient flow operations are indistinguishable in terms of the average length of stay and crowding across the variety of patient infection rates. However, this only holds because we are looking at a configuration of the emergency department that is well balanced with the demand of this given day with the lower patient influx. What also needs to be taken into consideration when comparing to Scenario 3 is that, even though the parameters develop almost identically across the increase of patient infection rate, the emergency department in Scenario 4 can hold six more patients in a productive position, meaning that the equal output implies that Scenario 4 yields a better score than Scenario 3.
Upon evaluating Day 2, which experienced a greater patient influx, subtle overall changes were observed. The emergency department demonstrated an improved capacity to manage the heightened patient volume, albeit at the expense of COVID− patients. A significant advantage of this scenario is the stable throughput of COVID+ patients, a substantial improvement given the potential severity of cross-infection within emergency department settings. The exact values underlying these results are provided in Tables A1-A4.
Discussing the Observed Non-Linearities in Scenarios 1-4
In Figures 6-9, the ALOS curves for Scenarios 1-4 (particularly cells 6.1.1, 6.2.1, 7.2.1, 8.2.1, and 9.2.1) show similar overall trends to those discussed for the conditions of average patient influx (Day 1), but with non-linear behavior (fluctuations where ALOS for PIR = 15% is lower than ALOS for PIR = 10%). The average length of stay for COVID− patients at PIR = 10% for Scenario 2 on Day 2 is about 3.5 h. However, it is 3.4 h at PIR = 15%, where we should expect higher rather than lower ALOS. Let us look into the histogram of the length of stay and observe why the average value fluctuates in this way. Histograms are an effective tool for identifying the shape of a distribution. The shape of the distribution is a fundamental characteristic of the sample that can determine which measure of central tendency best reflects the center of the data. In Figure 10, the histograms for Scenario 2 (Figure 10, cell 10.1.1 and cell 10.1.2) show a multinomial distribution, where there are more than two outcomes. For example, in our case, some patient agents stay 10,000 s, others 12,000 s, others 15,000 s, and others 20,000 s. Thus, taking the average of a multinomial histogram is ineffective as a measure of central tendency.
Understandably, COVID− patients (represented by the green bars in the histogram) might have a different average length of stay, as some COVID− patients stay in both treatment rooms and the waiting zone, while some wait only in treatment rooms. In Scenario 3 (Figure 10, cell 10.2.1 and cell 10.2.2), where more treatment rooms were utilized, the length of stay histograms have a better distribution (almost binomial), as offering more rooms reduces the stay time in the waiting zone. Thus, taking the average in Scenario 3 is more effective in representing the central tendency and tracking the change over several PIR rates.
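Why a single average misleads here can be shown with a small, self-contained sketch. The cluster sizes below are invented for illustration; only the four stay-time clusters of roughly 10,000, 12,000, 15,000, and 20,000 s come from the histogram discussion above.

```python
import statistics

# Hypothetical length-of-stay sample (seconds) with four clusters, echoing
# the histogram discussion: stays near 10,000 s, 12,000 s, 15,000 s, 20,000 s.
stays = [10_000] * 40 + [12_000] * 25 + [15_000] * 20 + [20_000] * 15

mean_stay = statistics.mean(stays)   # 13,000 s
mode_stay = statistics.mode(stays)   # 10,000 s, the largest cluster

# The mean lands between the clusters, where few patients actually are:
in_cluster = any(abs(mean_stay - c) < 500 for c in (10_000, 12_000, 15_000, 20_000))
```

The mean of 13,000 s does not coincide with any of the four clusters, so reporting it as "the" length of stay describes almost no individual patient; this is the sense in which a single ALOS becomes ineffective for a multi-peaked distribution.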
Our findings above confirm the findings of a previous study [29] that the intervention of implementing the waiting zone is the most effective for COVID+ patients, but may not be the most effective intervention in total, given that it negatively affects the patient flow of COVID− patients. What we now also see in the current study is that this negative effect worsens with increasing PIR. The introduction of extra treatment rooms, however, leads to a beneficial effect both for COVID+ and COVID− patients, as we saw in [29], but the new finding in the current study is that this beneficial effect is less dependent on PIR, provided that the patient influx stays within the capacity of the emergency department. Thus far, the intervention of "extra treatment rooms" we had introduced in our simulation was set at four extra treatment rooms. As our simulation results above show, four extra treatment rooms were sufficient to create an ED setup where ALOS was fairly insensitive to increasing PIR on a day with average patient influx (Day 1), but not on a day of high patient influx (Day 2).

The Effect of Extra Treatment Rooms on the ALOS and Crowding
A relevant question that thus arises is: How many extra treatment rooms does one need to introduce to obtain the best benefit of this intervention? To pursue this question, we decided to run extra sets of simulations on our input data sets of Day 1 and Day 2 with a varying amount of extra treatment rooms as the intervention. We therefore ran additional simulations of Scenario 3 above, for both Day 1 and Day 2, but now varying from two extra treatment rooms to six extra treatment rooms. These scenarios are denoted Scenario 3.2 (two extra treatment rooms), Scenario 3.4 (four extra treatment rooms), and Scenario 3.6 (six extra treatment rooms). We see that in the emergency department, when the resources and demand are matched as on Day 1, most of the PIRs, at least up to 35%, do not yield a critical crowding level. On the high patient influx day, Day 2, the picture is quite different, as the crowding is high within the whole range of PIRs.
To investigate the notable effects of the introduced extra treatment rooms more deeply, we chose to make a more granular analysis through a series of scenarios with different amounts of extra treatment rooms. The scenarios here are denoted 3.X, as they all constitute variations of Scenario 3 (please refer to Table 4 for the scenario design, where this is also explained), the scenario in which we introduced the extra treatment rooms. Scenarios 3.2, 3.4, and 3.6 test the patient flow performance in the emergency department with two, four, and six extra treatment rooms, respectively. It is worth noting that Scenario 3.4 is the same as the original Scenario 3, as it has four extra treatment rooms and the same resource configuration.
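The Scenario 3.X series amounts to a parameter sweep over the number of extra treatment rooms and the patient infection rate. The sketch below shows such an experiment grid only; `run_scenario` is a hypothetical placeholder for invoking the actual hybrid agent-based model, which is not reproduced here.

```python
import itertools

EXTRA_ROOMS = [2, 4, 6]                          # Scenarios 3.2, 3.4, 3.6
PIR_VALUES = [i / 100 for i in range(0, 55, 5)]  # PIR from 0% to 50% in 5% steps
DAYS = ["day1_average_influx", "day2_high_influx"]

def run_scenario(extra_rooms, pir, day):
    """Hypothetical stand-in for one simulation run; the real model would
    return patient flow indicators such as ALOS and crowding shares."""
    return {"extra_rooms": extra_rooms, "pir": pir, "day": day}

# One run per (rooms, PIR, day) combination, labelled Scenario 3.X.
results = {
    (f"Scenario 3.{rooms}", pir, day): run_scenario(rooms, pir, day)
    for rooms, pir, day in itertools.product(EXTRA_ROOMS, PIR_VALUES, DAYS)
}

n_runs = len(results)  # 3 room settings x 11 PIR values x 2 days = 66 runs
```

Organizing the runs as a full grid like this is what allows the ALOS and crowding curves to be compared across room counts at matched PIR levels, as done in Figures 11 and 12.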

Patient Flows at an Average Patient Influx Day (Day 1)
At two extra treatment rooms, Scenario 3.2, Day 1 showed less favorable curves (Figure 11, cell 11.1.1), with ALOS for COVID− patients being somewhat higher and climbing with increasing PIR at a steeper rate, as opposed to the flatter curves for Scenario 3.4 (four extra treatment rooms) (Figure 11, cell 11.1.2). In addition, crowding was at an overall higher level with two extra treatment rooms (Figure 11).
The amount of extra treatment rooms one needs to introduce to obtain the desired beneficial effect on patient flow may thus depend on the level of the ED's daily patient influx. On the average patient influx day (Day 1) in our simulation, this effect was obtained when four extra treatment rooms were introduced. A further benefit was achieved with six extra treatment rooms. On the day of high patient influx (Day 2), six extra treatment rooms were needed to achieve a beneficial effect on ALOS and crowding comparable to that with four extra treatment rooms on a day of average patient influx (Day 1).
In addition, this set of curves indicates that when the patient influx is higher than the ED's capacity to manage it, the patient flow performance indicators become elevated, tend to increase with increasing PIR, and exhibit non-linear behavior. The degree of non-linear behavior diminishes when the amount of treatment rooms becomes adequate to handle the patient influx (the curves on the right-hand side of Figures 11 and 12).

Discussion of Collateral Impacts of the COVID-19 Pandemic
Previous studies have shown that the COVID-19 pandemic had a significant adverse collateral impact on the operation of emergency departments. For example, Cho et al., in the case of Korean emergency department patients, identified collateral impact on the emergency department patients in the form of increased in-house mortality during the pandemic [35]. In their conclusions, they recommend that further investigation should be performed to analyze the causes of the increased fatality. Furthermore, Cummins et al., in the case of an Irish emergency department, also found that the emergency department's patient flow performance was adversely impacted by the COVID-19 pandemic [36]. In a similar respect, Cummins et al. also identified that there is an overall need for planning and preparedness to address such collateral impact from pandemics.
Furthermore, both research groups in the articles mentioned above pointed out a direct, second collateral impact on patient admissions to the emergency department. For example, Cummins et al. quantified a substantial influence attributed to the pandemic on patients' decisions to seek and attend the emergency department. During the COVID-19 pandemic, despite being in need, patients were found to be less willing to seek and attend the emergency department, leading to a negative impact on fatality rates.

Although this study does not directly investigate such impacts on emergency department patient admission itself, the model does take into account a reduction in emergency department patient admissions. As pointed out in Section 2.5, 'Experimental Design and Simulation Model Scenarios', the number of COVID− patients decreases as the number of COVID+ patients increases. Hence, the model incorporates this above-mentioned collateral effect caused by the COVID-19 pandemic.
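The way PIR partitions a fixed daily influx into COVID+ and COVID− arrivals can be sketched as follows. This is our own simplified reconstruction of the assumption described above, not the model's actual code, and the influx of 120 patients is an invented example.

```python
def split_arrivals(total_patients, pir):
    """Partition one day's patient influx by patient infection rate (PIR).

    A higher PIR converts COVID- arrivals into COVID+ arrivals while the
    total influx stays fixed, mirroring the collateral reduction in
    non-COVID ED attendance described above.
    """
    covid_pos = round(total_patients * pir)
    covid_neg = total_patients - covid_pos
    return covid_pos, covid_neg

# Hypothetical daily influx of 120 patients at increasing PIR:
mix_low = split_arrivals(120, 0.0)    # (0, 120): no COVID+ arrivals
mix_mid = split_arrivals(120, 0.15)   # (18, 102)
mix_high = split_arrivals(120, 0.50)  # (60, 60)
```

The point of the fixed total is that every additional COVID+ patient displaces a COVID− patient, so the two patient streams compete for the same scarce treatment rooms as PIR rises.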
Moreover, our findings show that interventions have a beneficial effect on the patient flow performance of COVID+ patients but at the expense of the patient flow performance for COVID− patients. Specifically, the results show that COVID− patients in all scenarios will experience a higher average length of stay (ALOS), which is one of the many markers of patient dissatisfaction. In addition, as pointed out previously in the individual scenario discussions, the results showed that the patient flow performance measure 'crowding' in all scenarios significantly worsens with an increased patient infection rate (PIR). Please refer to Sections 3.1-3.4 of this article for exact quantities and research findings on the disparate scenario configurations and patient infection rates. Overall, higher crowding is associated with a higher strain on emergency department resources, leading to secondary collateral damage associated with high utilization of the emergency department, such as harder prioritization of patient treatments, higher stress among healthcare workers, and longer time to diagnosis, which may cause worsening of patient conditions, among other effects [37].
Hence, based on the results and findings in the present article and the explanation given above, we believe our computerized simulation model and this study are relevant and contribute to each of the above-listed collateral impacts on patient flow in the emergency department. Thus, we believe that, when applied on-site along with findings such as those presented above, the computerized simulation model can help practitioners plan which interventions to introduce to the emergency department in order to identify, address, and reduce such adverse collateral impacts within their emergency departments.

Limitations
Due to practicality, this study does not consider every possible patient flow performance indicator. Instead, it limits its focus to a select subset of the performance indicators. The performance indicators considered to be of interest may differ among organizations; hence, these particular patient flow performance indicators may be viewed as irrelevant or ill-fitting for other emergency departments.
This simulation modeling study does not take into account the cost of each of the interventions. Even though we see a far better improvement when introducing the extra treatment rooms across all patient infection rates, the model does not take into account the increased cost of doing so. It is likely that this trend would be amplified with an even higher number of extra treatment rooms. At the final stage of analysis, such cost parameters should be taken into consideration.

Suggestions for Further Research Work
A model of the type used in this study may be used in conjunction with a predictive model of patient inflow, such as the model described in the work of Skinner et al. [38]. Assuming that the given health authorities track the PIR of the hospital regions, that result could be combined with the model used in this present study to produce a tailored response to the particular patient infection rate of a given day. An example could be preemptively adjusting the resources, e.g., the number of extra treatment rooms or the number of spots used in the waiting zone, to the level necessary so that the emergency department never crosses the crowding 30 level during that day.
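The tailored-response idea above can be expressed as a simple search over pre-computed simulation results: pick the smallest number of extra treatment rooms for which the predicted day never crosses the crowding 30 level. Everything in this sketch is a hypothetical illustration; `simulated_crowding30` stands in for a lookup into simulation output such as that produced in this study, with an assumed base capacity and a toy load rule.

```python
def simulated_crowding30(extra_rooms, predicted_influx, pir):
    """Hypothetical stand-in for querying pre-computed simulation output:
    the crowding 30 share for one configuration and one day's forecast.
    Toy rule: each extra room absorbs 10 patients of influx, and crowding
    appears once demand exceeds an assumed base capacity of 100."""
    capacity = 100 + 10 * extra_rooms
    demand = predicted_influx * (1 + pir)  # COVID+ handling adds extra load
    return max(0.0, (demand - capacity) / capacity)

def rooms_needed(predicted_influx, pir, max_rooms=6):
    """Smallest number of extra treatment rooms keeping crowding 30 at zero."""
    for rooms in range(0, max_rooms + 1):
        if simulated_crowding30(rooms, predicted_influx, pir) == 0.0:
            return rooms
    return max_rooms  # best effort if even max_rooms cannot avoid crowding

# Hypothetical forecast: 110 patients expected with a PIR forecast of 20%.
plan = rooms_needed(110, 0.20)
```

In practice the toy rule would be replaced by interpolation over the actual scenario grid, but the decision logic, choosing the cheapest configuration that keeps the critical indicator at zero for the forecast day, stays the same.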
The next logical step could be to combine the model with any predictive model that estimates the number of patients arriving at the emergency department, such as the work of Phan et al. [39]. This article has, through the patient flow performance indicators, highlighted the disparate burden the emergency department experiences during a day of average patient influx versus a day of high patient influx. If a reliable model could be found to predict a probable range of patient influx for any given day, the strategy exemplified above could be used for the same balancing purpose.

Conclusions
The COVID-19 pandemic put the global healthcare system in a new and uncertain situation. In many ways, it revealed how fragile the system was in the face of such an unprecedented healthcare crisis and how drastically such a crisis can affect healthcare systems even in modern times. Gathering knowledge from the situation is now more important than ever, so that we can learn from what has happened.
To investigate the effect of pandemic conditions on emergency healthcare, the present study endeavored to answer the twofold research question posed in the introductory section of this manuscript: firstly, investigating the impact of increased patient infection rate on the emergency department patient flow parameters, and secondly, examining how this impact plays out across different emergency department patient flow configurations. The results of this study have shown that for every configuration of the emergency department, an increase in patient infection rate will lead to an incremental worsening of the patient flow metrics average length of stay and crowding within the emergency department. Depending on the scenario, some configurations have less sensitivity to the patient infection rate, while others have a larger sensitivity to its increase. The highest sensitivity is found in the scenario that only adds a waiting zone to the emergency department, Scenario 2, which counterintuitively has an even higher sensitivity than the base case, which does not have any of the pandemic interventions implemented; this holds both for the low and high influx days.
On the other hand, the lowest sensitivity to patient infection rate is found when only the extra treatment rooms, Scenario 3, are introduced to the emergency department patient flow. This is also interestingly contrary to what one would expect, as one would find it more likely that the emergency department would run best with both interventions implemented.
Although the lowest patient infection rate sensitivity is found in Scenario 3, where only extra treatment rooms are introduced, it remains a question of finding a balance between patient cross-infection risk and patient flow performance. Finding this balance is strictly not within the scope of this study, as it necessitates a holistic approach involving departmental leaders with knowledge of, and decision competence on, the matter.
It was also found that Scenario 4, with both the waiting zone and extra treatment room interventions active, was not the scenario with the best patient infection rate sensitivity. However, the disparity was far less pronounced on a low influx day. Knowing that this certain configuration is less sensitive towards an increase in patient infection rate when the influx is low may inform emergency department leaders that different strategies have different outcomes depending on the particularities of the patient influx. If a model such as the one utilized here were combined with a big-data model predicting patient influx for a certain date, it could yield further valuable findings.