Article

Training First Responders Through VR-Based Situated Digital Twins

by Nikolaos Partarakis 1,2,*, Theodoros Evdaimon 1, Menelaos Katsantonis 2 and Xenophon Zabulis 1

1 Institute of Computer Science, Foundation for Research and Technology—Hellas (FORTH), GR-70013 Heraklion, Greece
2 Department of Applied Informatics, University of Macedonia, 156 Egnatia Street, GR-54636 Thessaloniki, Greece
* Author to whom correspondence should be addressed.
Computers 2025, 14(7), 274; https://doi.org/10.3390/computers14070274
Submission received: 26 May 2025 / Revised: 30 June 2025 / Accepted: 8 July 2025 / Published: 11 July 2025

Abstract

This study examines first responder training with the aim of delivering realistic, adaptable, and scalable solutions that equip personnel to handle high-risk, rapidly evolving scenarios. The proposed method leverages Virtual Reality, Augmented Reality, and digital twins to enable immersive and situationally relevant training for security-critical incidents. The method is structured into three distinct phases: definition, digitization, and implementation. The outcome of this approach is the creation of virtual training scenarios that simulate real situations and incident dynamics. The methodology employs photogrammetric reconstruction, simulation of human behavior through locomotion, and virtual security systems, such as surveillance and drone technology. Alongside the methodology, a case study of a large public event is presented to illustrate its feasibility in real-world applications. This study offers a comprehensive and adaptive structure for the design and deployment of digitally augmented training systems, providing a practical basis for enhancing readiness across a range of operational domains.

1. Introduction

Equipping first responders to perform in dangerous and dynamic situations is a cornerstone of public safety. The unpredictable nature of contemporary emergency events, such as terrorist attacks and large-scale public disorder, requires personnel to possess not just procedural knowledge but also context-supported decision-making abilities. Traditional training methods do not consistently provide stable, scalable, and secure environments. This is attributable to the constraints of physical training environments, including logistical expenses, safety issues, and poor adaptability.
These constraints, together with contemporary technological advancements, have created an increasing need for virtual simulation technologies. This need can be fulfilled through immersive technologies such as Virtual Reality (VR), Augmented Reality (AR), and Digital Twins (DTs). These technologies have the potential to bridge theoretical education and practical readiness, and to enhance the accessibility and variability of training. It is now feasible to use them to establish interactive, context-sensitive learning environments that realistically simulate the conditions and problems of actual operating environments. Despite the increased availability of such technologies, there is still a shortage of methodologies to inform their usage in the specialized context of first responder training. Current solutions tend to remain domain-specific, scenario-bound, or fragmented in their technological integration.
In this paper, we provide an overarching, adaptable, and reproducible process for designing and deploying VR-based DTs. The process is divided into three distinct phases: (1) Definition, in which threats are defined, security policy is established, and training objectives are identified; (2) Digitization, in which the physical space is digitally reconstructed and integrated with digital infrastructure; and (3) Implementation & Training, in which interactive simulations are authored, rendered, and executed to facilitate immersive learning.
The relevance of the approach is illustrated using a detailed case study of a simulated terrorist attack at a major public gathering in an urban European setting.

2. Background and Related Work

The training of first responders is a critical component of public safety, requiring the development of skills that are both complex and context-dependent. Traditional training methods, such as classroom instruction and live drills, often fall short of providing realistic, scalable, and repeatable scenarios [1]. Recent advancements in VR, AR, and DT technology offer promising solutions to these challenges, enabling immersive, situated learning environments that closely mimic real-world conditions.

2.1. Digital Twins in Training and Simulation

DTs, defined as virtual replicas of physical systems or environments, have gained traction across various industries, including manufacturing, healthcare, and urban planning [2]. According to [3], “A DT is a system-of-systems which goes far beyond the traditional computer-based simulations and analysis. It is a replication of all the elements, processes, dynamics, and firmware of a physical system into a digital counterpart”.
The concept of a DT originated in the aerospace industry, where it was used to replicate and monitor the behavior of aircraft components [4]. The technology has since grown to cover a large variety of applications, from predictive maintenance in industry [5] to personalized medicine [6]. In training, DTs provide a dynamic and interactive space for simulating real-world scenarios. For example, in industrial settings, DTs have been used to train operators in equipment maintenance and emergency response, with significant enhancement in skill acquisition and retention [7]. The ability to create highly accurate and dynamic virtual environments renders DTs highly suited to training first responders, where situational awareness and decision-making under duress are paramount [8].

2.2. VR/AR-Based Training for First Responders

VR has emerged as a powerful tool for immersive training, offering a platform for the safe and controlled rehearsal of dangerous scenarios. VR-based training applications have existed since the 1990s, and the technology has matured considerably since then, with high-end VR systems offering high-fidelity graphics, true haptic feedback, and seamless interactivity [9,10]. Studies have shown that VR-based training enhances spatial awareness, process understanding, and stress management among first responders [11]. For instance, VR simulations have been successfully employed to train firefighters in building navigation and hazard recognition, with performance superior to that achieved through traditional methods [12,13]. Similarly, VR has been used to teach paramedics emergency medical procedures, with trainees demonstrating enhanced confidence and proficiency [14]. The multimodal nature of VR, which combines visual, auditory, and haptic stimulation, markedly increases the credibility and effectiveness of training sessions [15].
AR has also drawn increasing attention as a complementary technology to VR for first responder training. In contrast to VR, which fully immerses users in the virtual world, AR superimposes digital information on the real world, allowing trainees to interact with virtual and physical objects simultaneously [16]. This makes AR especially useful for hands-on training setups and real-time decision-making procedures. For example, AR has been used to guide firefighters through smoky conditions by overlaying directional information on their helmets [17]. Similarly, AR systems have been developed to assist paramedics in performing complex medical procedures by providing step-by-step visual instructions [18]. Portability and integration with the real world make AR a valuable tool for first responders who need to train in diverse and unpredictable environments [19].

2.3. Integration of VR, AR, and Digital Twins

The integration of VR, AR, and DT technology represents a significant advancement for practical training. By integrating VR and AR interfaces into DT spaces, trainees are able to interact with highly realistic and context-rich scenarios. This approach has been applied in the healthcare domain, where hospital settings have been digitally twinned in VR to train medical staff in emergency response [20]. Similarly, VR-powered DTs have been applied in disaster management to simulate natural disasters, providing first responders with experience in coordinating mass rescues [21]. The convergence of VR, AR, and DTs also supports real-time data integration, allowing trainees to respond to dynamic changes in the environment [22,23,24]. For example, in factory training, DTs can simulate equipment failure while AR provides real-time guidance for troubleshooting and repair [25].

2.4. AI-Driven Simulations and Adaptive Training

Artificial intelligence (AI) is increasingly being incorporated into VR and AR training platforms to build highly individualized and adaptive learning experiences. AI algorithms can measure trainee performance in real time and tailor the difficulty and complexity of scenarios to the skills of the individual [26]. For example, AI-powered VR simulations have been designed to train police officers in de-escalation techniques, where the system automatically changes virtual characters’ behavior based on the trainee’s actions [27]. Similarly, AI-powered AR systems have been used to provide real-time feedback to paramedics during training exercises, helping them refine their skills and improve decision-making [28]. The application of AI in training programs not only increases the quality of simulations but also generates critical data for assessing trainee development and identifying areas requiring improvement [29].

2.5. Haptic Feedback and Multimodal Interaction

Haptic feedback systems, which simulate tactile sensations for the user, are an important component of many immersive training systems. By simulating touch, haptic devices enable trainees to interact with virtual objects more realistically and naturally [30]. An example is the use of haptic gloves in VR firefighter training to simulate equipment handling and movement through rubble [31]. Likewise, haptic feedback has been incorporated into AR systems to deliver tactile assistance in procedures such as intravenous (IV) line insertion and cardiopulmonary resuscitation (CPR) [32]. The combination of haptic feedback with visual and auditory stimuli creates a fully multimodal training experience, thereby increasing both participant engagement and learning outcomes [33].

2.6. Collaborative VR/AR Environments

Collaborative VR and AR environments, in which multiple users share the same virtual or augmented space, are also finding new applications in first responder training. These systems enable groups of individuals to practice coordination and communication in complex scenarios, such as responding to a large disaster or negotiating a hostage situation [34]. Collaborative VR-based platforms, for example, have been developed to train emergency response units in coordinating search and rescue operations [35]. Similarly, AR technology has been utilized to facilitate real-time communication between command centers and field responders, improving decision-making and situational awareness. The capability to train in a collaborative environment is particularly critical for first responders, who typically operate in teams and depend on good communication to accomplish their tasks.

2.7. Contribution

Despite growing interest in the application of immersive technologies for training, this study identifies significant gaps in the existing literature. One key gap concerns the absence of generalizable approaches for implementing VR-based situated Digital Twins for first responder training. The approach proposed in this work incorporates scenario definition, 3D digitization, locomotion-driven Virtual Human (VH) behavior, and immersive VR-based rendering to create an end-to-end training pipeline. Although numerous studies have addressed these elements separately, e.g., the digitization of spaces to facilitate virtual simulation [36] or the simulation of virtual agents’ movement [37], little current research illustrates how to integrate these elements into an adaptive methodology applicable to a broad variety of scenarios.
This paper bridges the existing gap by proposing and demonstrating an integrated approach that enables end-to-end scenario generation, spanning from threat and policy identification to physical environment digitization and incorporation of simulated human behavior into completely rendered VR training environments. The approach is modular and flexible, hence enabling potential reuse in a broad spectrum of training contexts beyond the illustrative cases shown.
A further gap remains in the standardization of validation and assessment techniques for immersive training environments. While this work presents a structured validation process to help ensure that all environments operate as planned, the field still needs unified metrics and validation procedures. We address this gap by planning a user-based evaluation, complementary to the performed validation, that will assess not the methodology itself but the formulated training scenarios. These results and the evaluation framework will be reported in a separate future publication. In this work, we present the validation process and outline our plans for evaluation.
In conclusion, this study makes important contributions to the methodological and technical state of the art of DT-based immersive training systems. The presentation that follows focuses on establishing the methodology and providing guidance for its replication across various contexts, with the ambition of rationalizing the design and implementation of training scenarios in the future.

3. Proposed Methodology

The methodology proposed by this work is a structured framework for developing immersive security training solutions through three phases: Definition, Digitization, and Implementation & Training. This methodology is designed to be generic enough to be broadly applicable across various security scenarios, enabling the creation of realistic virtual training environments.
In the Definition Phase, the security context of the training scenario is established. This involves analyzing potential threats and identifying key locations, relevant participants, and critical events that may unfold. Additionally, appropriate security policies and measures are formulated to address these risks. The result of executing this initial phase is the definition of a structured training scenario that ensures that the virtual environment to be implemented reflects realistic security challenges.
Following this, the Digitization Phase transforms physical spaces into detailed digital counterparts. The process begins with data acquisition, using technologies such as 3D scanning or photogrammetry to capture spatial and structural data. The acquired data are used for 3D reconstruction, which yields an accurate virtual replica of the physical space. Post-processing techniques further refine the environment, increasing visual fidelity and interactivity. This phase also involves the digital security infrastructure, where virtualized monitoring systems, access controls, and other security features are included. To enhance realism, human behaviors are simulated using locomotion to produce realistic interactions and responses within the environment. Together, these features form a DT that accurately mirrors both the physical environment and its security dynamics.
The Implementation & Training Phase builds interactive training scenarios on top of the virtual environment. This involves scripting scenario logic to define event sequences and expected security responses. A scenario execution environment is established to support interaction and system responsiveness. The approach leverages real-time 3D rendering and VR to provide an immersive experience, allowing users to interact with the simulation dynamically. Trainees can move around the virtual world, interact with security measures, and react to changing situations, thereby developing their situational awareness and decision-making capabilities in a risk-free, controlled setting.
An overview of the proposed methodology is graphically illustrated in Figure 1. This methodology can be applied across a variety of security training domains, such as emergency response, infrastructure protection, law enforcement training, and corporate security training. Through the application of digital twins, virtual human interactions, and immersive VR technologies, the methodology supports adaptable and scalable training solutions configured to meet the unique security requirements in a variety of domains.
The applicability of the methodology cannot, of course, be validated in the abstract; it must be accompanied by use cases of its application. The following section presents an overview of the use case implemented on top of the proposed methodology. To foster replicability, technical details are also provided, allowing the implementation steps to be followed, to a certain extent, when producing custom training scenarios for use cases beyond the context of this research work.

4. Use Case

In this section, we present how the use case implements a training scenario on top of a hypothetical event. The use case considers the physical spaces that will host the “Street Food Festival”, an annual event attracting thousands of visitors that takes place in a different European capital every year. Local and international food exhibitors, artists, music bands, and other entertainers take part. In 2025, the festival will take place in Athens, which has been designated the Cultural Capital of Europe, lending additional importance to the event. The event will cover a large open area of the city, including squares, streets, and parks. During the opening ceremony, which will be held concurrently in the amphitheater of a local university and in all the urban spaces allocated to the festival, high attendance is expected, including local officials, members of the national Parliament, and members of the European Parliament.

4.1. Definition Phase

4.1.1. Scenario Specification

In this use case, the studied scenario concerns intelligence received by the local anti-terrorism authorities about an international terrorist organization that has conducted attacks in several European capitals over the last decade and is planning a massive attack during the opening of the event. The threat is expected to take the form of simultaneous attacks at different points in the area where the event is taking place.

4.1.2. Definition of Threats

According to the same sources, the terrorists (perpetrators) will start the attack when all the official guests are on the stage set up in the central square of the event area. The attack will be carried out with firearms and the detonation of a booby-trapped vehicle. The police obtained information that the vehicle would be prepared in a warehouse located in a deserted area near Athens.

4.1.3. Design of Security Policies and Measures

The police take increased security measures and inspect the event venue to find points of vulnerability that could be exploited by the terrorists. Based on the analysis of the event space, the following measures are identified for inclusion in the security policies and measures:
  • Surveillance of two abandoned buildings near the event place that could be used by the terrorist organization. The two abandoned buildings should be thoroughly checked and secured within a perimeter;
  • Agreement with the prosecutorial authorities to use drones to monitor the area where the warehouse is located. The drone inspects the rural area around the warehouse. An operations center is set up to coordinate the police forces in real time using the data provided by the drones and cameras;
  • Installation of fixed cameras at critical points to monitor the entrances to the event area, as well as the outdoor space around the abandoned buildings;
  • Shutting down traffic on the roads around the square;
  • Prohibiting parking near the event location.
At the same time, the modus operandi of the terrorist organization is studied over time, using data from previous attacks by this organization as well as from terrorist attacks in general that have occurred at such events.

4.1.4. Definition of Training Scenarios

Based on the above information, the planned security measures, and the analysis of the organization’s modus operandi, the police formulated the following training scenarios, which should ideally be studied in virtual training conditions:
  • Event #1. Scan the building where the ceremony will be held for malicious material;
  • Event #2. Suspect vehicle detection (five hours before the opening ceremony). A vehicle leaves the warehouse and is spotted by the drone. A few kilometers further on, it is chased by police vehicles and stopped. The vehicle is found to be rigged with explosives, intended to be placed near the event area. The explosive mechanism is located and defused. The driver is arrested, and further investigations take place at the warehouse;
  • Event #3. Suspect arrest (an hour before the event starts). At one of the abandoned buildings sealed off by the police, a human presence is detected by the cameras an hour before the opening of the event. The person is found to have entered the blocked perimeter and is heading towards the building. They carry a big backpack, and the operations center estimates that they may be carrying weapons. The person is arrested by the ground special forces;
  • Event #4. Fight and panic detection (a few minutes before the end of the event). The fixed cameras detect a fight near the west entrance of the event and, immediately after that, mass panic and a rapid departure of people from that entrance. The operations center is notified, and the ground police forces arrest a person who has stabbed two of the people present.

4.2. Digitization Phase

The digitization phase concerns the implementation of the DTs that will be used in the above-mentioned training scenarios.

4.2.1. Digitization of Physical Locations

Data Acquisition
Data acquisition occurred at the location where police training takes place. This location serves as a parametric installation where various training setups can be simulated. The rationale for selecting this training area is that it allows both virtual and physical training exercises to be performed in the same space, in its physical and digital versions respectively.
Several flights had to be conducted to extract the 3D outdoor models. The maximum depth of the scanned scene in our use case was approximately 80–100 m from the UAV, which results from the altitude selected for the majority of the flights (30 m) based on the UAV specifications. Furthermore, for more troublesome locations within the scene, we made flights from a lower altitude of 10–15 m using manual operation of the UAV. The spatial precision achieved using a mixture of medium-altitude flights and manual low-altitude flights ranges from 2 to 5 cm, depending on the flight height and the overlap (minimum 70% frontal and side overlap) between images. Most flights were conducted autonomously via the Pix4Dcapture [38] app in double-grid mode. We selected Pix4Dcapture and Pix4Dmapper [39] due to their robust integration with drone photogrammetry workflows and their capacity to deliver high-resolution 3D reconstructions optimized for VR experiences. Pix4Dcapture provides automated mission planning using double-grid paths, ensuring uniform coverage and minimizing data gaps. Pix4Dmapper offers efficient reconstruction, dense point cloud generation, and mesh creation with texture mapping, which are essential for creating immersive and realistic virtual environments [40]. Compared to other photogrammetry tools such as Agisoft Metashape [41] or RealityCapture [42], Pix4D provides streamlined UAV compatibility, real-time processing previews, and cloud deployment options, which are advantageous in time-sensitive public safety contexts [8].
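To relate flight altitude to the quoted precision, the expected ground sampling distance (GSD) of a nadir image follows the standard photogrammetric relation below; the pixel pitch and focal length used here are illustrative assumptions, not the actual specifications of the UAV used in the study.

```latex
% GSD of a nadir image: pixel pitch p, flying height H, focal length f.
% The sensor values are illustrative assumptions, not the study's UAV specs.
\[
\mathrm{GSD} = \frac{p\,H}{f}
\approx \frac{(2.4\,\mu\mathrm{m})(30\,\mathrm{m})}{8.8\,\mathrm{mm}}
\approx 0.8\ \mathrm{cm/pixel}
\]
```

Since reconstruction precision is typically a small multiple of the GSD, a figure of 2–5 cm at a 30 m flying height is plausible for this class of sensor.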
In addition, some free flying had to be conducted to avoid obstacles, so a lower-altitude flight was performed with the same application. Low-altitude flights are intended to cover parts of the space with physical occlusions that require a special photographic setup. After the data were collected, they were loaded into the photogrammetry programs: Pix4Dmapper [39] was used for datasets with a low number of images, and Pix4Dmatic [43] for datasets with a large number of images. The selected flight path was grid-based, while a second grid, perpendicular to the first, was used for increased reconstruction robustness (see Figure 2).
A limitation of this approach is that segments of interest in a scene may not be visible from aerial views, such as locations below the eaves of buildings. This is partially compensated for through low-altitude manual scans. At the same time, due to the inherent limitations of the method, the achieved resolution may be sufficient for display purposes but needs further enhancement using post-processing techniques to achieve an environment that can be experienced in VR and 3D.
A total of 1692 photos were taken: 1000 from high altitude and 692 from low-altitude and manual flights. Three high-altitude flights, one low-altitude flight, and one manual flight were conducted, with an average of approximately 350 photos per flight. Indicative sequential photos of the site are presented in Figure 3.
Reconstruction
After processing the photogrammetry program exports, a 3D model with fairly good structure and very good textures is produced. Several views of the reconstruction are presented in Figure 4. As shown in this figure, the reconstruction has enough detail to present the scene from a distance.
Moving closer and enabling shading reveals several flaws in the reconstruction, which are also visible when rendering the reconstructed mesh structure without textures. Examples of such flaws can be seen in Figure 5. Thus, a critical step in preparing the virtual scene for VR-based simulation is the refinement of the model through the removal or correction of problematic elements. Raw photogrammetric outputs often contain visual noise, redundant geometry, or incomplete surfaces, particularly in areas with occlusions, reflective materials, or complex architectural features such as overhangs and narrow gaps. These artifacts can impair the realism, usability, and navigability of the virtual environment and may hinder the effective execution of training scenarios.
The simplest methodology for enhancing the quality of the reconstruction is to segment and clean reconstruction segments and, when possible, replace low-quality reconstructed artifacts with higher-quality digital 3D models. This method is particularly appropriate when dealing with artifacts of no historical, archaeological, or scientific value. In our use case, the shipping containers are such an example: cutting off all the containers and then replacing them with 3D models is very straightforward and requires fewer resources than manually remodeling the environment. The same holds for the basketball court, where many errors exist on the fence and the court itself.
Figure 6 presents the various segments created from the scene and how these were cleaned using a simple height-extraction operator, i.e., cutting off all content above a specific height in the model. The resulting segments are merged again to proceed with the post-processing operation.
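A minimal sketch of such a height-cutoff operator is given below, expressed in Unity C# for concreteness; in practice this step can equally be performed in a mesh-editing or photogrammetry tool, and the class and method names are illustrative rather than the project's actual code.

```csharp
using System.Collections.Generic;
using UnityEngine;

// Illustrative height-cutoff operator: discards every triangle that has a vertex
// above the cutoff height, removing ill-reconstructed content above that level.
public static class HeightCutoff
{
    public static Mesh CutAbove(Mesh source, float maxHeight)
    {
        Vector3[] vertices = source.vertices;
        int[] triangles = source.triangles;
        var kept = new List<int>();

        for (int i = 0; i < triangles.Length; i += 3)
        {
            // Keep the triangle only if all three vertices lie below the cutoff.
            if (vertices[triangles[i]].y <= maxHeight &&
                vertices[triangles[i + 1]].y <= maxHeight &&
                vertices[triangles[i + 2]].y <= maxHeight)
            {
                kept.Add(triangles[i]);
                kept.Add(triangles[i + 1]);
                kept.Add(triangles[i + 2]);
            }
        }

        var result = new Mesh();
        // Photogrammetry meshes often exceed 65k vertices, so use 32-bit indices.
        result.indexFormat = UnityEngine.Rendering.IndexFormat.UInt32;
        result.vertices = vertices; // unreferenced vertices are simply left unused
        result.uv = source.uv;
        result.triangles = kept.ToArray();
        result.RecalculateNormals();
        return result;
    }
}
```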
The main terrain of the DT is created by remerging the cut segments and then enhancing it with 3D objects. In this use case, a shipping container model was edited to include custom windows and a door, replacing the ill-reconstructed initial structures. Furthermore, the basketball court was replaced with 3D models, and the location of the food festival, as indicated by the scenario, was modeled. The resulting scene is presented in Figure 7.
For the internal event case, an amphitheater DT was employed in this scenario as it also provides the option of exploiting ready-to-use components as part of the execution of the proposed methodology. An overview of the amphitheater used in the scenario is presented in Figure 8.
Up to this point, the methodology is replicable but must be applied anew in each use case to generate the virtual environment. From this point forward, the proposed methodology systematizes the implementation by providing reusable components.
Justification for the Selection of Digitization Technology
The selection of photogrammetry over alternative reconstruction methods such as LiDAR or 360° panoramic imaging was driven by a trade-off between spatial accuracy, scene coverage, operational efficiency, and cost. UAV-based photogrammetry provides sufficient accuracy (2–5 cm) for immersive VR applications and allows for rapid acquisition of large-scale environments in a single day. Multiple flights at varying altitudes and angles ensure robust reconstruction, even in occluded or structurally complex areas.
Although LiDAR scanning can achieve sub-centimeter accuracy and better geometric fidelity in occluded or vegetated areas, its deployment in the field is often constrained by high equipment and processing costs, logistical overhead (especially when scanning from multiple positions), and the need for additional post-processing to enhance poor-quality textures. As LiDAR outputs are typically low in color fidelity, the preparation of photorealistic training environments requires the overlay of supplementary texture data or significant manual editing.
On the other end of the spectrum, 360° video and panoramic imaging techniques, while lightweight and efficient for generating navigable environments, lack depth information and do not support metrically accurate 3D reconstructions. These approaches are suitable for passive exploration or training that does not require object interaction or spatial measurement, but are suboptimal for dynamic and interactive VR-based simulations. Furthermore, they rely on specialized cameras and proprietary software platforms, limiting their flexibility and generalizability in custom simulation pipelines.
In this use case, UAV-based photogrammetry offered the best balance between fidelity, scalability, and practicality. This, of course, is not a constraint imposed by our methodology, which applies to any digitization technology. Careful planning and selection of the digitization technology is a prerequisite for acquiring the best possible reconstruction.

4.2.2. Implementation of Digital Security Infrastructure

This section details the integration of digital surveillance infrastructure, which plays a pivotal role in enabling situational awareness training. The virtual CCTV and drone systems replicate real-world monitoring workflows and allow users to interactively review feeds, identify security breaches, and make operational decisions.
Virtual surveillance cameras replicate real-world closed-circuit television (CCTV) systems within the virtual environment. These cameras provide real-time monitoring of specific locations by capturing the virtual space and rendering the feed onto in-world screens. A similar approach is the use of virtual drones, which replicate real-world drone-based surveillance of a location of interest and render the results into the virtual space. In both cases, a Unity 3D camera component is implemented that renders its output in screen-space coordinates and can be placed anywhere in the scene.
A virtual surveillance camera in Unity 3D is implemented by creating a Camera object positioned at a specific location within the virtual environment. Instead of rendering its output directly to the user’s view, the camera’s feed is redirected to a Render Texture, which acts as a dynamic image source. This texture is then applied to a UI Panel or a 3D mesh, allowing the captured scene to be displayed as a live video feed on in-game monitors or interfaces. To ensure smooth performance, optimizations such as reducing the Render Texture resolution, adjusting the camera update frequency, and implementing occlusion culling techniques can help manage the computational load.
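The following is a minimal sketch of such a virtual CCTV feed, assuming a secondary Unity Camera and an in-world monitor surface; the class and field names are illustrative rather than the project's actual code.

```csharp
using UnityEngine;

// Illustrative virtual CCTV feed: a secondary camera renders into a low-resolution
// RenderTexture at a reduced update rate, and the texture is shown on an in-world monitor.
public class VirtualCctvFeed : MonoBehaviour
{
    [SerializeField] private Camera surveillanceCamera;    // placed at the monitored location
    [SerializeField] private Renderer monitorScreen;       // quad or mesh acting as the monitor
    [SerializeField] private int feedWidth = 512;          // low resolution keeps GPU load down
    [SerializeField] private int feedHeight = 288;
    [SerializeField] private float updatesPerSecond = 10f; // CCTV-like refresh rate

    private RenderTexture feedTexture;
    private float nextRenderTime;

    private void Start()
    {
        feedTexture = new RenderTexture(feedWidth, feedHeight, 16);
        surveillanceCamera.targetTexture = feedTexture;
        surveillanceCamera.enabled = false; // rendered manually to control the frequency
        monitorScreen.material.mainTexture = feedTexture;
    }

    private void Update()
    {
        if (Time.time >= nextRenderTime)
        {
            surveillanceCamera.Render(); // manual render at the chosen update frequency
            nextRenderTime = Time.time + 1f / updatesPerSecond;
        }
    }
}
```

Rendering manually at a fixed rate, rather than every frame, is one way to realize the update-frequency optimization mentioned above.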
The drone is a similar component on which a flight path is attached. The flight path is a path component with a moving animation applied to the camera. A virtual drone camera in Unity 3D differs from a fixed surveillance camera in that it follows a predefined or dynamically generated path to capture an aerial view of the scene. This is achieved by attaching a Camera object to a moving GameObject that follows a scripted trajectory. The path can be created using animation curves, waypoints, or procedural movement algorithms to simulate realistic drone flight behavior. The camera feed is rendered to a Render Texture, allowing it to be displayed on in-game monitors or interfaces. Unlike static surveillance cameras, drone cameras provide dynamic perspectives, covering larger areas and adjusting their view based on mission requirements. Optimizations such as adjusting movement smoothing, frame rate adaptation, and culling strategies help maintain performance while ensuring a fluid and realistic simulation.
Aspects of these components, including the moving animation pace, are editable through the Unity 3D editor.
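The drone variant differs mainly in that the camera rig moves along a path. A minimal waypoint-following sketch is shown below, with illustrative names; the camera attached to this object routes its feed to a Render Texture exactly as in the CCTV sketch above.

```csharp
using UnityEngine;

// Illustrative drone flight path: the rig advances through a looping list of
// waypoints and smoothly turns toward its direction of travel.
public class DroneFlightPath : MonoBehaviour
{
    [SerializeField] private Transform[] waypoints;     // path authored in the editor
    [SerializeField] private float speed = 5f;          // movement pace, editable in the editor
    [SerializeField] private float turnSmoothing = 2f;  // smooths heading changes

    private int currentWaypoint;

    private void Update()
    {
        if (waypoints == null || waypoints.Length == 0) return;

        Vector3 targetPosition = waypoints[currentWaypoint].position;
        Vector3 toTarget = targetPosition - transform.position;

        // Advance toward the current waypoint and loop over the path.
        transform.position = Vector3.MoveTowards(
            transform.position, targetPosition, speed * Time.deltaTime);
        if (toTarget.magnitude < 0.5f)
            currentWaypoint = (currentWaypoint + 1) % waypoints.Length;

        // Smoothly align the rig with the direction of travel.
        if (toTarget.sqrMagnitude > 0.001f)
            transform.rotation = Quaternion.Slerp(
                transform.rotation,
                Quaternion.LookRotation(toTarget),
                turnSmoothing * Time.deltaTime);
    }
}
```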

4.2.3. Realistic Virtual Human Participation

To further increase the sense of realism and contextual richness in the virtual training space, the simulation features dynamically generated VH entities that mimic the movement of civilians present at a live public event. This additional simulation layer replicates the background motion and crowds commonly encountered in large gatherings, enabling trainees to engage in scenarios characterized by realistic environmental factors and human activity. The animated VHs adhere to a locomotion system that allows free movement through the environment along precomputed or random paths. The waypoint-based locomotion system is built upon Unity’s NavMesh framework, which provides tools for creating navigable areas in a scene, allowing AI-controlled characters to navigate and avoid obstacles [44]. Movement paths are influenced by steering behaviors derived from Reynolds’ Boids algorithm, including cohesion, alignment, and separation [45]. Points of interest (POIs), such as vendor booths, performance stages, or signage, act as attractors with weighted probabilities that affect agent path selection. This configuration yields realistic crowd dynamics and emergent spatial clustering [37], with organically occurring clusters and crowded areas that reflect the behavior expected at a lively urban gathering.
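A minimal sketch of the weighted POI selection on top of Unity's NavMesh is given below; the names are illustrative, the Boids-derived steering layer is not shown, and the NavMeshAgent's built-in avoidance stands in for the separation behavior.

```csharp
using UnityEngine;
using UnityEngine.AI;

// Illustrative POI-driven crowd agent: each agent repeatedly picks a point of
// interest with weighted probability, so crowds cluster around attractive spots.
[RequireComponent(typeof(NavMeshAgent))]
public class CrowdAgent : MonoBehaviour
{
    [System.Serializable]
    public struct PointOfInterest
    {
        public Transform location; // e.g., vendor booth, stage, signage
        public float weight;       // higher weight attracts more agents
    }

    [SerializeField] private PointOfInterest[] pointsOfInterest;
    [SerializeField] private float wanderRadius = 10f; // random offset around the POI

    private NavMeshAgent agent;

    private void Start()
    {
        agent = GetComponent<NavMeshAgent>();
        PickNextDestination();
    }

    private void Update()
    {
        // When the current goal is reached, select a new weighted destination.
        if (!agent.pathPending && agent.remainingDistance <= agent.stoppingDistance)
            PickNextDestination();
    }

    private void PickNextDestination()
    {
        float total = 0f;
        foreach (var poi in pointsOfInterest) total += poi.weight;

        float pick = Random.value * total;
        foreach (var poi in pointsOfInterest)
        {
            pick -= poi.weight;
            if (pick > 0f) continue;

            // Wander near the chosen POI rather than to its exact center.
            Vector3 offset = Random.insideUnitSphere * wanderRadius;
            offset.y = 0f;
            if (NavMesh.SamplePosition(poi.location.position + offset,
                                       out NavMeshHit hit, wanderRadius, NavMesh.AllAreas))
                agent.SetDestination(hit.position);
            return;
        }
    }
}
```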
On top of the general patterns of crowd behavior, the system also accommodates predefined or choreographed behavior for particular agents, enabling the simulation of more complicated and event-dependent behaviors. In these instances, individual virtual entities are animated using motion capture (MoCap) data, which facilitates realistic and natural motion in critical interactions. This approach enables nuanced and lifelike behaviors, such as suspect apprehension (Event Scene #3) and crowd panic (Event Scene #4), enhancing the immersive quality and behavioral realism of the training exercises.
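As a minimal illustration, scenario logic can hand an agent over from ambient locomotion to a MoCap-driven animation state through an Animator trigger; the trigger and class names below are hypothetical.

```csharp
using UnityEngine;

// Illustrative scripted-agent hook: the Animator holds states built from motion
// capture clips, and scenario logic fires a trigger to start the choreographed action.
public class ScriptedAgentBehavior : MonoBehaviour
{
    [SerializeField] private Animator animator; // contains the MoCap animation states
    [SerializeField] private string actionTrigger = "StartAltercation"; // hypothetical trigger

    // Called by the scenario logic, e.g., when the fight of Event Scene #4 begins.
    public void PlayScriptedAction()
    {
        animator.SetTrigger(actionTrigger);
    }
}
```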
This aspect is especially vital in training exercises related to security threats or societal disruptions, such as conflict, panic responses, or aberrant behavior. Through the combination of large-scale ambient human presence and carefully detailed behavior-driven agents, the system portrays a sophisticated and nuanced simulation of crowd behavior, facilitating a wide range of operational and behavioral training scenarios. The hybrid approach balances computational efficiency and behavioral complexity, leading to an effective model for immersive situational awareness and response training.

4.2.4. Digital Twin Implementation

The final portion of the digitization process is combining all the virtual components into a DT that accurately depicts the physical characteristics and operational behaviors of the real-world environment. This combination unifies the 3D reconstruction of the environment with the digital security systems and the simulation of human movement and activity, producing an interactive and immersive virtual representation of the training environment.
Central to the DT concept is the reconstructed spatial model developed via photogrammetric techniques and subsequently refined by post-processing. This model serves as the structural basis upon which further functional components are built. To simulate real-world surveillance configurations, virtual surveillance systems are integrated according to the actual space characteristics and the scenarios under investigation. Furthermore, they are associated with in situ virtual displays that allow users to examine surveillance data in real time.
The utilization of VHs significantly improves the behavioral realism of the DT. These agents inhabit the world, move autonomously, and display attention-driven movement patterns defined by specific points of interest, thus producing realistic crowds and areas of congestion. At the same time, scenario-dependent human activities, such as disruptions or threats, are introduced through MoCap-controlled characters with precisely scripted actions that fit the training goals. All of these elements, incorporated in one simulation platform, constitute the DT.

4.3. Implementation and Training Phase

At the start of this stage, the DT is available and all static behaviors of the system have been set up. In practice, this means that only the behavior of the system under specific scenario circumstances remains to be defined in order to provide event-based training.

4.3.1. Setting Up Event Scenes

Using the implemented DTs, four event scenes were created by integrating the appropriate components and actions.
  • Event scene #1. Scan the building where the ceremony will be held for malicious material. In this event, suspicious people and objects were integrated into the scene to be scanned and deactivated by the security personnel.
  • Event scene #2. Suspect vehicle detection (five hours before the opening ceremony). A vehicle in a distant location and an abandoned warehouse are to be located by the patrolling police cars.
  • Event scene #3. Suspect arrest (an hour before the event starts). Detection of a threat and proceeding to the arrest of a suspicious person.
  • Event scene #4. Fight and panic detection (a few minutes before the end of the event). Detection of the area where the fight occurs. Stopping the fight and calming people attending the event.

4.3.2. Implementing Training Scenarios

The implementation of the training scenarios concerns the orchestration of the event logic that binds the event scenes to the actions and activities performed by the participants. The following figures present instances from the studied scenarios: Figure 9 presents the amphitheater incidents, Figure 10 the chase incident, Figure 11 the car-chase incident, and Figure 12 the fight incident.
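As a minimal illustration of how such event logic can be orchestrated in Unity, the sketch below fires scenario steps in sequence after configurable delays; the class, field, and step names are illustrative assumptions rather than the project's actual implementation.

```csharp
using System.Collections;
using UnityEngine;
using UnityEngine.Events;

// Illustrative scenario orchestrator: steps fire in order after configurable delays,
// binding an event scene to the actions expected from cameras, agents, and UI prompts.
public class ScenarioOrchestrator : MonoBehaviour
{
    [System.Serializable]
    public struct ScenarioStep
    {
        public string description;     // e.g., "Fight breaks out at the west entrance"
        public float delaySeconds;     // time after the previous step
        public UnityEvent onTriggered; // hooks to cameras, agents, or UI prompts
    }

    [SerializeField] private ScenarioStep[] steps;

    public void StartScenario()
    {
        StartCoroutine(RunSteps());
    }

    private IEnumerator RunSteps()
    {
        foreach (var step in steps)
        {
            yield return new WaitForSeconds(step.delaySeconds);
            Debug.Log($"Scenario step: {step.description}");
            step.onTriggered?.Invoke(); // trigger the bound actions in the DT
        }
    }
}
```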

4.3.3. User Interface and Interaction

Participants interact with the training environment either in 3D or in VR. In the 3D version of the prototype, navigation replicates the standard key layout of first-person shooter games (keys W, A, S, D, and Space; mouse for camera control; left click for interaction with objects). Virtual control panels are accessed through shortcut keys (keys 1 to 5). In the VR variant, navigation within the scene is handled by joystick movement or teleportation prompts. CCTV and drone feeds are accessed via virtual control panels that allow scene switching and changes of monitoring perspective. The XR Interaction Toolkit in Unity was used to implement Head-Mounted Display (HMD) overlays and in-world menus, which allow seamless transitions between training phases and scenario setups.
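A minimal sketch of the desktop shortcut handling described above is given below, assuming Unity's legacy Input manager; the names are illustrative and the prototype's actual panel logic may differ.

```csharp
using UnityEngine;

// Illustrative control-panel switcher: keys 1-5 toggle up to five in-world panels,
// each of which exposes the CCTV or drone feed it is bound to.
public class ControlPanelSwitcher : MonoBehaviour
{
    [SerializeField] private GameObject[] controlPanels; // up to five in-world panels

    private void Update()
    {
        for (int i = 0; i < controlPanels.Length && i < 5; i++)
        {
            // KeyCode.Alpha1 + i maps to the number keys 1 through 5.
            if (Input.GetKeyDown(KeyCode.Alpha1 + i))
            {
                bool show = !controlPanels[i].activeSelf;
                foreach (var panel in controlPanels) panel.SetActive(false);
                controlPanels[i].SetActive(show); // toggle the selected panel exclusively
            }
        }
    }
}
```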

4.4. Validation

To ensure the stability and effectiveness of the created training environment, a validation process was carried out. The process focused on verifying the correct setup and functionality of all event scenes and training scenarios outlined in the design and implementation phases. Every scenario was thoroughly analyzed, and interactions, stimuli, and consequences were carefully examined against expectations.
Regarding operational components, the validation included testing of surveillance elements, the AI-based behavior of virtual humans, the adaptive reactions of MoCap-based agents, and the accurate progression of the environment through the various phases of the training procedure. Particular emphasis was placed on aligning the scenario logic with visual and behavioral cues, presenting users with an accurate chain of events throughout their training exercises. Failure modes and edge cases were also investigated to guarantee the system’s robustness in less-than-perfect conditions. Iterative revisions were implemented wherever discrepancies or unforeseen behavior were observed, further adding to the overall stability and fidelity of the virtual experience. Throughout this validation process, a “Scenario Validation Checklist” was employed to record scenario quality and support further improvement. This checklist is presented in Appendix A.

4.5. Evaluation

Within the SAFEGUARD project, an extensive user test will be carried out at the pilot stage to determine the effectiveness, usability, and learning value of the developed training scenarios. This test will be important in validating the scenarios implemented using the proposed methodology, rather than the methodology itself, which was validated in depth during the implementation and validation phase. In this setup, we will assess the training goals and how they are achieved through the implemented virtual training platform. Another aim of the evaluation will be to assess the usability of the interactive VR training system and its effectiveness in achieving the predefined training objectives. Specifically, the evaluation will analyze the degree of intuitiveness and user involvement of the system, whether the training simulations are found to be realistic and instructive, and the degree to which users believe the experience contributes to their knowledge of security protocols and threat response methods.
The testing process will include a range of end-users, such as trainees, trainers, and security officers, who will utilize the system in controlled pilot environments. Users will be guided through a carefully chosen set of training scenarios covering a variety of threats. Data will be gathered through systematic observation, recording of user behavior during scenarios, and post-session questionnaires. Feedback will revolve around scenario clarity, interaction ease, system responsiveness, perceived realism, and perceived learning outcomes.
The assessment will utilize a mixed-methods approach, blending quantitative metrics (e.g., task completion rates, error rates, and Likert-scale ratings) with qualitative input (e.g., open-ended survey questions and verbal interviews). In this way, both quantifiable performance measurements and an in-depth examination of user experiences and expectations will be possible. The results from this pilot assessment will feed into the development of the training framework, both in terms of technical refinement and enrichment of content. They will also help define best practices for implementing equivalent training frameworks in other operational contexts, thereby increasing the overall transferability and scalability of the SAFEGUARD methodology.

5. Discussion and Conclusions

This study presents a systematic and modular approach to designing and implementing immersive training environments for preparing individuals to handle complex security situations. Within the SAFEGUARD project, a DT-based virtual training environment has been created, integrating authentic scene digitization, virtual surveillance techniques, human behavior modeling, and scenario scripting to facilitate immersive VR experiences.
Relative to previous work, this effort builds on and extends the foundations established in the field of virtual training for emergency response and threat mitigation through integrated scenario definition, digital modeling, and immersive training. For instance, previous efforts contemplated VR-based emergency training systems for disaster and fire preparedness and stressed the importance of immersive realism for improved learning outcomes [46,47,48,49,50]. These works demonstrated the educational benefits of dynamic virtual environments for security training, focusing on user engagement and retention.
The presented approach exemplifies experiential learning theory [51], immersing first responder trainees in realistic scenarios through the use of VR and DT technologies [52]. At its core, experiential learning embodies the “learning by doing” paradigm, in which individuals actively engage in experiences and then reflect on them to foster knowledge, skills, and attitudes. The VR approach realizes this paradigm by moving beyond traditional training methods, which disseminate information on procedures or potential threats, and instead immersing trainees in high-fidelity, interactive digital replicas of real-world environments.
The presented approach aligns well with the stages of Experiential Learning: Direct Experience, Reflective Observation, Conceptual Understanding, and Active Experimentation. During training, trainees engage with virtual environments mirroring real-world physical spaces such as public venues and sites, directly experiencing the scenarios (Direct Experience), such as scanning buildings, detecting suspicious vehicles, or responding to dangerous crowd dynamics, as described in the use case. Trainees must navigate the scenarios, make decisions, respond to emerging threats, and witness the immediate consequences of their actions in a safe environment. The evaluation, through the validation checklist, involves observation, feedback, and analysis of the trainee’s performance, facilitating the Reflective Observation phase. This allows trainees to understand the practical application of security policies and the importance of responding appropriately to threats in a dynamic context. Based on the outcomes, trainees can form abstract conceptualizations and internalize lessons about effective strategies and critical thinking under pressure. The ability to replay scenarios, potentially with variations, engages trainees in the Active Experimentation phase, reinforcing learned concepts and behaviors and improving their readiness. The methodology focuses on realism, AI-driven human behavior, and interactive digital infrastructure, ensuring these experiences promote greater immersion, deeper learning, and better skill retention than passive methods.
Furthermore, the presented approach positions trainees at the center of complex, realistic situations requiring problem-solving, critical thinking, and decision-making under pressure. Following a problem-based learning approach [53], trainees gain knowledge and skills by working to resolve open-ended problems that simulate real-world challenges, facilitated by VR and DT technologies. Trainees must actively analyze the simulated environment, identify critical information in the scenario’s context (e.g., AI-driven crowd behavior, scan feeds), formulate hypotheses about potential threats, and decide on courses of action. During this process, trainees recall prior knowledge, identify learning gaps, and formulate solutions.
The current methodology adds to prior knowledge by presenting an adaptable framework that can be customized to many different contexts and threat models, rather than a simulation built for one specific purpose. The addition of VH interaction provides a further layer of realism, enabling dynamic and emergent behavior that better replicates crowd action and individual human reactions. The modular structure also allows scalability, making it possible to extend or reconfigure the simulation for new scenarios or updated threat models.
Compared to previous systems, which tended to rely on rigid, predetermined training paths, our system promotes variation in events and user interaction, thereby enabling adaptive training sessions. This follows research [54] identifying dynamic training conditions as enhancing user readiness and situational awareness.
The use of real-time VR rendering also distinguishes this work by delivering a completely immersive experience that is interactive and measurable in terms of learning outcomes. As has been previously reported, embodied cognition and sense of presence play a significant role in the effectiveness of virtual training [10]. This immersion is enabled by our implementation through accurate 3D reconstructions and choreographed training exercises derived from real-world examples.
Another topic of discussion is the use of VR headsets and the associated human factors and ergonomic issues that arise from this technology [55]. In this direction, we stress that alternative display technologies offer promising potential, specifically LED walls and laser projection systems used in room-scale simulators such as CAVEs (Cave Automatic Virtual Environments). These systems allow for immersive visualization through large-scale projections onto surrounding walls or curved screens, providing a more natural field of view and reducing the physical and cognitive burden associated with wearing headsets. Studies have shown that while HMDs offer higher immersion, CAVE systems provide equivalent performance in tasks like distance perception and can reduce physical fatigue (e.g., the weight of an HMD) [56]. These display setups are especially useful in training contexts where multiple trainees need to experience the same environment simultaneously and the use of HMDs is impractical for medical, ergonomic, or accessibility reasons. For first-responder training, immersive CAVE-based simulations have been effectively used in scenarios like mass-casualty incidents, enabling trainees to interact safely with virtual patients and environments while being observed and assessed in real time [57]. Comparisons between CAVE and HMD setups indicate that while both are effective, CAVEs may better support collaboration and accessibility, whereas HMDs excel in portability and individualized interaction [58]. While our present implementation focuses on VR headsets for their portability and interactivity, the underlying simulation platform is compatible with CAVE systems and dome-based projection environments. The Unity-based environment, with its modular rendering pipeline, can be reconfigured to output to multiple external displays or projection surfaces.
In conclusion, this study presents a robust and extensible approach to immersive VR-based training for security and emergency response applications. It advances current research by increasing realism, scalability, and flexibility, and positions itself for follow-on validation through pilot user trials. The insights developed in this implementation have the potential to serve as a template for developing comparable training solutions for a broad range of high-stakes decision-making and coordinated human response applications.

Author Contributions

Conceptualization, N.P. and X.Z.; methodology, N.P. and X.Z.; software, N.P.; validation, M.K.; formal analysis, T.E.; investigation, N.P. and X.Z.; resources, T.E.; data curation, T.E.; writing—original draft preparation, N.P. and M.K.; writing—review and editing, N.P., X.Z., T.E. and M.K.; visualization, N.P. and M.K.; supervision, N.P.; project administration, N.P. and X.Z.; funding acquisition, N.P. and X.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This work has received funding from the European Commission under the program Internal Security Fund (ISF) 2021–2027, specific Action “Support for innovation and new technologies for the protection of public spaces—Innovation PPS II”.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data are available upon request.

Acknowledgments

We would like to thank the anonymous reviewers for their contribution to improving the quality of this manuscript.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
3D: Three-Dimensional
AI: Artificial Intelligence
AR: Augmented Reality
CAVEs: Cave Automatic Virtual Environments
CCTV: Closed-Circuit Television
CPR: Cardiopulmonary Resuscitation
DT: Digital Twin
HMD: Head-Mounted Display
ISF: Internal Security Fund
IV: Intravenous
MoCap: Motion Capture
PPS: Protection of Public Spaces
UI: User Interface
VH: Virtual Human
VR: Virtual Reality

Appendix A. Scenario Validation Checklist

Table A1. The checklist used for scenario validation.
| Category | Validation Criteria | Status | Comments |
| --- | --- | --- | --- |
| Scenario Initialization | Scenario loads correctly and without delay. | | |
| | All relevant assets (models, textures, animations) are properly loaded. | | |
| | The initial camera view and environment state are correct. | | |
| Logic and Trigger Events | All scripted events activate as defined (e.g., security breach, crowd movement). | | |
| | Timing of sequential actions follows the scenario logic. | | |
| | Event triggers are activated under the correct conditions. | | |
| | Scenario reset and replay functions work correctly. | | |
| Virtual Humans Behavior | Locomotion-based agents appear at randomized intervals and locations. | | |
| | Crowd density increases naturally near points of interest. | | |
| | MoCap-driven actors exhibit correct pre-defined behavior (e.g., altercation). | | |
| | Avoidance and collision logic among agents are functioning. | | |
| Spatial and Environmental Consistency | Points of interest are correctly located and accessible. | | |
| | Environmental lighting and audio match the scenario context. | | |
| | All surfaces and interactive areas are navigable. | | |
| Surveillance and Monitoring | Virtual surveillance cameras display correct render feeds. | | |
| | Camera perspectives cover the expected scene areas. | | |
| | Display panels update live camera feeds in real time. | | |
| Performance and Stability | Stable frame rate during scenario execution. | | |
| | No major visual glitches, audio lags, or physics bugs. | | |
| | Low system resource usage, consistent with the intended platform. | | |
| Training Outcomes | All learning objectives for the scenario are met. | | |
| | Assessment or feedback systems (if present) function correctly. | | |
| User Interaction | Navigation, user actions, and decisions are accurately captured. | | |
| | User prompts, guides, or HMD overlays are functional and context-appropriate. | | |

References

  1. Baetzner, A.S.; Wespi, R.; Hill, Y.; Gyllencreutz, L.; Sauter, T.C.; Saveman, B.I.; Mohr, S.; Regal, G.; Wrzus, C.; Frenkel, M.O. Preparing medical first responders for crises: A systematic literature review of disaster training programs and their effectiveness. Scand. J. Trauma Resusc. Emerg. Med. 2022, 30, 76. [Google Scholar] [CrossRef] [PubMed]
  2. Grieves, M.W. Digital twins: Past, present, and future. In The Digital Twin; Springer International Publishing: Cham, Switzerland, 2023; pp. 97–121. [Google Scholar]
  3. Mihai, S.; Yaqoob, M.; Hung, D.V.; Davis, W.; Towakel, P.; Raza, M.; Karamanoglu, M.; Barn, B.; Shetve, D.; Prasad, R.V.; et al. Digital twins: A survey on enabling technologies, challenges, trends and future prospects. IEEE Commun. Surv. Tutor. 2022, 24, 2255–2291. [Google Scholar] [CrossRef]
  4. Glaessgen, E.; Stargel, D. The digital twin paradigm for future NASA and US Air Force vehicles. In Proceedings of the 53rd AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics and Materials Conference 20th AIAA/ASME/AHS Adaptive Structures Conference 14th AIAA, Honolulu, HI, USA, 23–26 April 2012; p. 1818. [Google Scholar]
  5. Tao, F.; Zhang, M.; Nee, A.Y.C. Digital Twin Driven Smart Manufacturing; Academic Press: Cambridge, MA, USA, 2019. [Google Scholar]
  6. Barricelli, B.R.; Casiraghi, E.; Fogli, D. A survey on digital twin: Definitions, characteristics, applications, and design implications. IEEE Access 2019, 7, 167653–167671. [Google Scholar] [CrossRef]
  7. Omrany, H.; Al-Obaidi, K.M.; Ghaffarianhoseini, A.; Chang, R.D.; Park, C.; Rahimian, F. Digital twin technology for education, training and learning in construction industry: Implications for research and practice. Eng. Constr. Archit. Manag. 2025. [Google Scholar] [CrossRef]
  8. Fuller, A.; Fan, Z.; Day, C.; Barlow, C. Digital twin: Enabling technologies, challenges and open research. IEEE Access 2020, 8, 108952–108971. [Google Scholar] [CrossRef]
  9. Stone, R. Virtual reality for interactive training: An industrial practitioner’s viewpoint. Int. J. Hum.-Comput. Stud. 2001, 55, 699–711. [Google Scholar] [CrossRef]
  10. Slater, M.; Sanchez-Vives, M.V. Enhancing our lives with immersive virtual reality. Front. Robot. AI 2016, 3, 74. [Google Scholar] [CrossRef]
  11. Abbas, J.R.; Chu, M.M.; Jeyarajah, C.; Isba, R.; Payton, A.; McGrath, B.; Tolley, N.; Bruce, I. Virtual reality in simulation-based emergency skills training: A systematic review with a narrative synthesis. Resusc. Plus 2023, 16, 100484. [Google Scholar] [CrossRef]
  12. Tate, D.L.; Sibert, L.; King, T. Using virtual environments to train firefighters. IEEE Comput. Graph. Appl. 1997, 17, 23–29. [Google Scholar] [CrossRef]
  13. Lee, J.; Cha, M.; Choi, B.; Kim, T. A team-based firefighter training platform using the virtual environment. In Proceedings of the 9th ACM SIGGRAPH Conference on Virtual-Reality Continuum and Its Applications in Industry, Seoul, Republic of Korea, 12–13 December 2010; pp. 299–302. [Google Scholar]
  14. Schild, J.; Lerner, D.; Misztal, S.; Luiz, T. EPICSAVE—Enhancing vocational training for paramedics with multi-user virtual reality. In Proceedings of the 2018 IEEE 6th International Conference on Serious Games and Applications for Health (SeGAH), Vienna, Austria, 16–18 May 2018; pp. 1–8. [Google Scholar]
  15. Kim, G.J. A SWOT analysis of the field of virtual reality rehabilitation and therapy. Presence 2005, 14, 119–146. [Google Scholar]
  16. Azuma, R.; Baillot, Y.; Behringer, R.; Feiner, S.; Julier, S.; MacIntyre, B. Recent advances in augmented reality. IEEE Comput. Graph. Appl. 2001, 21, 34–47. [Google Scholar] [CrossRef]
  17. Livingston, M.A.; Rosenblum, L.J.; Julier, S.J.; Brown, D.; Baillot, Y.; Swan, J.E.; Gabbard, J.L.; Hix, D. An augmented reality system for military operations in urban terrain. In Proceedings of the Interservice/Industry Training, Simulation, and Education Conference, Orlando, FL, USA, 2–5 December 2002; Volume 89. [Google Scholar]
  18. Munzer, B.W.; Khan, M.M.; Shipman, B.; Mahajan, P. Augmented reality in emergency medicine: A scoping review. J. Med. Internet Res. 2019, 21, e12368. [Google Scholar] [CrossRef] [PubMed]
  19. Klopfer, E.; Squire, K. Environmental Detectives—The development of an augmented reality platform for environmental simulations. Educ. Technol. Res. Dev. 2008, 56, 203–228. [Google Scholar] [CrossRef]
  20. Rudnicka, Z.; Proniewska, K.; Perkins, M.; Pregowska, A. Cardiac Healthcare Digital Twins Supported by Artificial Intelligence-Based Algorithms and Extended Reality—A Systematic Review. Electronics 2024, 13, 866. [Google Scholar] [CrossRef]
  21. Chang, C.W.; Lin, C.W.; Huang, C.Y.; Hsu, C.W.; Sung, H.Y.; Cheng, S.F. Effectiveness of the virtual reality chemical disaster training program in emergency nurses: A quasi experimental study. Nurse Educ. Today 2022, 119, 105613. [Google Scholar] [CrossRef]
  22. Abich, J., IV; Parker, J.; Murphy, J.S.; Eudy, M. A review of the evidence for training effectiveness with virtual reality technology. Virtual Real. 2021, 25, 919–933. [Google Scholar] [CrossRef]
  23. Arthur, W., Jr.; Bennett, W., Jr.; Edens, P.S.; Bell, S.T. Effectiveness of training in organizations: A meta-analysis of design and evaluation features. J. Appl. Psychol. 2003, 88, 234. [Google Scholar] [CrossRef]
  24. Renganayagalu, S.K.; Mallam, S.C.; Nazir, S. Effectiveness of VR head mounted displays in professional training: A systematic review. Technol. Knowl. Learn. 2021, 26, 999–1041. [Google Scholar] [CrossRef]
  25. Strielkowski, W.; Grebennikova, V.; Lisovskiy, A.; Rakhimova, G.; Vasileva, T. AI-driven adaptive learning for sustainable educational transformation. Sustain. Dev. 2024, 33, 1921–1947. [Google Scholar] [CrossRef]
  26. Giessing, L. The potential of virtual reality for police training under stress: A SWOT analysis. In Interventions, Training, and Technologies for Improved Police Well-Being and Performance; IGI Global: Hershey, PA, USA, 2021; pp. 102–124. [Google Scholar]
  27. Van der Meijden, O.A.; Schijven, M.P. The value of haptic feedback in conventional and robot-assisted minimal invasive surgery and virtual reality training: A current review. Surg. Endosc. 2009, 23, 1180–1190. [Google Scholar] [CrossRef]
  28. Nguyen, Q.; Pretolesi, D.; Gallhuber, K. Collaborative scenario builder: A vr co-design tool for medical first responders. In Proceedings of the 2023 ACM Conference on Information Technology for Social Good, Lisbon, Portugal, 6–8 September 2023; pp. 342–350. [Google Scholar]
  29. Eggers, P.; Ward, A.; Ensmann, S. Augmented reality in paramedic training: A formative study. J. Form. Des. Learn. 2020, 4, 17–21. [Google Scholar] [CrossRef]
  30. Papakostas, C.; Troussas, C.; Sgouropoulou, C. Conclusions of AI-Driven AR in Education. In Special Topics in Artificial Intelligence and Augmented Reality: The Case of Spatial Intelligence Enhancement; Springer Nature: Cham, Switzerland, 2024; pp. 157–176. [Google Scholar]
  31. Våpenstad, C.; Hofstad, E.F.; Langø, T.; Mårvik, R.; Chmarra, M.K. Perceiving haptic feedback in virtual reality simulators. Surg. Endosc. 2013, 27, 2391–2397. [Google Scholar] [CrossRef] [PubMed]
  32. Koutitas, G.; Smith, S.; Lawrence, G. Performance evaluation of AR/VR training technologies for EMS first responders. Virtual Real. 2021, 25, 83–94. [Google Scholar] [CrossRef]
  33. Gong, P.; Lu, Y.; Lovreglio, R.; Lv, X.; Chi, Z. Applications and effectiveness of augmented reality in safety training: A systematic literature review and meta-analysis. Saf. Sci. 2024, 178, 106624. [Google Scholar] [CrossRef]
  34. Abbas, Y.; Martinetti, A.; Nizamis, K.; Spoolder, S.; van Dongen, L.A.M. Ready, trainer … one*! discovering the entanglement of adaptive learning with virtual reality in industrial training: A case study. Interact. Learn. Environ. 2021, 31, 3698–3727. [Google Scholar] [CrossRef]
  35. Birzer, M.L. Police training in the 21st century. FBI L. Enforc. Bull. 1999, 68, 16. [Google Scholar]
  36. Zaman, A.A.U.; Abdelaty, A.; Yamany, M.S.; Jacobs, F.; Marzouk, M. Integrating 3D photogrammetry and game engine for construction safety training. Built Environ. Proj. Asset Manag. 2025. [Google Scholar] [CrossRef]
  37. Ma, Y.; Lee, E.W.M.; Yuen, R.K.K. An artificial intelligence-based approach for simulating pedestrian movement. IEEE Trans. Intell. Transp. Syst. 2016, 17, 3159–3170. [Google Scholar] [CrossRef]
  38. PIX4Dcapture. Available online: https://www.pix4d.com/product/pix4dcapture/ (accessed on 17 February 2025).
  39. PIX4Dmapper. Available online: https://www.pix4d.com/discover-pix4dmapper/ (accessed on 17 February 2025).
  40. Pix4Dcapture and Pix4Dmapper—Overview. Available online: https://www.pix4d.com (accessed on 26 June 2025).
  41. AgiSoft Metashape. Available online: https://www.agisoftmetashape.com (accessed on 26 June 2025).
  42. RealityScan. Available online: https://rshelp.capturingreality.com (accessed on 26 June 2025).
  43. PIX4Dmatic. Available online: https://www.pix4d.com/discover-pix4dmatic/ (accessed on 17 February 2025).
  44. Unity NavMesh. Available online: https://docs.unity3d.com/530/Documentation/Manual/nav-BuildingNavMesh.html (accessed on 26 June 2025).
  45. Reynolds, C.W. Flocks, herds and schools: A distributed behavioral model. In Proceedings of the 14th Annual Conference on Computer Graphics and Interactive Techniques, Anaheim, CA, USA, 27–31 July 1987; pp. 25–34. [Google Scholar] [CrossRef]
  46. Lorusso, P.; De Iuliis, M.; Marasco, S.; Domaneschi, M.; Cimellaro, G.P.; Villa, V. Fire emergency evacuation from a school building using an evolutionary virtual reality platform. Buildings 2022, 12, 223. [Google Scholar] [CrossRef]
  47. Qazi, M.H.; Khan, F.; Kim, J.; Rojas-Munoz, E.J. Developing a VR-based Training Platform for Emergency Fire Handling Services Using Unity 3D. In Proceedings of the 2023 International Conference on Frontiers of Information Technology (FIT), Islamabad, Pakistan, 11–12 December 2023; pp. 102–107. [Google Scholar]
  48. Mystakidis, S.; Besharat, J.; Papantzikos, G.; Christopoulos, A.; Stylios, C.; Agorgianitis, S.; Tselentis, D. Design, development, and evaluation of a virtual reality serious game for school fire preparedness training. Educ. Sci. 2022, 12, 281. [Google Scholar] [CrossRef]
  49. Senthil, G.A.; Mathumitha, V.; Prabha, R.; Suganthi, S.; Alagarsamy, M. Simulation on natural disaster fire accident evacuation using augmented virtual reality. In International Conference on Information, Communication and Computing Technology; Springer Nature: Singapore, 2023; pp. 343–360. [Google Scholar]
  50. Calandra, D.; Pratticò, F.G.; Migliorini, M.; Verda, V.; Lamberti, F. A multi-role, multi-user, multi-technology virtual reality-based road tunnel fire simulator for training purposes. In Proceedings of the 16th International Conference on Computer Graphics Theory and Applications (GRAPP 2021), Vienna, Austria, 8–10 February 2021; SciTePress: Setúbal, Portugal, 2021; Volume 1, pp. 96–105. [Google Scholar] [CrossRef]
  51. Kolb, A.Y.; Kolb, D.A. Learning styles and learning spaces: Enhancing experiential learning in higher education. Acad. Manag. Learn. Educ. 2005, 4, 193–212. [Google Scholar] [CrossRef]
  52. David, J.; Lobov, A.; Lanz, M. Learning experiences involving digital twins. In Proceedings of the IECON 2018—44th Annual Conference of the IEEE Industrial Electronics Society, Washington, DC, USA, 21–23 October 2018; pp. 3681–3686. [Google Scholar]
  53. Baden, M.S.; Major, C.H. Foundations of Problem-Based Learning; McGraw-Hill Education: London, UK, 2004. [Google Scholar]
  54. Cohen, D.; Sevdalis, N.; Taylor, D.; Kerr, K.; Heys, M.; Willett, K.; Batrick, N.; Darzi, A. Emergency preparedness in the 21st century: Training and preparation modules in virtual environments. Resuscitation 2013, 84, 78–84. [Google Scholar] [CrossRef] [PubMed]
  55. Chen, Y.; Wang, X.; Xu, H. Human factors/ergonomics evaluation for virtual reality headsets: A review. CCF Trans. Pervasive Comput. Interact. 2021, 3, 99–111. [Google Scholar] [CrossRef]
  56. Leder, R.; Laudan, M. Comparing a VR ship simulator using an HMD with a commercial ship handling simulator in a CAVE setup. In Proceedings of the 23rd International Conference on Harbour, Maritime & Multimodal Logistics Modeling & Simulation, Online, 15–17 September 2021. [Google Scholar]
  57. Wilkerson, W.; Avstreih, D.; Gruppen, L.; Beier, K.P.; Woolliscroft, J. Using immersive simulation for training first responders for mass casualty incidents. Acad. Emerg. Med. 2008, 15, 1152–1159. [Google Scholar] [CrossRef]
  58. Peng, K.; Moussavi, Z.; Karunakaran, K.D.; Borsook, D.; Lesage, F.; Nguyen, D.K. iVR-fNIRS: Studying brain functions in a fully immersive virtual environment. Neurophotonics 2024, 11, 020601. [Google Scholar] [CrossRef]
Figure 1. Proposed methodology.
Figure 2. Flight path and examples of image acquisition.
Figure 3. A sample of indicative sequential photographs from the reconstruction site.
Figure 4. The reconstructed site.
Figure 5. Common reconstruction error in planar surfaces.
Figure 6. Cleaning and removing troublesome structures.
Figure 7. Enhancement using 3D objects.
Figure 8. Mesh structure of the amphitheater.
Figure 9. Screenshots from the amphitheater incident.
Figure 10. Screenshots from the car chase scene.
Figure 11. Screenshots from the execution of the third event scene.
Figure 12. Screenshots from the execution of the fourth event scene.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
