1. Introduction
Modern industrial societies often have critical areas where various cyber–physical systems (CPSs) from different stakeholders must collaborate to ensure intelligent and secure situational awareness operations. Currently, CPSs typically operate within tightly coupled, company-specific silos without sharing information across stakeholder boundaries [1,2].
A port environment exemplifies this scenario, with numerous people, physical objects, and CPSs (covering, e.g., mobility, logistics, energy, and security) operating in parallel with company-specific digital twins. Multiple stakeholders have mobile physical assets within the port area, and each asset affects the safety of the other entities and people present. Thus, situational awareness that transcends stakeholder borders is crucial. Traditionally, public safety and security systems have been kept separate from other industrial systems due to confidentiality concerns. However, in the event of an accident, sharing information between the logistics, mobility, and security CPSs becomes essential. Consequently, data sharing across sectoral boundaries can provide a clearer understanding of the situation and guide public authorities in taking appropriate actions when an accident occurs in the port area.
This research is motivated by the challenges that unexpected events, such as accidents, pose in these complex, multi-stakeholder environments. Simulating an accident in a real system would be costly and unsafe for both people and resources. Therefore, a simulation-based approach was chosen [3]. We selected dataspaces as a solution to address data-sharing requirements in a multi-stakeholder port environment. A dataspace provides a secure, decentralized, interoperable, and trustworthy environment for stakeholders to share data and services under predefined rules and policies [4,5,6,7]. Previously, for example, Rødseth and Berre [8] applied the industrial dataspace (IDS) reference architecture to create a maritime dataspace for exchanging digital ship data among stakeholders. IDS was also utilized to develop a seaport dataspace addressing interoperability and data sovereignty [9,10]. While the literature provides examples of dataspaces improving information sharing in multi-stakeholder settings, there are no known examples of dataspaces used for both daily operations and real-time information sharing during accidents. This research aims to bridge this gap by developing a prototype demonstrating how dataspaces can facilitate data sharing in routine operations and enable real-time information sharing during accidents. To this end, we carried out a simulation-based study on securing data sharing for situational awareness in a port accident case. First, we studied the available concepts and realizations of dataspaces, including the reference architectures of the International Data Spaces Association (IDSA) and the federated and secure data infrastructure framework (Gaia-X). Following the core principles provided by these entities, we developed prototype-level experimental solutions for dataspaces to enable secure data sharing. Verifiable credentials (VCs) are used to ensure that data consumers have the necessary access rights for the data shared by the data producers. Additionally, a 3D virtual digital twin model was developed to visualize situational awareness for people in the port. These solutions were evaluated in a simulation-based execution of an accident scenario in which a forklift catches fire while loading a docked ship in the port.
The rest of this paper is structured as follows: Section 2 explains the research methods, challenges, and prior art. Section 3 discusses simulation-based solutions for secure data sharing and digital twins for situational awareness. Section 4 presents the evaluation of results, and Section 5 offers concluding remarks.
2. Methods, Challenges, and Prior Art
This section clarifies the research methods applied to securing data sharing for situational awareness, followed by an analysis of the challenges of the focused port accident case, a forklift fire during the loading of a docked ship. After that, a discussion of prior art related to securing data sharing for situational awareness is provided.
2.1. Methodology of This Research
A view of the targeted research questions is presented here using the critical port area as an example (Figure 1). The port area typically consists of multiple critical cyber–physical systems (CPSs), each operated by a specific service provider (SP), which in turn provides services for the port operator. The port authority needs a view of situational awareness (SA) over the complete port area. Such a situational awareness process typically consists of the perception of environmental elements and events with respect to time or space, the comprehension of their meaning, and the projection of their future status [11]. Perception is based on information exposed by physical devices such as sensors, cameras, actuators, vehicles, and work machines, referred to here as physical assets (PAs) and typically managed by a specific SP. Each SP has its own view of situational awareness, as depicted in Figure 1. In addition, several people from each SP may operate in the area, and they may have different roles depending on the situation. The main research question concerns these kinds of situations, more specifically, secure data sharing for enabling situational awareness of such a critical CPS area during unexpected accidents or attack-type events. When an event occurs, emergency service providers, such as rescue teams and police, must have real-time situational awareness of the critical port area. This is not possible without secure data-sharing capabilities between the stakeholders involved.
The applied research method is experimental and was carried out in a step-by-step manner. First, the challenges of secure data sharing for situational awareness in a port accident case were analyzed. Next, an analysis of prior-art solutions was carried out as a process parallel to the development. Based on this information, a simulation-based approach was adopted, and prototyping of the solutions potentially needed for secure data sharing was started. The simulation was necessary due to the complexity, the number of stakeholders and physical assets, and the dynamic nature of the operation in the accident case. The prototyping focused on the critical parts of the solutions because it was not possible to implement full solutions or to include all the real entities in the study. In addition, causing and controlling an accident in a real system would be very expensive and unsafe for people and resources.
2.2. Challenges of a Port Accident Case
The targeted port accident scenario involves a forklift catching fire while loading a docked ship. In such a situation, stakeholders in the area may include the port authority, security operators, ship brokers/agents, vessel traffic services, and emergency service providers like rescue command and control. The port authority is aware of the buildings and their storage status within the port area, monitors live camera feeds from fixed or mobile surveillance cameras, and has maintenance crews operating in the area. Security operators track the locations of employees and vehicles using access control data for the port area and have guards available to assist in evacuation situations. The port operator, responsible for loading and unloading ships, uses machinery such as cranes and forklifts and employs stevedores working on the ships. The ship broker/agent represents the ship operator during the port visit and manages the cargo content and ship personnel. Vessel traffic services inform ships and the port operator/authority about the situation and movements in the port area. Additionally, there can be piloting and transportation services, including trucks and trains.
The targeted scenario is referred to as the “forklift catching fire while loading a docked ship”. Figure 2 illustrates the sequence of events, showing how and when stakeholders exchange data among themselves.
When a forklift catches fire while handling cargo in the ship’s cargo bay, the port operator raises the alarm (Emergency Level 1, EL1) and sends the information to a message hub.
The message hub validates the credentials of the port operator and, after successful validation, publishes the received message to all relevant subscribers, including the port authority, port security, and ship broker. Upon receiving the message, the security operator sends personnel to the accident site to evacuate the area and help extinguish the fire. The ship broker contacts the ship and shares important information, such as a list of crew members working near the accident site. The port authority begins monitoring the situation.
The port operator raises the emergency level from EL1 to EL2 and forwards this information to the message hub, which then disseminates it to the relevant subscribers. Upon receiving the latest information, all relevant stakeholders take various measures and prepare for the arrival of emergency services by preparing data (e.g., the location of their personnel and vehicles).
Rescue Command & Control (R C&C) arrives on the scene and assesses the situation. It receives the required information from various stakeholders at the port, providing an up-to-date view of the situational awareness in the port. Assessing the severity of the fire, the R C&C raises the emergency level from EL2 to EL3 and sends this update to the message hub. Upon receiving the information about the raised emergency level, all relevant stakeholders grant the required access to data and services to the R C&C.
As the fire intensifies, the R C&C evacuates the area within a certain range from the ship by raising the emergency level to EL4. The port authority updates the map with no-go zones. The port authority, along with other parties, starts sharing their people and vehicle location data with the security operator. The security operator assists in the evacuation and monitors the evacuation area via the location data of people and vehicles.
Firefighters bring the fire under control, and a wide evacuation is no longer necessary. R C&C lowers the emergency level to EL3. The port authority updates the map accordingly and removes location data access from the security operator. All other parties also remove location data access from the security operator.
Firefighters extinguish the fire. Water puddles, debris, and the smell of smoke remain in the area. R C&C lowers the emergency level to EL1. The port authority updates the map accordingly and removes camera and drone access from R C&C. All stakeholders remove location data access from R C&C.
After cleanup and ventilation, the port operator determines that normal operation can resume and lowers the emergency level to EL0. All stakeholders return to normal operations.
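To make the escalation mechanics concrete, the sketch below models the emergency levels and the message a stakeholder publishes to the hub when raising or lowering the level. This is a minimal illustration in C# (the prototype's implementation language); the type, field, and example values are hypothetical and not taken from the actual system.

```csharp
using System;

// Hypothetical types illustrating the emergency-level protocol described above;
// the names are illustrative and not taken from the actual prototype.
public enum EmergencyLevel { EL0, EL1, EL2, EL3, EL4 }

public record EmergencyLevelChange(
    string Sender,                 // e.g., "port-operator" or "rescue-command-and-control"
    EmergencyLevel PreviousLevel,
    EmergencyLevel NewLevel,
    DateTimeOffset Timestamp,
    string Description);           // e.g., "Forklift fire in the ship's cargo bay"

// In the scenario, the message hub validates the sender's credentials and, on
// success, publishes the change to all relevant subscribers, which then grant
// or revoke data access according to the new level.
```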
The example port accident case highlights the need for secure data sharing to enable proper situational awareness for the stakeholders according to their role and the emergency level. When examining these requirements from a technical point of view, a set of specific technical preconditions can be identified. These include the need for trustworthy identities for stakeholders, people, and physical assets (requirement 1, R1); for controlling access rights to the information (e.g., locations, presence, and camera feeds) on which situational awareness relies (R2); for granting rights to control certain objects (e.g., drones and camera directions) (R3); for displaying the area and its status on a map or a respective visualization (R4); and for the ability to change any of the mentioned access rights at any time during the operation in a controlled manner (R5). Finally, when any changes to access rights are made, they must be adopted immediately by all stakeholders (R6).
2.3. A Discussion on Prior Art
The discussion on prior art is divided into discussions of technologies related to situational awareness, data sharing, identity management, and digital twins. After that, we provide an overview of prior art in the seaport context and summarize the novelty of our contribution.
2.3.1. Situational Awareness
In the information and communication technology domain, attaining situational awareness is a complex process, and all stages—perception, comprehension, and projection—require different sets of tools and technologies. Moreover, careful orchestration of all stages is important to understand the situation in a specific context.
Perception
At the perception stage, raw data are usually gathered from various sensors, systems, and devices [12,13]. When it comes to collecting a multitude of data from various sources, the Internet of Things (IoT) plays a key role [14]. Data are shared between devices and systems using network technologies such as 2G/3G/4G/5G, Wi-Fi, Zigbee, Bluetooth, or radio [14,15]. The selection of network technology depends on factors such as device type, geographical location, and power consumption. In addition to network technologies, application protocols are required to format and prepare data to make them transportable. Commonly used application protocols include the Advanced Message Queuing Protocol (AMQP), the Constrained Application Protocol (CoAP), Message Queuing Telemetry Transport (MQTT), the Extensible Messaging and Presence Protocol (XMPP), and Representational State Transfer (REST) [14,15]. The next stage involves data handling and processing, which is usually managed by a gateway or a message broker. The message broker ensures that all data packets coming from several sources are handled seamlessly for further processing and persistence.
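As an illustration of the perception stage, the following hedged sketch publishes a location reading over MQTT using the open-source MQTTnet library for C#, assuming its version 3 style API (namespaces and signatures differ between library versions); the broker address, topic, and payload shape are placeholders:

```csharp
using System.Text.Json;
using System.Threading;
using System.Threading.Tasks;
using MQTTnet;
using MQTTnet.Client;
using MQTTnet.Client.Options;

class SensorPublisher
{
    static async Task Main()
    {
        // Connect to an MQTT broker (placeholder address and port).
        var client = new MqttFactory().CreateMqttClient();
        var options = new MqttClientOptionsBuilder()
            .WithTcpServer("broker.example.org", 1883)
            .Build();
        await client.ConnectAsync(options, CancellationToken.None);

        // Publish a JSON-encoded location reading on a placeholder topic.
        var payload = JsonSerializer.Serialize(
            new { assetId = "forklift-07", lat = 65.01, lon = 25.47 });
        var message = new MqttApplicationMessageBuilder()
            .WithTopic("port/assets/forklift-07/location")
            .WithPayload(payload)
            .Build();
        await client.PublishAsync(message, CancellationToken.None);
    }
}
```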
Comprehension
In the comprehension stage, the data collected at the perception stage are further processed to clean up noise, handle missing data, and apply data fusion techniques to integrate data about the same aspect coming from various sources [12,16]. The processed data become information, creating a holistic picture of the situation at a specific point in time and determining the significance of the data collected at the perception stage [13]. The comprehension stage thus facilitates decision making and the actions needed in the current situation [13,16]. Data analysis and processing require a comprehensive platform that can handle a large amount of data. Some commonly used data analytics and processing systems include Apache Spark [17], Apache Flink [18], Apache Storm [19], InfluxDB and Kapacitor [20], the Esper CEP engine [21], Azure Stream Analytics [22], and AWS IoT Analytics [23].
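A minimal example of the comprehension stage is fusing location readings about the same physical asset reported by several sources. The sketch below, in plain C# with hypothetical type names, groups recent readings per asset and averages their coordinates; real systems would apply the far more sophisticated fusion techniques and platforms cited above:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Illustrative record for a location reading; the type names are hypothetical.
public record LocationReading(string AssetId, string Source, double Lat, double Lon, DateTimeOffset Time);

public static class LocationFusion
{
    // A very simple fusion step: keep only recent readings, group readings
    // about the same asset from different sources, and average their coordinates.
    public static IEnumerable<LocationReading> Fuse(IEnumerable<LocationReading> readings, TimeSpan window)
    {
        var cutoff = DateTimeOffset.UtcNow - window;
        return readings
            .Where(r => r.Time >= cutoff)           // drop stale data
            .GroupBy(r => r.AssetId)
            .Select(g => new LocationReading(
                g.Key,
                "fused",
                g.Average(r => r.Lat),
                g.Average(r => r.Lon),
                g.Max(r => r.Time)));
    }
}
```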
Projection
The projection stage involves predicting future events or situations by combining current and past data/information and transforming them into actionable knowledge [13,16]. Artificial intelligence (AI) plays a significant role at this stage, and several AI methods, frameworks, and platforms are available. Generally, the type of data and the prediction scenario are the main driving factors in selecting appropriate AI tools. Some commonly known AI tools are Bayesian networks [24], hidden Markov models (HMMs) [25], neural networks, and convolutional neural networks [26]. Moreover, presenting the analyzed data in an easy-to-understand format is equally important for timely decision making. A carefully designed dashboard is an efficient data visualization tool that enables understanding of the results generated by the AI tools [27]. Several proprietary, freemium, and open-source solutions are available for developing state-of-the-art dashboards with advanced visualization techniques, for example, Microsoft Power BI [28], Tableau, Sisense [29], Apache Superset [30], ArcGIS [31], Google Data Studio [32], D3 (Data-Driven Documents) [33], and Apache ECharts [34].
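As a deliberately simple illustration of projection, the sketch below extrapolates an asset's future position linearly from its last two observations (dead reckoning). Production systems would use the AI methods listed above; the types here are hypothetical:

```csharp
using System;

// Illustrative observation type; names are hypothetical.
public record Observation(double Lat, double Lon, DateTimeOffset Time);

public static class Projection
{
    // Linearly extrapolate an asset's position from its last two observations.
    public static Observation Project(Observation previous, Observation latest, TimeSpan ahead)
    {
        var dt = (latest.Time - previous.Time).TotalSeconds;
        if (dt <= 0) return latest;                  // cannot infer a velocity
        var vLat = (latest.Lat - previous.Lat) / dt; // degrees per second
        var vLon = (latest.Lon - previous.Lon) / dt;
        var t = ahead.TotalSeconds;
        return new Observation(latest.Lat + vLat * t, latest.Lon + vLon * t, latest.Time + ahead);
    }
}
```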
2.3.2. Data Sharing
In the current age of technology, organizations generate and store large amounts of data. Combining data from different sources can offer opportunities for various innovations [4,5]. However, when it comes to data sharing, organizations tend to be reluctant since they would like to maintain control of their data [5].
Electronic data interchange (EDI) is one way in which companies and businesses can share data among themselves using a standard format. EDI facilitates business-to-business communication and replaces paper-based documents, such as purchase orders and invoices, with electronic documents. There are two fundamental ways in which data are exchanged between two entities. The first is a point-to-point connection between two dedicated machines or computer systems, and the second is a value-added network (VAN), which is often provided by a third party. These approaches enable the secure exchange of data between organizations. While EDI provides an efficient way to exchange data between organizations, it is not as flexible as approaches based on Representational State Transfer (REST) [35,36,37].
REST is used to design and develop RESTful application programming interfaces (APIs), which allow businesses to exchange heterogeneous data with end-user applications or other businesses. RESTful APIs offer different endpoints that can be invoked to perform data transactions, such as sending new data and updating or deleting existing data. The JSON (JavaScript Object Notation) format has become the industry standard for exchanging data. A JSON document is a popular choice because it is not only machine-processable but also human-readable [38].
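For illustration, a RESTful endpoint exchanging JSON can be sketched in a few lines with ASP.NET Core minimal APIs (the .NET 6 framework used elsewhere in this work); the route and record below are hypothetical examples, not the prototype's actual API:

```csharp
// Minimal ASP.NET Core (.NET 6) program; requires a web project template.
var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();

// GET returns the record serialized to JSON via System.Text.Json.
app.MapGet("/api/assets/{id}", (string id) =>
    Results.Ok(new Asset(id, "forklift", 65.01, 25.47)));

// POST accepts a JSON body deserialized into the Asset record.
app.MapPost("/api/assets", (Asset asset) =>
    Results.Created($"/api/assets/{asset.Id}", asset));

app.Run();

// Hypothetical resource type used by the endpoints above.
record Asset(string Id, string Type, double Lat, double Lon);
```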
In the IoT domain, entities or organizations use different network and application protocols to transmit data among various applications. Sharing data with other organizations is often challenging due to incompatible data formats and standards. This heterogeneity of data makes it difficult to share data among different organizations and across various domains. In such a scenario, RESTful APIs facilitate the creation of an abstraction layer, or a web service, that enables interoperability by homogenizing data formats and allowing services to exchange data efficiently and securely. Thus, RESTful APIs enable the creation of the Web of Things (WoT), where applications and services exchange data in a standardized way regardless of the underlying technologies used to generate the data [39].
One web service usually handles a specific business domain, and when multiple web services must be accessed to accomplish a task, it often becomes difficult to keep track of and document which web service provides which data. In this case, a microservice architecture becomes vital, in which monolithic services and applications are divided into smaller ones [40]. A centralized gateway service handles all incoming requests and redirects them to the specific web service that can provide the requested data or process the information. Thus, client applications communicate only with the gateway service. Although a microservice architecture handles many complexities, maintaining the security of complex distributed and interconnected services becomes challenging, and additional security arrangements are required to keep all microservices secure [41,42].
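The gateway pattern can be sketched as a minimal ASP.NET Core service that maps a route prefix to a backend microservice and forwards the request. The service names and addresses below are placeholders; a production gateway would also handle authentication, retries, and load balancing:

```csharp
// Requires a .NET 6 ASP.NET Core web project (implicit usings enabled).
var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();

// Route prefix -> backend microservice (placeholder addresses).
var routes = new Dictionary<string, string>
{
    ["assets"] = "http://assets-service:5001",
    ["alerts"] = "http://alerts-service:5002"
};
var http = new HttpClient();

// Forward GET requests such as /assets/forklift-07 to the matching backend.
app.MapGet("/{service}/{**rest}", async (string service, string rest) =>
{
    if (!routes.TryGetValue(service, out var backend))
        return Results.NotFound();
    var body = await http.GetStringAsync($"{backend}/{rest}");
    return Results.Content(body, "application/json");
});

app.Run();
```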
The International Data Spaces Association (IDSA) has developed a reference architecture to facilitate the implementation of dataspaces for sovereign data exchange. Gaia-X is another initiative that defines a detailed reference framework for dataspaces. In comparison to IDSA, Gaia-X takes a broader approach, emphasizing not only sovereign data exchange but also providing guidelines for data storage on cloud platforms [4]. In these approaches, a dataspace facilitates data sharing among data producers and consumers in a sovereign manner. It provides a secure, decentralized, interoperable, and trustworthy environment for participants to share data and services under predefined rules and policies [4,5,6,7]. A typical dataspace consists of the following core components (Figure 3):
Dataspace governance rules and policies [6]: every participant in a dataspace should adhere to these rules and policies.
Common schema to describe participants and their offerings (self-descriptions) [6]: this ensures interoperability.
Use of verifiable credentials to identify, verify, and authorize participants and their offerings [6,7].
Federation services [7]: these provide authentication and authorization solutions, software components for data exchange, a logging service, and a federated catalog of the data offerings available from dataspace participants.
Trust anchors [6,7]: as part of dataspace governance, members jointly agree on a set of external authorities that are considered trustworthy. Thus, only those participants, data, or services recognized by one of the defined trust anchors are allowed to become part of the dataspace.
The provider and consumer share data in a peer-to-peer manner under the governance of the dataspace ecosystem. Figure 4 depicts a typical data-sharing scenario. The provider can define a data usage policy (which defines obligations and prohibitions), and the consumer can define a data search policy (e.g., search only specific types of data provided by specific providers). The connectors at both ends are responsible for enabling the actual data sharing and ensuring automated enforcement of the policies. The logging functionality, usually provided by the federation services, records what is being shared, with whom, and under which conditions. The logging records can be used for conflict resolution between the data producer and consumer.
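The policy checks that the connectors enforce can be illustrated with a small sketch. The records below are hypothetical simplifications; real IDS/Gaia-X connectors express usage policies in richer policy languages (e.g., ODRL) and enforce them automatically:

```csharp
using System;

// Illustrative, heavily simplified policy objects for the scenario in Figure 4.
public record DataUsagePolicy(
    string AllowedConsumer,       // who may receive the data
    DateTimeOffset ValidUntil,    // obligation: access expires at this time
    int MinimumEmergencyLevel);   // only shared at this emergency level or above

public record DataRequest(string Consumer, DateTimeOffset RequestedAt, int CurrentEmergencyLevel);

public static class ProviderConnector
{
    // The provider-side connector evaluates the usage policy before any data
    // leave the provider, enabling automated policy enforcement.
    public static bool MayShare(DataUsagePolicy policy, DataRequest request) =>
        request.Consumer == policy.AllowedConsumer
        && request.RequestedAt <= policy.ValidUntil
        && request.CurrentEmergencyLevel >= policy.MinimumEmergencyLevel;
}
```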
2.3.3. Identity Management Technologies
A flexible and secure verifiable credentials service is a vital component for ensuring sovereign data exchange and protecting the identities of stakeholders, services, and service providers. Therefore, a comparison was conducted to find out which identity infrastructure suited our scenario best. A review and comparison of different identity infrastructures are shown in Table 1 and Table 2. The comparison focused on the technical viewpoint, covering, for example, programming language support and a wide range of features. After the comparison, we found the following two criteria to be the most important ones for our case:
Functionality to issue and verify credentials not only manually by using a web application but also in an automated manner by using web APIs.
Functionality to store issued credentials in a wallet. The wallet can be accessed not only by applications but also through web APIs. Therefore, application services can store and present credentials without any human intervention, enabling independent service-to-service communication.
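Both criteria revolve around handling credentials programmatically. For orientation, the sketch below constructs a document with the general shape of a W3C verifiable credential in C#; it illustrates the data model only and is not the API of any of the compared products (all values are placeholders, and the real `@context` key is simplified to a plain property name):

```csharp
using System;
using System.Text.Json;

// Build a document with the general shape of a W3C verifiable credential.
// Illustrative only: a real credential is issued and cryptographically signed
// by an issuer service, which also adds a proof section.
var credential = new
{
    context = new[] { "https://www.w3.org/2018/credentials/v1" },
    type = new[] { "VerifiableCredential", "DataspaceMembershipCredential" }, // membership type is hypothetical
    issuer = "did:example:dataspace-authority",                               // a decentralized identifier (DID)
    issuanceDate = DateTimeOffset.UtcNow.ToString("o"),
    credentialSubject = new
    {
        id = "did:example:port-operator",
        role = "PortOperator"
    }
};

Console.WriteLine(JsonSerializer.Serialize(credential, new JsonSerializerOptions { WriteIndented = true }));
```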
At first, we explored the offerings of the Microsoft Entra Verified ID service, available in the Microsoft Azure ecosystem as a cloud service. The service meets the general criteria for issuing, verifying, presenting, and storing verifiable credentials. However, it is focused on meeting the needs of end users and does not offer web APIs for tasks related to verifiable credentials. Since the service cannot be used in an automated manner and always requires manual human interaction, Microsoft Entra Verified ID was not selected for use in our prototype [43].
The second option we explored was an open-source solution called walt.id. Although this service meets our two main criteria, it requires setting up a server with the specific installations, configurations, and databases necessary for the walt.id service. We decided not to proceed with this solution due to the complexities involved in setting up a dedicated server environment for walt.id, which was not feasible within the limited timeframe we had to develop a full prototype [44,45].
Finally, we evaluated the Trinsic platform, which meets our criteria in every aspect and is also well matched with our development environment. The Trinsic ecosystem provides software development kits (SDKs) for various development environments, from which the Trinsic .NET SDK was selected. Although Trinsic is a paid cloud service, it offers a free plan that allows the creation of an ecosystem with ten wallets. Since our prototype does not require hundreds of wallets, we selected the Trinsic free plan to develop the verifiable credentials service and integrated it into our prototype [46].
In their conference paper “A Study of Lightweight DDDAS Architecture for Real-Time Public Safety Applications Through Hybrid Simulation”, Blasch et al. (2019) discussed implementing a blockchain-enabled security service to maintain secure identity management and secure data access and sharing [47]. This service uses a virtual ID (VID) for each entity on the blockchain, with smart contracts facilitating identity authentication and access control policies [47]. Their approach emphasizes a permissioned blockchain network to ensure higher efficiency and security by limiting participants and clearly defining security policies.
Another solution to identity management in a context similar to our case was introduced by Taruly Saragih et al. (2019) in their conference paper “The Use of Blockchain for Digital Identity Management in Healthcare” [48]. Their approach to securing identity management and access control in such a critical environment also uses blockchain solutions as a foundation but harnesses the power of self-sovereign identity (SSI) [48] to make the identity management process more efficient.
Even though Trinsic as a solution is also blockchain-based, our implementation of secure identity management and secured data access and sharing for a hybrid simulation is different. Trinsic as an ecosystem enables the use of verifiable credentials (VCs), wallets, and decentralized identifiers (DIDs) [46] for a more future-proof, universal, and standardized approach to secure identity management and access control. Unlike the method described by Blasch et al., which relies on separate microservices for security tasks [47], Trinsic integrates identity management and data sharing more seamlessly. Although Trinsic also follows SSI principles, it manages all the needed identity management tools in a single ecosystem, making the process more efficient, cost-effective, and manageable. This differentiates our approach from previous identity management solutions in hybrid simulation scenarios and in similar critical environments.
2.3.4. Digital Twin Technologies
Modern digital twin (DT) solutions have become important tools in various industries, offering a sophisticated approach to understanding, monitoring, and optimizing physical entities and processes. These solutions leverage real-time data from sensors and IoT devices to create virtual replicas that closely mirror their real-world counterparts. One significant benefit lies in their ability to simulate complex scenarios within the virtual space, allowing organizations to test and refine processes without real-world risks. This capability is particularly impactful in manufacturing, healthcare, smart cities, aerospace, and other sectors.
One of the advantages of modern digital twin solutions is their contribution to informed decision making. By providing a detailed representation of physical systems, organizations can gain insights into performance, efficiency, and potential challenges. This informed decision making extends across various stages, from the initial design phase to ongoing operations and maintenance. Additionally, digital twins enable a proactive approach to problem solving by predicting issues before they occur, reducing downtime, improving reliability, and optimizing resource utilization.
Digital twins are also useful for improving operational efficiency. Organizations can use virtual replicas to identify bottlenecks, streamline processes, and enhance overall productivity. In manufacturing, for example, digital twins support predictive maintenance, ensuring that equipment is serviced precisely when needed, minimizing disruptions, and extending the lifespan of machinery. This efficiency extends to other sectors as well, contributing to cost savings and resource optimization.
Another advantage of modern digital twin solutions is their role in fostering innovation. By providing a platform for continuous testing and refinement, organizations can experiment with new ideas and technologies in a controlled environment. This accelerates the innovation cycle and encourages a culture of continuous improvement. In the context of smart cities, for instance, digital twins allow urban planners to explore and implement innovative solutions for infrastructure, transportation, and energy management.
Overall, modern digital twin solutions empower organizations to adapt to an increasingly complex and dynamic world. From improving decision making and operational efficiency to fostering innovation, the benefits are diverse and far-reaching. As technology continues to advance, the potential applications of digital twins are likely to expand, contributing to increased sustainability, resilience, and competitiveness across various industries.
When discussing realizations of digital twins, 3D-type models, for example, have been designed to help visualize the real world [49]. In this research, we utilized Unity because it offers a robust platform for efficient 3D simulation development. However, users typically also benefit from traditional two-dimensional supervisory views, e.g., of object locations, movements, and data. For example, a dashboard to support urban mobility management and decision making has been developed [50]. The dashboard presents diverse spatial data by combining data collected by IoT sensors with external data services. The dashboard frontend was implemented with the TypeScript-based Angular framework, and the backend was implemented with the Java Spring framework, a PostgreSQL database with the PostGIS extension, and GeoServer for geospatial data processing. In our work, the dashboard is intended to provide situational awareness of the port area by presenting the location data of entities in the area; thus, there is no need for such complex spatial data processing. Another example is described in [51], where a dashboard was developed for supporting real-time communications between emergency first-response personnel across multiple incident scenes. Its frontend is implemented using the Bootstrap framework and HTML 5, and its backend is implemented with JavaScript, a MySQL database, AJAX, PHP, and jQuery. We applied the same type of approach and have a real-time updated map with markers and information linked to them. A third example is Navigator, a web-based application for the monitoring and visualization of cooperative missions of unmanned vehicles [52]. The application combines and presents the environmental, geographical, and navigational data needed for autonomous fleet monitoring for research purposes.
2.3.5. Prior Art in Seaports Context
Seaports are rapidly evolving into smart ports [9], with cyber–physical systems (CPSs) playing a crucial role in automating seaports by linking physical devices with digital systems [10]. This transformation introduces new challenges, including the coordination of information sharing within the complex, multi-stakeholder port environment, especially during accidents [53,54]. Various barriers impede information sharing, such as organizational silos, privacy issues, security concerns, trust, system complexity, cost, data quality, and data sovereignty [8,9,10,53,54,55].
The concept of a dataspace is emerging as a solution to these challenges in the multi-stakeholder environment. Rødseth and Berre [8] applied the principles of the industrial dataspace (IDS) reference architecture to create a maritime dataspace for exchanging digital ship data among stakeholders. Similarly, Sarabia-Jácome et al. [9,10] utilized IDS to develop a seaport dataspace addressing interoperability and data sovereignty issues. Olesen et al. [55] suggested that a specialized shared ICT system could help mitigate collaboration issues within the port environment. While the literature provides examples of dataspaces improving information sharing in multi-stakeholder settings, there are, to the authors' knowledge, no examples of dataspaces used for both daily operations and information sharing during accidents. This research aims to bridge this gap by developing a prototype demonstrating how dataspaces can facilitate data sharing in routine operations and enable online information sharing during accidents.
4. Evaluation Results
4.1. Simulation Set-Up for the Evaluation
A view of the simulation set-up developed for validation is depicted in Figure 23. To simulate the port accident scenario (see Section 2.2), a dataspace was developed by implementing simulated systems for the five core stakeholders: the port authority, port operator, port security, ship broker, and emergency services (rescue command and control) (see Section 3.2). Moreover, dataspace federation services were implemented to handle the identity service, the member registry service, the data catalog, and the message broker hub for sharing data among the stakeholders and the digital twin services.
4.2. Evaluation of the Dataspace Solution
Multiple applications were developed to create the dataspace prototype, covering the main stakeholders of the port accident case. All core components were developed using the Microsoft C# programming language and the .NET 6 framework. This combination of technologies enabled the development of a robust, fast, and secure application. Moreover, the open-source solution PostgreSQL was selected to facilitate the development of an advanced relational database with high data consistency and integrity. All selected technologies proved to fulfil their intended purposes.
The realized dataspace federation services comprise the dataspace membership registry, a verifiable credentials service, a compliance service, a federated data catalog service, and a message broker service. No logging service was implemented because of time constraints.
The dataspace solution developed for realizing data sharing between stakeholders proved to work quite well. However, the stakeholders and their shared data were simulated, and no real service systems were included. In addition, essential simulations of the stakeholders' situational awareness processes could not be realized because of time constraints, and the security analysis of information processing within the dataspace remains largely a topic for future research.
The solutions for identity management were created relying on the use of verifiable credentials and wallet applications. After an analysis of various identity infrastructures, Trinsic technology was selected as the basis of the solution. It enabled the use of digital wallets and verifiable credentials within the dataspace, allowing us to create a secure and efficient environment for data sharing and a good foundation for secure identity management. Configuring the Trinsic ecosystem made it possible to authorize users to participate in the dataspace and to use specific actions based on verifiable credentials. One key advantage of Trinsic is its user-centric approach, allowing individuals to manage and control the information they share with the dataspace themselves, ensuring a personalized and privacy-focused experience for our users. Overall, the Trinsic ecosystem proved to be an efficient solution for our dataspace implementation: using verifiable credentials as the authorization method provided the flexibility needed for efficient control over user access as well as over the usage of specific actions inside the dataspace.
As we recognized the need for an authorization mechanism across multiple applications within the dataspace, we integrated OpenID Connect (OIDC) as the preferred method for both logging in and authorizing users, as Trinsic provided the needed support for the implementation. This integration enhances the efficiency of authorization and strengthens the security of the user experience: OIDC securely manages user logins and their authorization to the applications, ensuring a seamless and secure authorization process across all applications. A sample dataspace membership-related verifiable credential within the OIDC login endpoint is depicted in Figure 24.
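To indicate what such an integration involves, the following hedged sketch wires OpenID Connect login into an ASP.NET Core application using the standard Microsoft.AspNetCore.Authentication.OpenIdConnect package; the authority and client values are placeholders and do not reflect the actual Trinsic configuration:

```csharp
using Microsoft.AspNetCore.Authentication.Cookies;
using Microsoft.AspNetCore.Authentication.OpenIdConnect;

var builder = WebApplication.CreateBuilder(args);

builder.Services
    .AddAuthentication(options =>
    {
        options.DefaultScheme = CookieAuthenticationDefaults.AuthenticationScheme;
        options.DefaultChallengeScheme = OpenIdConnectDefaults.AuthenticationScheme;
    })
    .AddCookie()
    .AddOpenIdConnect(options =>
    {
        options.Authority = "https://idp.example.org"; // placeholder identity provider
        options.ClientId = "dataspace-dashboard";      // placeholder client id
        options.ClientSecret = "<secret>";
        options.ResponseType = "code";                 // authorization code flow
        options.Scope.Add("openid");
        options.Scope.Add("profile");
    });

var app = builder.Build();
app.UseAuthentication();
app.UseAuthorization();
app.Run();
```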
4.3. Evaluation of the Digital Twin Solutions
Unity was chosen as the development platform of our simulation environment due to its versatility and its capability to provide all the essential features required for developing the scenario. In addition, the extensive set of library assets available in the Unity Asset Store proved to be very valuable during the prototyping process. The use of these packages saved time and added further functionality to the simulation environment with little effort; for example, packages for controlling the player camera or a fully functional third-person character controller can be imported from the Unity registry. In addition, choosing Unity gave us access to Unity Game Services, which were a great help during the development process, namely Unity-compatible version control and a built-in networking solution for prototyping a possible multiplayer experience.
In the end, the virtual simulation environment, despite its relatively basic features, effectively served as a viable proof of concept within the confines of our project’s time and resource limitations. While the result appeared minimalistic regarding the fidelity of the environment and user experience, it satisfactorily demonstrated the core functionalities and feasibility of the envisioned concept. Despite its barebones nature, the simulation reliably executed its intended tasks of altering data access between stakeholders according to the emergency level and accurately sharing location data with the dashboard, highlighting the fundamental mechanics and interactions envisioned for the final product. Considering the constraints faced during development, the simulation’s ability to function as intended underscored its success as a foundational prototype, validating the concept’s viability and serving as a springboard for potential future development.
Although the current iteration of the environment operates independently from the broader intended dataspace and the stakeholders within, integration of the simulator with the dedicated dashboard application was also tested during development. This was performed to establish the potential for seamless integration between the simulation environment and the wider, overarching dataspace. Despite their initial disconnect, the successful connection with the dashboard showcased the future possibilities for a cohesive ecosystem, where real-time data and simulations could converge, fostering a more comprehensive and interconnected system in line with our project objectives.
During the earlier stages of development, testing of possible multiplayer functionalities was also successfully conducted, demonstrating the capacity for multiple clients to interact within the simulation environment. This was performed using Unity's built-in Netcode for GameObjects package in connection with the Unity Game Services infrastructure. However, for the sake of expediency and simplification in the final phases, the solution underwent testing focused solely on a single client. This decision allowed for a more streamlined evaluation process, ensuring that the core functionalities and the essential mechanics of the simulation remained stable and functional. While the final testing concentrated on a single-client set-up, the successful earlier tests with multiple clients provided confidence in the scalability and potential for future expansion toward a multiplayer-oriented environment.
Overall, we feel that the simulation environment opens intriguing possibilities for training rescue personnel. Its integration with an extensive dataspace could allow for both fully virtual and hybrid-style exercises, offering a spectrum of training scenarios for rescue workers and overall crisis control. The platform’s potential lies in its capacity to simulate realistic situations, providing a controlled space for skill development and decision-making.
The supervisory dashboard solution was implemented using ASP.NET technology. The creation of a new project with it was fast and effortless, and it was easy to start building a new web app on top of the sample application. In this framework, web pages are created with HTML and CSS, and scripts are written in C#.

Different ways to import data from the simulation into the dashboard were prototyped. In one prototype, the dashboard application had a database to which the simulation added objects and their data via an API; the dashboard then read the data from the database and drew items on the map. There were problems with reading the updated data from the database in real time, so we looked for a different solution. We ended up sending the data from the simulation to the dashboard with SignalR. It is part of the ASP.NET framework, so no third-party libraries were required, and the implementation was straightforward. SignalR enabled real-time data updates from the simulation to the dashboard, which is an essential feature in this kind of application.

GeoBlazor (version 2.3.3) was a perfect solution for the map creation needs of this project. GeoBlazor has a free version and a paid premium version (currently in beta), but the free version had all the features needed in this application. It was easy to adopt, as the documentation and samples were excellent. GeoBlazor is still under active development and contains some faults and bugs; luckily, no application-breaking bugs were encountered during development. Logging in the dashboard was implemented using Serilog (version 3.4.1), a feature-rich, open-source logging library for applications written in C#. During this work, logging did not play a significant role and was mostly used for debugging.
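To illustrate the SignalR-based data flow described above, the following minimal sketch defines a hub through which a simulation could push location updates to connected dashboard clients; the hub, method, and record names are hypothetical, not the project's actual code:

```csharp
using System.Threading.Tasks;
using Microsoft.AspNetCore.SignalR;

// Hypothetical payload pushed from the simulation to the dashboard.
public record AssetLocation(string AssetId, double Lat, double Lon);

// The simulation invokes UpdateLocation; the hub broadcasts the update to all
// connected dashboard clients, which redraw the corresponding map markers.
public class LocationHub : Hub
{
    public async Task UpdateLocation(AssetLocation location) =>
        await Clients.All.SendAsync("locationUpdated", location);
}

// Registration in the ASP.NET Core application:
//   builder.Services.AddSignalR();
//   app.MapHub<LocationHub>("/locationHub");
```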
In the dashboard, points for further development include improvements and optimization of the data transfer from the simulation to the dashboard. More data can be gathered from a variety of sources (sensors, drones, external services, etc.) and presented in an appropriate way. If the amount of data grows and requires more processing before visualization, new backend services can also be introduced. Logging can also be improved and used for security monitoring; for example, the logger could monitor which data are accessed or requested by which user or IP address.
4.4. Evaluation against the Challenges of the Port Accident Case
The targeted port accident case concerns a situation where a forklift catches fire while loading a docked ship. The case highlights the need for secure information sharing to enable proper situational awareness for the stakeholders according to their role and the emergency level. When looking at these needs from a technical point of view, a set of specific technical requirements was identified and described in Section 2.2. The achieved results are discussed against these identified challenges and needs as follows:
R1—Need to have trustworthy identities for stakeholders, people, and physical assets: The provided solution applies Trinsic and OpenID Connect technology to create a trustworthy basis for identification. Trinsic enabled the use of digital wallets and verifiable credentials, and OpenID Connect was used for logging in and authorizing users. Based on the evaluation, these technologies proved to provide a good basis for enabling secure identity management.
R2—Need to control the access rights to information (e.g., locations, presence, and camera feeds) on which the situational awareness relies: In the provided solution, the access rights are always controlled by the participating entity. First, the entity registers into the dataspace to obtain membership. Then, the data to be shared are offered in the federated dataspace catalog. Finally, the data contract between the provider and consumer is concluded. These steps must be taken before a data exchange can be started, and access credentials will be checked before the actual data exchange can happen.
R3—Need to give rights to control some objects (e.g., drones and camera directions): The provided solutions for information sharing could also be applied to granting rights to control certain objects. However, such situations were not specifically validated as part of this work.
R4—Need to present the area and its status on a map or respective visualization: The Unity-based digital twin and solutions for supervisory dashboard can be applied for showing the port area and its status on a map.
R5—Need to make changes to any of the referred access rights at any time during the operation in a controlled manner: The data contract must always be concluded before any data sharing can happen. In data sharing, the provider checks data access and usage policies and determines if the consumer still meets the criteria to access the requested data. If everything is fine, the requested data are transferred to the consumer. Otherwise, no data are delivered. If there are changes in the access rights, the data are not delivered to the consumer even if requested.
R6—When any changes to the access rights are made, the changes will be immediately implemented by all the stakeholders: The changes can be made at any time as described in the R5 evaluation. However, if the data have been delivered beforehand, use of the data cannot be prevented.
4.5. A Discussion
We started by defining the challenges and requirements of a port accident case and studied the critical port system using a simulation-based approach. The developed simulation system consisted of the main stakeholders that need to be active in the port, especially during an accident: the port authority, port operator, port security, ship broker, and emergency services (rescue command and control).
The realized solutions comprised dataspace federation services handling member registrations, identities and verifiable credentials, the data catalog, and the message broker for sharing data among the stakeholders, as well as digital twin services for situational awareness. The dataspace solution developed for realizing secure data sharing between stakeholders to enable situational awareness worked quite well. However, the stakeholders and their shared data were simulated, and no real service systems were included. Verifiable credentials as the authorization method provided the flexibility needed for efficient control over user access as well as over the usage of specific actions inside the dataspace. The integration of OpenID Connect enhanced the efficiency of the authorization process and strengthened the security of the user experience.
The application of the Unity 3D virtual model of the port proved to be a viable basis for visualizing the situation for a user. The minimalistic port model was eventually successfully connected to the geolocating dashboard, the dataspace, and the sharing services. In addition, multiplayer capabilities were tested, demonstrating the capacity for multiple clients to interact within the simulation environment. Based on these evaluations, we estimate that the provided simulation system opens possibilities for building a system for training rescue personnel to act properly in port accident cases. However, tests with end users have not been carried out, and therefore the evaluation of the quality of the end-user experience remains a topic for future research.
The port accident case sets several requirements toward the applied technologies. Based on the evaluation, the Trinsic ecosystem together with OpenID Connect proved to be a good foundation for secure identity and verifiable credentials management. The integration of these technologies with dataspace solutions facilitated the management of access rights to information. This ensured reliable data sharing among entities belonging to various stakeholders. It was also observed that the Unity-based digital twin with geolocating dashboard established a good basis for situational awareness in the port area for multiple players.
The system has not yet been evaluated in a real-world scenario involving actual end users. However, it has been designed to easily integrate with the existing systems of port stakeholders. Each stakeholder is responsible for managing their own database, meaning that data availability relies on the individual stakeholders. If one stakeholder’s system experiences failure, it does not impact other stakeholders’ ability to share data. For real-time data sharing, the message broker hub ensures that data generated by stakeholders are almost immediately available to others. The hub also monitors the systems connected to it and sends notifications if a stakeholder is unable to send or receive data, informing others about the unavailability of specific data so they can adapt accordingly.
The federated data catalog within the dataspace enhances data accuracy and integrity by requiring stakeholders to provide complete details about the data they offer, such as the data format and field definitions. When data are shared, stakeholders must adhere to the agreed data format established in the data contract.
The system minimizes manual intervention by automating many processes. For instance, once a stakeholder submits the necessary information for dataspace membership, the system automatically conducts verification checks, issues verifiable credentials, and records the membership in the registry. Similarly, when a stakeholder requests a data contract with another stakeholder, the system handles the entire process after the initial request is made through the federated data catalog.
The simulation-based approach worked quite well, in that it made it possible to conduct the study despite the complexity, cost, and safety issues. The accident case, where a forklift catches fire while loading a docked ship in a port environment, could be executed in a controlled manner. The secure data sharing between the simulated port stakeholders' systems and the presentation of the situation in the 3D digital twin were made possible and demonstrated in a limited context. The limitations concerned the realization of the dataspace solution, the Unity-based digital twin, and the supervisory dashboard, of which only the basic capabilities were realized. In addition, data interactions were simplified so that only the basic concept was tested. Therefore, essential needs for future R&D were also identified. These findings relate to perception, comprehension, projection, trust, and security, and to enriching the viewpoints toward the situational awareness of people, physical assets, and the digital twins of the participating CPSs.
5. Concluding Remarks
The port environment is inherently a complex context consisting of multiple stakeholders, people, and physical CPS assets operating in the area. The complexity escalates during an accident, as the rescue command and control must operate efficiently. In this research, the simulation-based approach was selected as the basis for the study to handle the complexity efficiently and to solve the cost and safety challenges.
During the work, solutions related to dataspaces for secure data sharing and visualization of situational awareness were developed. The secure data-sharing solution relies on the application of verifiable credentials (VCs) to ensure that data consumers have the required access rights to the data/information shared by the data providers. A three-dimensional virtual digital twin model is applied for visualization of the situational awareness for people in the port.
The solutions were evaluated in a simulation-based execution of an accident scenario where a forklift catches fire while loading a docked ship in a port environment. The secure data sharing between the systems of the simulated port stakeholders as well as the representation of the situation in the 3D digital twin were successfully achieved, though in a limited manner. The limitations concerned the realizations: only fundamental capabilities were implemented, and data interactions were simplified. However, these fundamental capabilities provided enough insight for the concept to be evaluated. The simulation-based approach and the solutions demonstrated their practicality, enabling a smooth study of a disaster-like scenario. Thus, we successfully applied dataspaces for both daily routine operations and information sharing during accidents in a simulation-based environment.
During the evaluation, needs for future research related to perception, comprehension, projection, trust, and security, as well as performance and quality of experience, were detected. In particular, distributed and secure viewpoints of objects and stakeholders toward real-time situational awareness seem to require further study.