Article

A Simulation-Based Study on Securing Data Sharing for Situational Awareness in a Port Accident Case

VTT Technical Research Centre of Finland, Kaitoväylä 1, 90570 Oulu, Finland
* Author to whom correspondence should be addressed.
Systems 2024, 12(10), 389; https://doi.org/10.3390/systems12100389
Submission received: 28 May 2024 / Revised: 13 September 2024 / Accepted: 19 September 2024 / Published: 25 September 2024

Abstract

The cyber–physical systems (CPSs) of various stakeholders from the mobility, logistics, and security sectors are needed to enable smart and secure situational awareness operations in a port environment. The motivation for this research arises from the challenges caused by unexpected events, such as accidents, in such a multi-stakeholder critical environment. Due to the scale, complexity, and the cost and safety challenges involved, a simulation-based approach was selected as the basis for the study. Prototype-level experimental solutions were developed for dataspaces for secure data sharing and for the visualization of situational awareness. The secure data-sharing solution relies on the application of verifiable credentials (VCs) to ensure that data consumers have the required access rights to the data/information shared by the data producers. A 3D virtual digital twin model is applied for visualizing situational awareness for people in the port. The solutions were evaluated in a simulation-based execution of an accident scenario in which a forklift catches fire while loading a docked ship in a port environment. The simulation-based approach and the provided solutions proved to be practical and enabled the smooth study of disaster-type situations. The realized concept of dataspaces is successfully applied here for both daily routine operations and information sharing during accidents in the simulation-based environment. During the evaluation, needs for future research related to perception, comprehension, projection, trust, and security, as well as performance and quality of experience, were detected. In particular, distributed and secure viewpoints of objects and stakeholders toward real-time situational awareness seem to require further study.

1. Introduction

Modern industrial societies often have critical areas where various cyber–physical systems (CPSs) from different stakeholders must collaborate to ensure intelligent and secure situational awareness operations. Currently, CPSs typically operate within tightly coupled, company-specific silos without sharing information across stakeholder boundaries [1,2].
A port environment exemplifies this scenario, with numerous people, physical objects, and CPSs (such as those for mobility, logistics, energy, and security) operating in parallel with company-specific digital twins. Multiple stakeholders have mobile physical assets within the port area, and each asset affects the safety of the other entities and people present. Thus, situational awareness that transcends stakeholder borders is crucial. Traditionally, public safety and security systems have been kept separate from other industrial systems due to confidentiality concerns. However, in the event of an accident, sharing information between the logistics, mobility, and security CPSs becomes essential. Consequently, data sharing across sectorial boundaries can provide a clearer understanding of the situation and guide public authorities in taking appropriate actions when an accident occurs in the port area.
This research is motivated by the challenges that unexpected events, such as accidents, pose in these complex, multi-stakeholder environments. Staging an accident in a real system would be costly and unsafe for both people and resources. Therefore, a simulation-based approach was chosen [3]. We selected dataspaces as a solution to address data-sharing requirements in a multi-stakeholder port environment. A dataspace provides a secure, decentralized, interoperable, and trustworthy environment for stakeholders to share data and services under predefined rules and policies [4,5,6,7]. Previously, for example, Rødseth and Berre [8] applied the industrial dataspace (IDS) reference architecture to create a maritime dataspace for exchanging digital ship data among stakeholders. IDS was also utilized to develop a seaport dataspace addressing interoperability and data sovereignty [9,10]. While the literature provides examples of dataspaces improving information sharing in multi-stakeholder settings, there are no known examples of dataspaces used for both daily operations and real-time information sharing during accidents. This research aims to bridge this gap with a simulation-based study on securing data sharing for situational awareness in a port accident case, demonstrating through a prototype how dataspaces can facilitate data sharing in routine operations and enable real-time information sharing during accidents. First, we studied the available concepts and realizations of dataspaces, including the reference architectures of the International Data Spaces Association (IDSA) and the federated and secure data infrastructure framework (Gaia-X). Following the core principles provided by these entities, we developed prototype-level experimental dataspace solutions to enable secure data sharing. Verifiable credentials (VCs) are used to ensure that data consumers have the necessary access rights for the data shared by the data producers. Additionally, a 3D virtual digital twin model was developed to visualize situational awareness for people in the port. These solutions were evaluated in a simulation-based execution of an accident scenario, where a forklift catches fire while loading a docked ship in the port.
The rest of this paper is structured as follows: Section 2 explains the research methods, challenges, and prior art. Section 3 discusses simulation-based solutions for secure data sharing and digital twins for situational awareness. Section 4 presents the evaluation of results, and Section 5 offers concluding remarks.

2. Methods, Challenges, and Prior Art

This section clarifies the applied research methods for securing data sharing for situational awareness, followed by an analysis of the challenges in the focused port accident case, a forklift fire during the loading of a docked ship. After that, prior art related to securing data sharing for situational awareness is discussed.

2.1. Methodology of This Research

A view of the targeted research questions is presented here using the targeted critical port area as an example (Figure 1). The port area typically consists of multiple critical cyber–physical systems (CPSs), each operated by a specific service provider (SP), which in turn provides services for the port operator. The port authority needs a view of situational awareness (SA) over the complete port area. Such a situational awareness process typically consists of the perception of environmental elements and events with respect to time or space, the comprehension of their meaning, and the projection of their future status [11]. The perception is based on information exposed by physical devices such as sensors, cameras, actuators, vehicles, and work machines, referred to here as physical assets (PAs) and typically managed by a specific SP. Each SP has its own view of situational awareness, as depicted in Figure 1. In addition, several people from each SP may operate in the area, and they may have different roles depending on the situation. The main research question concerns these kinds of situations, more specifically, secure data sharing for enabling situational awareness of such a critical CPS area during unexpected accidents or attack-type events. When an event occurs, emergency service providers, such as rescue teams and police, must have real-time situational awareness of the critical port area. This is not possible without secure data-sharing capabilities between the stakeholders involved.
The applied research method is experimental and was carried out in a step-by-step manner. First, the challenges of secure data sharing for situational awareness in a port accident case were analyzed. Next, an analysis of prior-art solutions was carried out in parallel with the development. Based on this information, a simulation-based approach was selected, and prototyping of the solutions potentially needed for secure data sharing was started. The simulation was necessary due to the complexity, the number of stakeholders and physical assets, and the dynamic nature of the operation in the accident case. The prototyping focused on the critical parts of the solutions because it was not possible to implement full solutions or to include all the real entities in the study. In addition, causing and controlling an accident in a real system would be very expensive and unsafe for people and resources.

2.2. Challenges of a Port Accident Case

The targeted port accident scenario involves a forklift catching fire while loading a docked ship. In such a situation, stakeholders in the area may include the port authority, security operators, ship brokers/agents, vessel traffic services, and emergency service providers like rescue command and control. The port authority is aware of the buildings and their storage status within the port area, monitors live camera feeds from fixed or mobile surveillance cameras, and has maintenance crews operating in the area. Security operators track the locations of employees and vehicles using access control data for the port area and have guards available to assist in evacuation situations. The port operator, responsible for loading and unloading ships, uses machinery such as cranes and forklifts and employs stevedores working on the ships. The ship broker/agent represents the ship operator during the port visit and manages the cargo content and ship personnel. Vessel traffic services inform ships and the port operator/authority about the situation and movements in the port area. Additionally, there can be piloting and transportation services, including trucks and trains.
The targeted scenario is referred to as the “forklift catching fire while loading a docked ship”. Figure 2 illustrates the sequence of events, showing how and when stakeholders exchange data among themselves.
  • When a forklift catches fire while handling cargo in the ship’s cargo bay, the port operator raises the alarm (Emergency Level 1, EL1) and sends the information to a message hub.
  • The message hub validates the credentials of the port operator and, after successful validation, publishes the received message to all relevant subscribers, including the port authority, port security, and ship broker. Upon receiving the message, the security operator sends personnel to the accident site to evacuate the area and help extinguish the fire. The ship broker contacts the ship and shares important information, such as a list of crew members working near the accident site. The port authority begins monitoring the situation.
  • The port operator raises the emergency level from EL1 to EL2 and forwards this information to the message hub, which then disseminates it to the relevant subscribers. Upon receiving the latest information, all relevant stakeholders take various measures and prepare for the arrival of emergency services by preparing data (e.g., the location of their personnel and vehicles).
  • Rescue Command & Control (R C&C) arrives on the scene and assesses the situation. It receives the required information from various stakeholders at the port, providing an up-to-date view of the situational awareness in the port. Assessing the severity of the fire, the R C&C raises the emergency level from EL2 to EL3 and sends this update to the message hub. Upon receiving the information about the raised emergency level, all relevant stakeholders grant the required access to data and services to the R C&C.
  • As the fire intensifies, the R C&C evacuates the area within a certain range from the ship by raising the emergency level to EL4. The port authority updates the map with no-go zones. The port authority, along with other parties, starts sharing their people and vehicle location data with the security operator. The security operator assists in the evacuation and monitors the evacuation area via the location data of people and vehicles.
  • Firefighters bring the fire under control, and a wide evacuation is no longer necessary. R C&C lowers the emergency level to EL3. The port authority updates the map accordingly and removes location data access from the security operator. All other parties also remove location data access from the security operator.
  • Firefighters extinguish the fire. Water puddles, debris, and the smell of smoke remain in the area. R C&C lowers the emergency level to EL1. The port authority updates the map accordingly and removes camera and drone access from R C&C. All stakeholders remove location data access from R C&C.
  • After cleanup and ventilation, the port operator determines normal operation can resume and lowers the emergency level to EL0. All stakeholders return to normal operations.
The example port accident case highlights the need for secure data sharing to enable proper situational awareness for the stakeholders according to their role and the emergency level. When examining these requirements from a technical point of view, a set of specific technical preconditions can be identified. These include the need for trustable identities for stakeholders, people, and physical assets (requirement 1, R1); the need to control the access rights to information (e.g., locations, presence, and camera feeds) on which the situational awareness relies (R2); the need to grant rights to control certain objects (e.g., drones and camera directions) (R3); the need to display the area and its status on a map or a respective visualization (R4); and the need for the ability to change any of the mentioned access rights at any time during the operation in a controlled manner (R5). Finally, when any changes to the access rights are made, the changes must be immediately adopted by all stakeholders (R6).

2.3. A Discussion on Prior Art

The discussion on prior art is divided into discussions of technologies related to situational awareness, data sharing, identity management, and digital twins. After that, we overview the prior art in the seaport context and summarize the novelty of our contribution.

2.3.1. Situational Awareness

In the information and communication technology domain, attaining situational awareness is a complex process, and all stages—perception, comprehension, and projection—require different sets of tools and technologies. Moreover, careful orchestration of all stages is important to understand the situation in a specific context.

Perception

At the perception stage, raw data are usually gathered from various sensors, systems, and devices [12,13]. When it comes to collecting a multitude of data from various sources, the Internet of Things (IoT) plays a key role [14]. Data are shared between devices and systems using network technologies such as 2G/3G/4G/5G, Wi-Fi, Zigbee, Bluetooth, or radio [14,15]. The selection of network technology depends on factors such as device type, geographical location, and power consumption. In addition to network technologies, application protocols are required to format and prepare data to make them transportable. Commonly used application protocols include the Advanced Message Queuing Protocol (AMQP), the Constrained Application Protocol (CoAP), Message Queuing Telemetry Transport (MQTT), the Extensible Messaging and Presence Protocol (XMPP), and Representational State Transfer (REST) [14,15]. The next stage involves data handling and processing, which is usually managed by a gateway or a message broker. The message broker ensures that all data packets coming from several sources are handled seamlessly for further processing and persistence.
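As a minimal sketch of the perception stage, the following C# fragment publishes a single sensor reading over MQTT using the open-source MQTTnet client library (v4 API); the broker address and the topic name are illustrative assumptions, not parts of the studied system.

    // Perception-stage sketch: publishing a sensor reading with MQTTnet.
    // The broker host and the topic are hypothetical placeholders.
    using MQTTnet;
    using MQTTnet.Client;

    var client = new MqttFactory().CreateMqttClient();
    var options = new MqttClientOptionsBuilder()
        .WithTcpServer("broker.example.org", 1883)
        .Build();
    await client.ConnectAsync(options, CancellationToken.None);

    var message = new MqttApplicationMessageBuilder()
        .WithTopic("port/quay1/forklift42/temperature")
        .WithPayload("21.5")
        .Build();
    await client.PublishAsync(message, CancellationToken.None);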

Comprehension

In the comprehension stage, the data collected at the perception stage are further processed to clean up noise, handle missing data, and apply data fusion techniques to integrate data about the same aspect coming from various sources [12,16]. The processed data become information, creating a holistic picture of the situation at a specific point in time and determining the significance of the data collected at the perception stage [13]. The comprehension stage facilitates decision making and the performance of actions needed in the current situation [13,16]. Data analysis and processing require a comprehensive platform that can handle large amounts of data. Some commonly used data analytics and processing systems include Apache Spark [17], Apache Flink [18], Apache Storm [19], InfluxDB and Kapacitor [20], the Esper CEP engine [21], Azure Stream Analytics [22], and AWS IoT Analytics [23].

Projection

The projection stage includes predicting future events or situations. It involves combining current and past data/information and transforming them into actionable knowledge [13,16]. Artificial intelligence (AI) plays a significant role at this stage. Several AI methods, frameworks, and platforms are available for utilization. Generally, the type of data and the prediction scenario are the main driving factors in selecting appropriate AI tools. Some commonly known AI tools are Bayesian networks [24], hidden Markov models (HMMs) [25], neural networks, and convolutional neural networks [26]. Moreover, presenting the analyzed data in an easy-to-understand format is equally important for timely decision making. A carefully designed dashboard is an efficient data visualization tool enabling understanding of the results generated by the AI tools [27]. There are several proprietary, freemium, and open-source solutions available for developing state-of-the-art dashboards that implement advanced visualization techniques, for example, Microsoft Power BI [28], Tableau, Sisense [29], Apache Superset [30], ArcGIS [31], Google Data Studio [32], D3 (Data-Driven Documents) [33], and Apache ECharts [34].

2.3.2. Data Sharing

In the current age of technology, organizations generate and store large amounts of data. Combining data from different sources can offer opportunities for various innovations [4,5]. However, when it comes to data sharing, organizations tend to be reluctant since they would like to maintain control of their data [5].
Electronic data interchange (EDI) is one way that companies and businesses can share data among themselves using a standard format. EDI facilitates business-to-business communication and replaces paper-based documents, such as purchase orders and invoices, with electronic documents. There are two fundamental ways in which data are exchanged between two entities. The first is by using a point-to-point connection between two dedicated machines or computer systems, and the second is by using a value-added network (VAN), which is often provided by a third party. These approaches enable the secure exchange of data between organizations. While EDI provides an efficient way to exchange data between organizations, it is not as flexible as approaches based on REST (representational state transfer) [35,36,37].
REST is used to design and develop RESTful application programming interfaces (APIs), which allow businesses to exchange heterogeneous data with end-user applications or other businesses. RESTful APIs offer different endpoints that can be invoked to perform data transactions, such as sending new data and updating or deleting existing data. The JSON (JavaScript Object Notation) format has become the industry standard for exchanging data. A JSON document is a popular choice because it is not only machine-processable but also human-readable [38].
In the IoT domain, entities or organizations use different network and application protocols to transmit data among various applications. When it comes to sharing data with other organizations, this is often challenging due to incompatible data formats and standards. The heterogeneity of data makes it difficult to share data among different organizations and across various domains. In such a scenario, RESTful APIs facilitate the creation of an abstraction layer or a web service that enables interoperability by homogenizing the data formats and allowing services to exchange data efficiently and securely. Thus, RESTful APIs enable the creation of the Web of Things (WoT), where applications and services exchange data in a standardized way without regard to the underlying technologies used to generate the data [39].
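For illustration, the sketch below shows such an abstraction layer as an ASP.NET Core minimal API (.NET 6) that serves device data as uniform JSON; the route and the Observation record are our own assumptions, not part of any specific prototype.

    // Sketch: a RESTful endpoint homogenizing device data into JSON.
    // The route and the Observation record are illustrative assumptions.
    var builder = WebApplication.CreateBuilder(args);
    var app = builder.Build();

    app.MapGet("/api/observations/{deviceId}", (string deviceId) =>
        Results.Ok(new Observation(deviceId, "temperature", 21.5, DateTime.UtcNow)));

    app.Run();

    // ASP.NET Core serializes this record to JSON automatically.
    record Observation(string DeviceId, string Quantity, double Value, DateTime Timestamp);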
One web service usually handles a specific business domain, and when there is a need to access multiple web services to accomplish a task, it often becomes difficult to keep track of and document which web service provides what data. In this case, a microservice architecture becomes vital, in which monolithic services and applications are divided into smaller ones [40]. A centralized gateway service handles all incoming requests and redirects them to a specific web service that can provide the requested data or process the information. Thus, client applications only communicate with the gateway service. Although a microservice architecture handles many complexities, maintaining the security of complex, distributed, and interconnected services becomes challenging, and additional security arrangements are required to keep all microservices secure [41,42].
The International Data Spaces Association (IDSA) has developed a reference architecture to facilitate the implementation of dataspaces for sovereign data exchange. Moreover, Gaia-X is another initiative that defines a detailed reference framework for dataspaces. In comparison to IDSA, Gaia-X took a broader approach and emphasizes not only sovereign data exchange but also provides guidelines for data storage on cloud platforms [4]. In these approaches, a dataspace facilitates data sharing among data producers and consumers in a sovereign manner. It provides a secure, decentralized, interoperable, and trustworthy environment for participants to share data and services under predefined rules and policies [4,5,6,7]. A typical dataspace consists of the following core components (Figure 3):
  • Dataspace governance rules and policies [6]: every participant in a dataspace should adhere to those rules and policies.
  • Common schema to describe participants and their offerings (self-descriptions) [6]: this ensures interoperability.
  • Use of verifiable credentials to identify, verify, and authorize participants and their offerings [6,7].
  • Federation services: to provide authentication and authorization solutions, software components for data exchange, logging service, and federated catalog of data offerings available from dataspace participants [7].
  • Trust anchors [6,7]: As a part of dataspace governance, members jointly agree on a set of external authorities that are considered trustworthy. Thus, only those participants, data, or services which are recognized by one of the defined trust anchors are allowed to become part of a dataspace.
The provider and consumer share data in a peer-to-peer manner under the governance of the dataspace ecosystem. Figure 4 depicts a typical data-sharing scenario. The provider can define a data usage policy (which defines obligations and prohibitions), and the consumer can define a data search policy (e.g., search only specific types of data provided by specific providers). The connectors at both ends are responsible for enabling actual data sharing and ensuring automated enforcement of policies. The logging functionality, which is usually provided by the federation service, records what is being shared, with whom, and under which conditions. The logging records can be used for conflict resolution between the data producer and consumer.
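To make the policy enforcement idea concrete, the sketch below shows one heavily simplified way a connector could represent and check a usage policy before releasing data; real dataspace connectors use far richer policy languages (e.g., ODRL), so the fields here are illustrative assumptions only.

    // Simplified sketch of connector-side usage policy enforcement.
    using System;

    public record UsagePolicy(
        string ConsumerId,        // who may access the data
        string DatasetId,         // which offering the policy covers
        DateTime ValidUntil,      // obligation: access only until this time
        int MaxAccessesPerDay);   // obligation: rate limit per day

    public static class PolicyGuard
    {
        // A provider-side connector evaluates the policy before each transfer.
        public static bool MayAccess(UsagePolicy p, string consumerId,
            string datasetId, int accessesToday) =>
            p.ConsumerId == consumerId
            && p.DatasetId == datasetId
            && DateTime.UtcNow <= p.ValidUntil
            && accessesToday < p.MaxAccessesPerDay;
    }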

2.3.3. Identity Management Technologies

A flexible and secure verifiable credentials service is a vital component for ensuring sovereign data exchange and protecting the identities of stakeholders, services, and service providers. Therefore, a comparison was conducted to find out which identity infrastructure suited our scenario best. A review and comparison of different identity infrastructures are shown in Table 1 and Table 2. The comparison focused on the technical viewpoint, comparing, for example, programming language support and a wide range of features. After the comparison, we found the following two criteria to be the most important ones for our case:
  • Functionality to issue and verify credentials not only manually by using a web application but also in an automated manner by using web APIs.
  • Functionality to store issued credentials in a wallet. The wallet can be accessed not only by applications but also through web APIs. Therefore, application services can store and present credentials without any human intervention, enabling independent service-to-service communication.
At first, we explored the offerings of the Microsoft Entra Verified ID Service, available in the Microsoft Azure ecosystem as a cloud service. The service meets the general criteria for issuing, verifying, presenting, and storing verifiable credentials. However, it is focused on meeting the needs of end users and does not have web APIs for tasks related to verifiable credentials. Since the service cannot be used in an automated manner and always requires manual human interaction, the Microsoft Entra Verified ID was not selected for use in our prototype [43].
The second option we explored was an open-source solution called walt.id. Although this service meets our two main criteria, it requires setting up a server with specific installations, configurations, and databases necessary for the walt.id service. We decided not to proceed with this solution due to the complexities involved in setting up a dedicated server environment for the walt.id, which was not possible within the limited timeframe we had to develop a full prototype [44,45].
Finally, we evaluated the Trinsic platform, which meets our criteria in every aspect and is also well matched with our development environment. The Trinsic ecosystem provides software development kits (SDKs) for various development environments, from which the Trinsic .NET SDK was selected. Although Trinsic is a paid cloud service, it offers a free plan that allows creating an ecosystem with ten wallets. Since our prototype does not require hundreds of wallets, we selected the Trinsic free plan to develop the verifiable credentials service and integrated it into our prototype [46].
In their conference paper “A Study of Lightweight DDDAS Architecture for Real-Time Public Safety Applications Through Hybrid Simulation”, Blasch et al. (2019) discussed implementing a blockchain-enabled security service to maintain secure identity management and secured data access and sharing [47]. This service uses a virtual ID (VID) for each entity on the blockchain, with smart contracts facilitating identity authentication and access control policies [47]. Their approach emphasizes a permissioned blockchain network to ensure higher efficiency and security by limiting participants and clearly defining security policies.
Another solution to identity management in a similar context to our case was introduced by Taruly Saragih et al. (2019) in their conference paper “The Use of Blockchain for Digital Identity Management in Healthcare” [48]. Their approach to securing identity management and access control in such a critical environment also uses blockchain solutions as a foundation but harnesses the power of self-sovereign identity (SSI) [48] to make the identity management process more efficient.
Even though Trinsic as a solution is also blockchain-based, our implementation of secure identity management and secured data access and sharing for a hybrid simulation is different. Trinsic as an ecosystem enables the usage of verifiable credentials (VCs), wallets, and decentralized identifiers (DIDs) [46] for a more future-proof, universal, and standardized approach to secure identity management and access control. Unlike the method described by Blasch et al., which relies on separate microservices for security tasks [47], Trinsic integrates identity management and data sharing more seamlessly. Although Trinsic also follows SSI principles, it manages all the needed identity management tools in a single ecosystem, thus making the process more efficient, cost-effective, and manageable. This differentiates our approach from previous identity management solutions in hybrid simulation scenarios and similar critical environments.

2.3.4. Digital Twin Technologies

Modern digital twin (DT) solutions have become important tools in various industries, offering a sophisticated approach to understanding, monitoring, and optimizing physical entities and processes. These solutions leverage real-time data from sensors and IoT devices to create virtual replicas that closely mirror their real-world counterparts. One significant benefit lies in their ability to simulate complex scenarios within the virtual space, allowing organizations to test and refine processes without real-world risks. This capability is particularly impactful in manufacturing, healthcare, smart cities, aerospace, and other sectors.
One of the advantages of modern digital twin solutions is their contribution to informed decision making. By providing a detailed representation of physical systems, organizations can gain insights into performance, efficiency, and potential challenges. This informed decision making extends across various stages, from the initial design phase to ongoing operations and maintenance. Additionally, digital twins enable a proactive approach to problem solving by predicting issues before they occur, reducing downtime, improving reliability, and optimizing resource utilization.
Digital twins are also useful for improving operational efficiency. Organizations can use virtual replicas to identify bottlenecks, streamline processes, and enhance overall productivity. In manufacturing, for example, digital twins support predictive maintenance, ensuring that equipment is serviced precisely when needed, minimizing disruptions, and extending the lifespan of machinery. This efficiency extends to other sectors as well, contributing to cost savings and resource optimization.
Another advantage of modern digital twin solutions is their role in fostering innovation. By providing a platform for continuous testing and refinement, organizations can experiment with new ideas and technologies in a controlled environment. This accelerates the innovation cycle and encourages a culture of continuous improvement. In the context of smart cities, for instance, digital twins allow urban planners to explore and implement innovative solutions for infrastructure, transportation, and energy management.
Overall, modern digital twin solutions empower organizations to adapt to an increasingly complex and dynamic world. From improving decision making and operational efficiency to fostering innovation, the benefits are diverse and far-reaching. As technology continues to advance, the potential applications of digital twins are likely to expand, contributing to increased sustainability, resilience, and competitiveness across various industries.
When discussing realizations of digital twins, 3D-type models, for example, have been designed to help visualize the real world [49]. In this research, we utilized the Unity platform because it offers a robust platform for efficient 3D simulation development. However, users typically also benefit from traditional two-dimensional supervisory views, e.g., of the locations of objects, movements, and data. For example, a dashboard to support urban mobility management and decision making has been developed [50]. The dashboard presents diverse spatial data by combining data collected by IoT sensors with external data services. The dashboard frontend was implemented with the TypeScript-based Angular framework, and the backend with the Java Spring framework, a PostgreSQL database with the PostGIS extension, and GeoServer for geospatial data processing. In our work, the dashboard is intended to provide situational awareness of the port area by presenting the location data of entities in the area; thus, there is no need for such complex spatial data processing. Another example is described in [51], where a dashboard was developed for supporting communication between emergency first-response personnel in real time across multiple incident scenes. Its frontend was implemented using the Bootstrap framework and HTML5, and the backend with JavaScript, a MySQL database, AJAX, PHP, and jQuery. We applied the same type of approach and have a real-time updated map with markers and information linked to them. A further example is Navigator [52], a web-based application for monitoring and visualizing cooperative missions of unmanned vehicles. The application combines and presents environmental, geographical, and navigational data needed for autonomous fleet monitoring for research purposes.

2.3.5. Prior Art in Seaports Context

Seaports are rapidly evolving into smart ports [9], with cyber–physical systems (CPSs) playing a crucial role in automating seaports by linking physical devices with digital systems [10]. This transformation introduces new challenges, including the coordination of information sharing within the complex, multi-stakeholder port environment, especially during accidents [53,54]. Various barriers impede information sharing, such as organizational silos, privacy issues, security concerns, trust, system complexity, cost, data quality, and data sovereignty [8,9,10,53,54,55].
The concept of dataspace is emerging as a solution to these challenges in the multi-stakeholder environment. Rødseth and Berre [8] applied the principles of the industrial dataspace (IDS) reference architecture to create a maritime dataspace for exchanging digital ship data among stakeholders. Similarly, Sarabia-Jácome et al. [9,10] utilized IDS to develop a seaport dataspace addressing interoperability and data sovereignty issues. Olesen et al. [55] suggested that a specialized shared ICT system could help mitigate collaboration issues within the port environment. While the literature provides examples of dataspaces improving information sharing in multi-stakeholder settings, there are no known examples (to the authors’ knowledge) of dataspaces used for both daily operations and information sharing during accidents. This research aims to bridge this gap by developing a prototype demonstrating how dataspaces can facilitate data sharing in routine operations and enable online information sharing during accidents.

3. Simulation-Based Solutions

3.1. Securing Data Sharing for Situational Awareness

A view of securing data sharing for situational awareness is depicted in Figure 5. Each SP has their own view of situational awareness in which perception is based on their self-managed physical assets and secure data sharing between them. The management of physical assets is a specific critical capability, and secure data sharing within a single SP system is assumed here to use state-of-the-art security technologies. The visualization of the situational awareness relying on the physical assets and data of the SP A system is based here on the 3D digital twin technologies.
An essential need detected in the port accident case is the requirement for secure data sharing between the stakeholders, as depicted in the middle of Figure 5. The selected approach for this is based on the dataspace concept applied in the Gaia-X/IDS forums. In addition, identity management is based on the use of verifiable credentials and wallet applications. After analyzing various identity infrastructures, we selected the Trinsic ecosystem as the basis of our identity management solution.

3.2. Dataspaces for Secure Data Sharing

The selected approach for the dataspace concept is based on the IDSA and Gaia-X technologies, which are applied here to fulfill the data-sharing needs highlighted by the port accident case. The core architecture of the applied dataspace concept is depicted in Figure 6. The main stakeholders of the port accident case are visualized as rounded rectangles around the dataspace federation services box in the middle. Each stakeholder has a separate service system, which consists of data assets, a web REST API (representational state transfer application programming interface), a verifiable credentials service, and a web application.
The data assets refer to the available data, images, videos, and files that can be shared with other stakeholders. The web REST API acts as the connector for exchanging data with the other stakeholders. The verifiable credentials service offers means to issue, receive, and store credentials. The web application enables managing the local data catalog of the stakeholder. The dataspace federation service is the core component enabling the operation of the dataspace. It consists of a dataspace membership registry, a verifiable credentials service, a compliance service, a federated data catalog service, and a message broker service.
The dataspace membership registry and the verifiable credentials service take care of memberships in a reliable manner. The compliance service checks and validates the compliance of members and their offerings against dataspace rules and policies. The federated data catalog service manages the federation of data allowed for the members of the dataspace. The message broker is used for sharing data between the stakeholders.

3.2.1. Dataspace Membership

Any entity (company, organization, or institution) that wishes to become a member of the dataspace sends a membership request to the registry service. The membership request is formatted according to the dataspace schema and rules, which contain information about the entity. The registry service forwards the membership request to the compliance service, which performs various validation checks, such as requesting schema validation, checking stakeholder eligibility, and verifying if one of the dataspace trust anchor authorities recognizes the stakeholder (this functionality is not implemented in our prototype). Once compliance is checked, the registry service forwards the entity information to the verifiable credentials service, which generates the dataspace membership credentials for the entity. The newly generated credentials are then forwarded to the entity by the registry service. The entity stores the credentials in their own wallet for future communication. One entity can be a member of multiple dataspaces. Therefore, after successful registration, the entity also generates verifiable credentials for the dataspace, which are used by the dataspace services to communicate with applications and services offered by the stakeholder (Figure 7).
The verifiable credentials service was implemented using Trinsic's ecosystem. The Trinsic .NET SDK (Software Development Kit, version 1.11.8) is utilized to integrate the Trinsic API into the participants' technical ecosystems. The SDK provides the necessary functions to implement the issuance and verification service for the verifiable credentials. Moreover, it allows storing credentials in a web wallet provided by Trinsic. In addition to the Trinsic service, we also used JSON Web Tokens (JWTs). The Microsoft ASP.NET web API has built-in support for JWTs, making it easier to secure the API. Once the verifiable credentials are verified using the Trinsic service, the web REST API generates a JWT by embedding all the roles, scopes, and policies into it. This approach allows us to automate the enforcement of, for example, data access and usage policies.
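A condensed sketch of this token issuance step is shown below; the credential verification itself is assumed to have already succeeded via Trinsic, and the claim names, issuer, audience, and signing secret are illustrative assumptions rather than values from the prototype.

    // Sketch: issuing a JWT after successful credential verification,
    // using System.IdentityModel.Tokens.Jwt. Values are illustrative.
    using System.IdentityModel.Tokens.Jwt;
    using System.Security.Claims;
    using System.Text;
    using Microsoft.IdentityModel.Tokens;

    string IssueAccessToken(string participantId, string role, string scope,
        string signingSecret)
    {
        var claims = new[]
        {
            new Claim(JwtRegisteredClaimNames.Sub, participantId),
            new Claim(ClaimTypes.Role, role),   // e.g., "DataConsumer"
            new Claim("scope", scope),          // e.g., "cargo:read"
        };
        var key = new SymmetricSecurityKey(Encoding.UTF8.GetBytes(signingSecret));
        var token = new JwtSecurityToken(
            issuer: "https://dataspace.example/federation",  // hypothetical
            audience: "dataspace-participants",
            claims: claims,
            expires: DateTime.UtcNow.AddHours(1),
            signingCredentials: new SigningCredentials(key,
                SecurityAlgorithms.HmacSha256));
        return new JwtSecurityTokenHandler().WriteToken(token);
    }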

3.2.2. Data Offering

A dataspace participant can have the role of data provider, consumer, or both. To offer data to other participants within a dataspace, the data provider creates a self-description of the data to be shared. The self-description defines what is being shared, in what format, how it can be accessed, and under what restrictions and obligations. Afterwards, the data provider implements a data access API endpoint and configures data access and usage policies. When data are ready to be shared with other participants in the dataspace, the data provider publishes the data offering in their own local catalog. The federated dataspace catalog service crawls the local catalog of every dataspace participant and generates a consolidated view of the catalog, which contains offerings from all the dataspace participants. To read the local catalog of the participant, the federated catalog service uses the credentials issued by the participant to the dataspace during the registration (Figure 8).
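A self-description could, for instance, be captured as a small JSON document; the sketch below shows one possible shape, where all field names are our own simplification rather than a normative IDSA/Gaia-X schema.

    // Sketch of a data offering self-description serialized to JSON.
    // Field names are illustrative, not a normative dataspace schema.
    using System.Text.Json;

    public record SelfDescription(
        string OfferingId,      // unique id within the provider's catalog
        string Title,           // what is being shared
        string MediaType,       // in what format
        string AccessEndpoint,  // how it can be accessed
        string UsagePolicy);    // restrictions and obligations

    public static class CatalogDemo
    {
        public static string Describe() => JsonSerializer.Serialize(
            new SelfDescription(
                "offering-001",
                "Forklift location stream",
                "application/json",
                "https://portoperator.example/api/locations",  // hypothetical
                "access only at emergency level EL2 or above"));
    }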

3.2.3. Data Contract between the Provider and the Consumer

Participants can browse through the federated dataspace catalog to find some data or service that could be beneficial for them. When a suitable data offering is found, the interested data consumer confirms the data provider's identity by sending a request to the dataspace registry service. Upon successful identification and validation, the data consumer sends a data contract request to the data provider. Upon receiving a new data contract request, the data provider validates the data consumer's identity by sending a request to the dataspace registry service. After validating the data consumer's identity, the data provider generates verifiable credentials for data access and sends them to the requesting data consumer. The data consumer stores the data access credentials in their own wallet. The data access credentials allow the consumer to access only the data for which the request was received. Therefore, if the same consumer wishes to access other data provided by the same data provider, they must make a new data contract, and the data provider will issue new data access credentials specific to the new contract (Figure 9).
The dataspace registry service allows the registration of new dataspace participants. The service comprises a Microsoft ASP.NET web REST API and a membership database (implemented using PostgreSQL). Any interested stakeholder can send a request to the dataspace registry service through its REST API endpoint. Upon successful validation of the stakeholder, the registry service uses the verifiable credentials service to issue dataspace membership credentials to the stakeholder and stores the membership record in its own database. A centralized dataspace registry was selected because it is easy to implement and manage. Alternatively, a distributed registry database could be implemented using blockchain technology, where each dataspace participant is responsible for maintaining the registration information.

3.2.4. Data Exchange

Data exchange between the provider and consumer happens peer to peer, according to the data contract policies. The consumer presents verifiable credentials (issued by the provider) to the provider. The provider verifies the credentials and allows data access to the consumer. After successful authentication and authorization, the consumer sends a data access request. Upon receiving the request, the provider checks the data access and usage policies to determine if the consumer still meets the criteria to access the requested data. If everything is fine, the requested data are transferred to the consumer, and the data access transaction is logged by the dataspace logging service. When the consumer receives the requested data, it also logs the transaction with the dataspace logging service (Figure 10).
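The provider-side sequence can be summarized with the following sketch; the four service interfaces are hypothetical stand-ins for the prototype's actual components rather than its real API.

    // Sketch of the provider-side data exchange sequence.
    using Microsoft.AspNetCore.Http;

    public interface ICredentialService { Task<string?> VerifyAsync(string presentation); }
    public interface IPolicyStore { Task<bool> IsPermittedAsync(string consumerId, string datasetId); }
    public interface IDataStore { Task<byte[]> ReadAsync(string datasetId); }
    public interface ILoggingService { Task RecordAsync(string consumerId, string datasetId, DateTime at); }

    public static class DataExchange
    {
        public static async Task<IResult> HandleRequest(
            string credentialPresentation, string datasetId,
            ICredentialService credentials, IPolicyStore policies,
            IDataStore data, ILoggingService log)
        {
            // 1. Verify the credentials presented by the consumer.
            string? consumerId = await credentials.VerifyAsync(credentialPresentation);
            if (consumerId is null) return Results.Unauthorized();

            // 2. Re-check the data access and usage policies at request time.
            if (!await policies.IsPermittedAsync(consumerId, datasetId))
                return Results.Forbid();

            // 3. Transfer the data and log the transaction on the provider
            //    side (the consumer logs its own record upon receipt).
            byte[] payload = await data.ReadAsync(datasetId);
            await log.RecordAsync(consumerId, datasetId, DateTime.UtcNow);
            return Results.Ok(payload);
        }
    }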

3.2.5. Message Broker Hub

A message broker hub is developed at the dataspace level. The hub is used by the participants to exchange real-time data with the dataspace dashboard and with other dataspace participants. Microsoft SignalR is used to implement the message broker hub. It was selected for two main reasons: first, it aligns well with the other modules and applications of the dataspace, which are also developed with Microsoft technologies; and second, it allows implementing authentication and authorization at a very granular level. Thus, it provides very secure real-time data exchange within the dataspace. RabbitMQ was also evaluated during the initial phase for implementing the message broker hub. However, it turned out to be difficult to deploy and provided fewer opportunities to customize the authentication and authorization functionality according to our needs.
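A minimal hub along these lines is sketched below; the hub, method, and topic names are assumptions made for illustration, and authentication is assumed to be configured elsewhere in the application.

    // Sketch of a SignalR message broker hub (ASP.NET Core).
    using Microsoft.AspNetCore.Authorization;
    using Microsoft.AspNetCore.SignalR;

    [Authorize]  // only authenticated dataspace participants may connect
    public class MessageBrokerHub : Hub
    {
        // A participant joins the topics it is entitled to follow.
        public Task Subscribe(string topic) =>
            Groups.AddToGroupAsync(Context.ConnectionId, topic);

        // Publish a message to all subscribers of a topic, e.g., an
        // emergency level change raised by the port operator.
        public Task Publish(string topic, string message) =>
            Clients.Group(topic).SendAsync("Receive", topic, message);
    }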
A message broker client is implemented for each dataspace participant independently. The client enables communication between the participant and the message broker hub of the dataspace. For streamlined communication, the message broker client was also implemented by using the client libraries of Microsoft SignalR.
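On the participant side, such a client could be sketched as follows; the hub URL and the topic names match the hypothetical hub above and are not those of the prototype.

    // Sketch of a participant-side message broker client
    // (Microsoft.AspNetCore.SignalR.Client). The URL is a placeholder.
    using Microsoft.AspNetCore.SignalR.Client;

    var connection = new HubConnectionBuilder()
        .WithUrl("https://dataspace.example/hub")
        .WithAutomaticReconnect()
        .Build();

    // React to messages forwarded by the hub.
    connection.On<string, string>("Receive", (topic, message) =>
        Console.WriteLine($"{topic}: {message}"));

    await connection.StartAsync();
    await connection.InvokeAsync("Subscribe", "emergency-level");
    await connection.InvokeAsync("Publish", "emergency-level", "EL2");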

3.2.6. Federated Catalog

The federated dataspace catalog service comprises a Microsoft ASP.NET (6.0) Web REST API and a web application that is developed by using HTML5, CSS, and JavaScript. The federated catalog also has its own PostgreSQL database, which stores a snapshot of the data catalog entries of the dataspace participants. The dataspace catalog service synchronizes the central database with all the dataspace participants once a day. However, to keep the federated catalog up to date, the dataspace participants send a message to the dataspace message broker hub whenever they update their local data catalog. The message broker hub forwards the message to the federated catalog service, which then reads the local catalog of the participant and resynchronizes its database instantly.
For every dataspace participant, the following components were developed:
  • A web REST API to exchange data within the dataspace;
  • Database to store participant-specific data;
  • Verifiable credentials service;
  • A web application for managing the local data catalog of the participant;
  • Message broker client to interact with the dataspace message broker hub.

Web REST API

The Microsoft ASP.NET web API is selected to implement an independent web REST API application for each dataspace participant. It enables the development of robust and secure web APIs, which can be implemented quickly due to numerous built-in functionalities and features. The web API service lies at the core of the technical ecosystem of any dataspace participant, serving as a bridge between the data layer and end-user applications. Moreover, it facilitates secure data exchange with other dataspace participants and the core services of the dataspace.

Database

Each dataspace participant has their own database to store their data in a secure manner. The open-source PostgreSQL is selected to implement the data layer for all participants. It facilitates the development of an advanced relational database and ensures high data consistency and integrity.

Data Catalog Application

A separate web application was developed for each dataspace participant to manage their data catalog. The application enables users to view the catalog entries (Figure 11) and add new ones through the provided form (Figure 12). The application is developed using simple technologies, i.e., HTML, CSS, and JavaScript. These technologies were selected because they do not require any special configuration; the application can be deployed on any web server.

3.3. Digital Twin for Visualization

The visualization of the situational awareness in the port accident case is achieved using a 3D virtual model and an external supervisory dashboard for assistance. These models aim to capture the dynamic state of the physical assets, environment, and data as a functional virtual representation of the port environment referred to here as the digital twin for situational awareness.
The operation of the port accident case is simulated so that the digital twin serves as the main means of depicting the movement of physical objects and related data in real time as accurately as possible during the execution of the emergency case. The assets and dataspaces of each stakeholder could subsequently be integrated with the digital twin, allowing real-time updates to be seamlessly incorporated to provide a comprehensive visualization of the situational awareness.

3.3.1. Three-Dimensional Model of the Port Environment

The 3D model of the port environment was developed with Unity, which offers a robust platform for efficient 3D simulation development [49]. For this project, a 3D replica of the Oulu Port area was created for simulation use.
The basis for the virtual environment was created by utilizing a photogrammetry mesh acquired from Google Maps with the use of RenderDoc (version 1.25), an open-source graphics debugging tool (Figure 13). The software was used to capture a single frame of the 3D-rendered map view from a web browser, resulting in an RDC file containing the mesh and texture data of the desired model. The captured RenderDoc file was subsequently imported into Blender, an open-source tool for rendering, 3D modeling, and animation, with the use of the third-party add-on MapsModelsImporter. Once imported into Blender, the mesh was subjected to a number of improvements, such as reducing the number of nodes, edges, and faces, aimed at reducing the complexity of the model for increased performance. After these improvements, the mesh was baked with the captured texture (texture baking is the process of transferring texture data from one 3D model onto another) before ultimately being exported into the Unity project as an OBJ file. The result was a crude but functional mesh capable of serving as the environment for the intended simulation purpose (Figure 14).

3.3.2. The Unity Simulation Environment

With our virtual environment imported into Unity, we are able to take advantage of the engine’s scripting capabilities and add the necessary functionalities for executing the desired scenario. The engine also provides us with access to configurable physics and lighting, although these features did not see extensive utilization in the confines of this project.
The basic entities of the simulated emergency scenario were set as “simulation objects.” These entities include vehicles, people, buildings, and storage units one can assume to be present in the port area. An object is defined as a simulation object by attaching a “Sim Object Data” C# script, which acts as a simple data container for that specific object (Figure 15). The data can be set on the simulator side in the Unity editor before the simulation is executed, or fetched from the relevant stakeholder within the dataspace when starting the simulation. For a more functional real-time simulation, the latter approach is preferable in the final iteration of the simulation. Each simulation object operates under one of the five defined port-related stakeholders, and its data are only accessible by the same stakeholder until certain conditions, both inside and outside the Unity model, are met.
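A minimal version of such a data container script could look like the sketch below; the paper does not list the actual fields of the “Sim Object Data” script, so the field names here are assumptions.

    // Sketch of a simulation object data container (Unity C#).
    // Field names are illustrative; the prototype's script may differ.
    using UnityEngine;

    public class SimObjectData : MonoBehaviour
    {
        public string objectId;            // unique id of the simulation object
        public string owningStakeholder;   // e.g., "PortOperator"
        public int minimumEmergencyLevel;  // EL at which the data become shareable
        public string payload;             // object-specific data, e.g., as JSON

        // True when the given stakeholder may read this object's data.
        public bool IsAccessibleBy(string stakeholder, int currentEmergencyLevel) =>
            stakeholder == owningStakeholder
            || currentEmergencyLevel >= minimumEmergencyLevel;
    }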
The stakeholders of the port accident case were primarily assumed to exist outside of the Unity model, communicate with the simulation environment, and control the data access of different simulation objects; in other words, they are the intended end users of the simulator. The majority of the stakeholders’ functionality was simulated inside the Unity editor in the form of a Unity game object with a corresponding script attached; for example, Figure 16 shows a simulated emergency services stakeholder inside Unity displaying the data received by the emergency services from other stakeholders at emergency level 4.
The flow of the port accident case is controlled by a manager script (Figure 17). It manages the events that take place within the game scene during runtime and calls other scripts to update various visual elements in the scene and the user interface. In addition, it controls the emergency level (EL) and communicates with the stakeholders whenever the EL changes, prompting appropriate data access to be granted. The manager script is also responsible for broadcasting the emergency level change to the dashboard and updating the Unity-side user interface. The EL is represented by an integer, which indicates the severity of the ongoing emergency and is shared by all entities within the dataspace.
Another important component of the simulation is the data manager. The data manager of the simulation model is responsible for fetching and updating the data for simulation objects from the backend (Figure 18). In addition, it handles converting object position from UGC (Unity game coordinates, XYZ) to real-life coordinates (longitude and latitude) for the dashboard.
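The conversion from Unity game coordinates to geographic coordinates can be approximated with an equirectangular projection around a fixed reference point; the sketch below uses this standard approximation, and the reference coordinates of the port origin are assumed values, not those used in the prototype.

    // Sketch: converting Unity game coordinates (meters in the XZ plane)
    // to latitude/longitude around an assumed reference origin.
    using System;

    const double OriginLat = 65.00;              // hypothetical reference latitude
    const double OriginLon = 25.43;              // hypothetical reference longitude
    const double MetersPerDegreeLat = 111_320.0;

    (double lat, double lon) ToGeo(double unityX, double unityZ)
    {
        double lat = OriginLat + unityZ / MetersPerDegreeLat;
        double lon = OriginLon + unityX /
            (MetersPerDegreeLat * Math.Cos(OriginLat * Math.PI / 180.0));
        return (lat, lon);
    }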
Perhaps the most important element of the simulation model is the player. The player entity is a controllable, dynamic simulation object that is operated by one of the stakeholders. The player character is able to move around and interact with certain elements of the digital twin model. The player object is intended to be duplicable so that multiple instances can be present in a simulation, each controlled by a human user. The player’s user interface, shown in Figure 19, aims to provide the end user with accurate and relevant information regarding the game state and the various changes that occur in both the digital environment and the dataspace during a simulation. The contents of the interface view differ from player to player depending on which stakeholder controls that particular player’s character. The functionalities of the player, or of a group of players operating under the same stakeholder, would be at least partially defined by their stakeholder and could be restricted depending on the data access granted between stakeholders with the dataspaces. The player prototype was created using the Unity Starter Assets third-person character controller package.

3.3.3. Solutions for Supervisory Dashboard

The simulation of the port accident case requires a view for the simulation supervisor, who can see the presence, locations, and data of entities by monitoring the overall system. The solutions for such a supervisory dashboard are described briefly in this section.
For this purpose, we implemented a web application; web applications are cross-platform solutions that are relatively simple to optimize for both desktop and mobile devices.
For this work, we implemented the web application with the ASP.NET web application framework using the Blazor Server hosting model (version 6). Blazor Server was selected for its fast execution speed and small payload sizes. The disadvantage of this hosting model is that it has no offline mode; however, an offline mode is not required for this application.
A sample view of the supervisor landing page is shown in Figure 20. The landing page presents various data and an overview of the events within the application. The two upper boxes display the current date and time, the emergency level, and a weather forecast fetched from the OpenWeather API. If there are pending data requests or messages, they are presented in the two lower boxes.
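Fetching the weather data reduces to a single HTTP call; a sketch using HttpClient against the public OpenWeather current-weather endpoint is shown below, with the API key and coordinates as placeholders.

    // Sketch: fetching current weather from the OpenWeather API.
    // The API key and the coordinates are placeholders.
    using System.Net.Http;

    var http = new HttpClient();
    string url = "https://api.openweathermap.org/data/2.5/weather"
               + "?lat=65.0&lon=25.4&units=metric&appid=YOUR_API_KEY";
    string json = await http.GetStringAsync(url);
    Console.WriteLine(json);  // in the dashboard, parsed and rendered instead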
The supervisory view of the Port of Oulu is created with the GeoBlazor component library developed by Dymaptic. The map view consists of the base map and map marker layers; it can be zoomed in and out and navigated in all directions, and map markers can be toggled on and off by category. OpenStreetMap (OpenStreetMap Foundation, Cambridge, UK) serves as the base map layer for this application. The simulation sends the presence and location data of the simulation objects to the supervisory dashboard application. The simulation objects are added to the respective map marker layers and drawn on the map. The locations of the markers are updated in real time when the simulator sends updated data to the supervisory dashboard (Figure 21). Clicking a map marker opens a pop-up presenting the data linked to the marker (Figure 22).

4. Evaluation Results

4.1. Simulation Set-Up for the Evaluation

A view of the simulation set-up developed for validation is depicted in Figure 23. To simulate the port accident scenario (see Section 2.2), a dataspace was developed by implementing simulated systems for the five core stakeholders: port authority, port operator, port security, ship broker, and emergency services (rescue command and control) (see Section 3.2). Moreover, dataspace federation services were implemented to provide the identity service, member registry service, data catalog, and message broker hub for sharing data among the stakeholders and digital twin services.

4.2. Evaluation of the Dataspace Solution

Multiple applications were developed to create the dataspace prototype covering the main stakeholders of the port accident case. All core components were developed in the Microsoft C# programming language on the .NET 6 framework. This combination of technologies enabled the development of a robust, fast, and secure application. Moreover, the open-source PostgreSQL database was selected to provide an advanced relational database with high data consistency and integrity. All selected technologies fulfilled their intended purposes.
The realized dataspace federation services comprise the dataspace membership registry, a verifiable credentials service, a compliance service, a federated data catalog service, and a message broker service. No logging service was implemented because of time constraints.
The dataspace solution developed for data sharing between stakeholders proved to work quite well. However, the stakeholders and the data they shared were simulated, and no real service systems were included. Moreover, essential simulations of the stakeholders’ situational awareness processes could not be realized because of time constraints, and the security analysis of information processing within the dataspace remains largely a topic for future research.
The solutions for identity management rely on verifiable credentials and wallet applications. After an analysis of various identity infrastructures, Trinsic technology was selected as the basis of the solution. It enabled the use of digital wallets and verifiable credentials within the dataspace, providing a secure and efficient environment for data sharing and a good foundation for secure identity management. Configuring the Trinsic ecosystem allowed us to authorize users to participate in the dataspace and to perform specific actions using verifiable credentials. One key advantage of Trinsic is its user-centric approach: individuals manage and control the information they share with the dataspace themselves, ensuring a personalized and privacy-focused experience. Using verifiable credentials as the authorization method provided the flexibility needed to control both user access and the usage of specific actions inside the dataspace.
As we recognized the need for an authorization mechanism spanning multiple applications within the dataspace, we integrated OpenID Connect (OIDC) as the method for both logging in and authorizing users; Trinsic provided the support needed for this integration. OIDC securely manages user logins and application authorization, ensuring a seamless and secure authorization process across all the applications. A sample dataspace membership-related verifiable credential within the OIDC login endpoint is depicted in Figure 24.
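In ASP.NET Core, such an OIDC integration is typically wired as in the sketch below (it requires the Microsoft.AspNetCore.Authentication.OpenIdConnect package); the authority URL and client ID are placeholders, since the Trinsic-specific configuration is not published in this paper.

```csharp
using Microsoft.AspNetCore.Authentication.Cookies;
using Microsoft.AspNetCore.Authentication.OpenIdConnect;

var builder = WebApplication.CreateBuilder(args);

builder.Services
    .AddAuthentication(options =>
    {
        options.DefaultScheme = CookieAuthenticationDefaults.AuthenticationScheme;
        options.DefaultChallengeScheme = OpenIdConnectDefaults.AuthenticationScheme;
    })
    .AddCookie()
    .AddOpenIdConnect(options =>
    {
        options.Authority = "https://issuer.example.trinsic.id"; // placeholder issuer
        options.ClientId = "dataspace-portal";                   // placeholder client
        options.ResponseType = "code";
        options.Scope.Add("openid");
        options.Scope.Add("profile");
    });

var app = builder.Build();
app.UseAuthentication();
app.UseAuthorization();
app.Run();
```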

4.3. Evaluation of the Digital Twin Solutions

Unity was chosen as the development platform for our simulation environment due to its versatility and its capability to provide all the essential features required for developing the scenario. In addition, the extensive set of library assets available in the Unity Asset Store proved very valuable during the prototyping process. These packages saved time and added functionality to the simulation environment with little effort; for example, they enabled controlling the player camera and importing a fully functional third-person character controller from the Unity registry. By choosing Unity, we were also able to access Unity Game Services, which were a great help during the development process, namely Unity-compatible version control and a built-in networking solution for prototyping a possible multiplayer experience.
In the end, the virtual simulation environment, despite its relatively basic features, effectively served as a viable proof of concept within the confines of our project’s time and resource limitations. While the result appeared minimalistic regarding the fidelity of the environment and user experience, it satisfactorily demonstrated the core functionalities and feasibility of the envisioned concept. Despite its barebones nature, the simulation reliably executed its intended tasks of altering data access between stakeholders according to the emergency level and accurately sharing location data with the dashboard, highlighting the fundamental mechanics and interactions envisioned for the final product. Considering the constraints faced during development, the simulation’s ability to function as intended underscored its success as a foundational prototype, validating the concept’s viability and serving as a springboard for potential future development.
Although the current iteration of the environment operates independently from the broader intended dataspace and the stakeholders within, integration of the simulator with the dedicated dashboard application was also tested during development. This was performed to establish the potential for seamless integration between the simulation environment and the wider, overarching dataspace. Despite their initial disconnect, the successful connection with the dashboard showcased the future possibilities for a cohesive ecosystem, where real-time data and simulations could converge, fostering a more comprehensive and interconnected system in line with our project objectives.
During the earlier stages of development, possible multiplayer functionalities were also successfully tested, demonstrating the capacity for multiple clients to interact within the simulation environment. This was performed using Unity’s built-in Netcode for GameObjects package in connection with the Unity Game Services infrastructure. However, for the sake of expediency and simplification in the final phases, the solution was tested with a single client only. This decision allowed for a more streamlined evaluation process, ensuring that the core functionalities and essential mechanics of the simulation remained stable and functional. While the final testing concentrated on a single-client set-up, the successful earlier tests with multiple clients provided confidence in the scalability and potential for future expansion toward a multiplayer-oriented environment.
Overall, we feel that the simulation environment opens intriguing possibilities for training rescue personnel. Its integration with an extensive dataspace could allow for both fully virtual and hybrid-style exercises, offering a spectrum of training scenarios for rescue workers and overall crisis control. The platform’s potential lies in its capacity to simulate realistic situations, providing a controlled space for skill development and decision-making.
The supervisory dashboard solution was implemented using ASP.NET technology. Creating a new project with it was fast and effortless, and it was easy to start building a new web app on top of the sample application. In this framework, web pages are created with HTML and CSS, and scripts are written in C#.
Different ways to import data from the simulation into the dashboard were prototyped. In one prototype, the dashboard application had a database into which the simulation added objects and their data via an API; the dashboard then read the data from the database and drew the items on the map. Because there were problems with reading the updated data from the database in real time, we looked for a different solution and ended up sending the data from the simulation to the dashboard with SignalR. SignalR is part of the ASP.NET framework, so no third-party libraries were required, and the implementation was straightforward. It enabled real-time data updates from the simulation to the dashboard, which is an essential feature in this kind of application; a sketch of this path is given below.
GeoBlazor (version 2.3.3) was a perfect fit for the map creation needs of this project. GeoBlazor has a free version and a paid premium version (currently in beta), but the free version had all the features needed in this application. It was easy to adopt, as the documentation and samples were excellent. GeoBlazor is still under active development and contains some faults and bugs; luckily, no application-breaking bugs were encountered during development.
Logging in the dashboard was implemented using Serilog (version 3.4.1), a feature-rich, open-source logging library for applications written in C#. During this work, logging did not play a significant role and was mostly used for debugging.
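The SignalR path from the simulation to the dashboard can be sketched as follows; the hub, method, and URL names are illustrative, as the paper only states that SignalR carried the real-time updates.

```csharp
using System.Threading.Tasks;
using Microsoft.AspNetCore.SignalR;

// Dashboard side: hub that fans each update out to all connected dashboard views.
// Registered in Program.cs with: app.MapHub<SimObjectHub>("/hubs/simobjects");
public class SimObjectHub : Hub
{
    public Task UpdateObject(string objectId, double latitude, double longitude) =>
        Clients.All.SendAsync("ObjectUpdated", objectId, latitude, longitude);
}
```

On the simulation side, a client connection (via the Microsoft.AspNetCore.SignalR.Client package) would then push each position change:

```csharp
using Microsoft.AspNetCore.SignalR.Client;

var connection = new HubConnectionBuilder()
    .WithUrl("https://dashboard.example/hubs/simobjects") // placeholder URL
    .WithAutomaticReconnect()
    .Build();

await connection.StartAsync();
await connection.SendAsync("UpdateObject", "forklift-01", 65.0064, 25.4094);
```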
Points of further development in the dashboard include improving and optimizing the data transfer from the simulation into the dashboard. More data can be gathered from a variety of sources (sensors, drones, external services, etc.) and presented in an appropriate way. If the amount of data grows and requires more processing before visualization, new backend services can be introduced. Logging can also be improved and used for security monitoring; for example, the logger could record which data are accessed or requested by which user or IP address.

4.4. Evaluation against the Challenges of the Port Accident Case

The targeted port accident case concerns a situation where a forklift catches fire while loading a docked ship. The case highlights the need for secure information sharing to enable proper situational awareness for the stakeholders according to their role and the emergency level. From a technical point of view, a set of specific technical requirements was identified and described in Section 2.2. The achieved results are discussed against these identified challenges and needs as follows:
  • R1—Need to have trustworthy identities for stakeholders, people, and physical assets: The provided solution applies Trinsic and OpenID Connect technologies to create a trustworthy basis for identification. Trinsic enables the use of digital wallets and verifiable credentials, while OpenID Connect handles login and user authorization. Based on the evaluation, these technologies provide a good basis for secure identity management.
  • R2—Need to control the access rights to information (e.g., locations, presence, and camera feeds) on which the situational awareness relies: In the provided solution, the access rights are always controlled by the participating entity. First, the entity registers into the dataspace to obtain membership. Then, the data to be shared are offered in the federated dataspace catalog. Finally, the data contract between the provider and consumer is concluded. These steps must be taken before a data exchange can be started, and access credentials will be checked before the actual data exchange can happen.
  • R3—Need to give rights to control some objects (e.g., drones and camera directions): Some provided solutions related to information sharing could also be applied for help in giving rights to control some objects. However, such situations were not specifically validated as part of this work.
  • R4—Need to present the area and its status on a map or respective visualization: The Unity-based digital twin and solutions for supervisory dashboard can be applied for showing the port area and its status on a map.
  • R5—Need to make changes to any of the referred access rights at any time during the operation in a controlled manner: The data contract must always be concluded before any data sharing can happen. During data sharing, the provider checks the data access and usage policies and determines whether the consumer still meets the criteria to access the requested data. If so, the requested data are transferred to the consumer; otherwise, no data are delivered. Thus, if the access rights change, the data are no longer delivered to the consumer even if requested (a provider-side sketch of this gate is given after this list).
  • R6—When any changes to the access rights are made, the changes will be immediately implemented by all the stakeholders: The changes can be made at any time as described in the R5 evaluation. However, if the data have been delivered beforehand, use of the data cannot be prevented.
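The access control described in R2, R5, and R6 can be summarized in the following provider-side sketch; the types and the emergency-level criterion are illustrative simplifications of the contract and policy checks, not the prototype’s actual code.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Illustrative contract: who may receive which data set, until when,
// and from which emergency level onward.
public record DataContract(string ConsumerId, string DataSetId,
                           DateTime ValidUntil, int MinEmergencyLevel);

public class DataExchangeGate
{
    private readonly List<DataContract> _contracts = new();

    public void Conclude(DataContract contract) => _contracts.Add(contract);

    public void Revoke(string consumerId, string dataSetId) =>
        _contracts.RemoveAll(c => c.ConsumerId == consumerId && c.DataSetId == dataSetId);

    // Re-evaluated on every request (R5/R6): a revoked or expired contract
    // stops all future deliveries, although data delivered earlier cannot
    // be recalled.
    public bool MayDeliver(string consumerId, string dataSetId, int currentEmergencyLevel)
    {
        var contract = _contracts.FirstOrDefault(c =>
            c.ConsumerId == consumerId && c.DataSetId == dataSetId);

        return contract is not null
            && DateTime.UtcNow <= contract.ValidUntil
            && currentEmergencyLevel >= contract.MinEmergencyLevel;
    }
}
```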

4.5. Discussion

We started by defining the challenges and requirements of a port accident case and studied the critical port system by using a simulation-based approach. The developed simulation system consisted of the main stakeholders that need to be active in the port, especially during an accident: port authority, port operator, port security, ship broker, and emergency services (rescue command and control).
The realized solutions comprised dataspace federation services handling member registrations, identities and verifiable credentials, the data catalog, and the message broker for sharing data among the stakeholders, as well as digital twin services for situational awareness. The dataspace solution developed to realize secure data sharing between stakeholders for situational awareness worked quite well. However, the stakeholders and their shared data were simulated, and no real service systems were involved. Verifiable credentials as the authorization method provided the flexibility needed to control both user access and the usage of specific actions inside the dataspace. The integration of OpenID Connect improved the efficiency of the authorization process and contributed substantially to a secure user experience.
The application of the Unity 3D virtual model of the port proved to be a viable base for visualizing the situation for a user. The minimalistic port model was eventually successfully connected to the geolocating dashboard, the dataspace, and the sharing services. In addition, the possibility for multiplayer capabilities was also tested, demonstrating the capacity for multiple clients to interact within the simulation environment. Based on these evaluations, it is estimated that the provided simulation system opens possibilities for building a system for training rescue personnel to be able to act properly in port accident cases. However, tests with end users have not been carried out, and therefore the evaluation of the quality of the end user experience is still a topic of future research.
The port accident case places several requirements on the applied technologies. Based on the evaluation, the Trinsic ecosystem together with OpenID Connect proved to be a good foundation for secure identity and verifiable credentials management. The integration of these technologies with dataspace solutions facilitated the management of access rights to information. This ensured reliable data sharing among entities belonging to various stakeholders. It was also observed that the Unity-based digital twin with the geolocating dashboard established a good basis for situational awareness in the port area for multiple players.
The system has not yet been evaluated in a real-world scenario involving actual end users. However, it has been designed to easily integrate with the existing systems of port stakeholders. Each stakeholder is responsible for managing their own database, meaning that data availability relies on the individual stakeholders. If one stakeholder’s system experiences failure, it does not impact other stakeholders’ ability to share data. For real-time data sharing, the message broker hub ensures that data generated by stakeholders are almost immediately available to others. The hub also monitors the systems connected to it and sends notifications if a stakeholder is unable to send or receive data, informing others about the unavailability of specific data so they can adapt accordingly.
The federated data catalog within the dataspace enhances data accuracy and integrity by requiring stakeholders to provide complete details about the data they offer, such as the data format and field definitions. When data are shared, stakeholders must adhere to the agreed data format established in the data contract.
The system minimizes manual intervention by automating many processes. For instance, once a stakeholder submits the necessary information for dataspace membership, the system automatically conducts verification checks, issues verifiable credentials, and records the membership in the registry. Similarly, when a stakeholder requests a data contract with another stakeholder, the system handles the entire process after the initial request is made through the federated data catalog.
The simulation-based approach worked quite well: it made it possible to conduct the study despite the complexity, cost, and safety issues. The accident case, where a forklift catches fire while loading a docked ship in a port environment, could be executed in a controlled manner. Secure data sharing between the simulated port stakeholders’ systems and showing the situation in the 3D digital twin were demonstrated, although in a limited context. The limitations concern the realization of the dataspace solution, the Unity-based digital twin, and the supervisory dashboard, of which only the basic capabilities were realized. In addition, the data interactions were simplified so that only the basic concept was tested. Therefore, essential needs for future R&D were also identified. These findings relate to perception, comprehension, projection, trust, and security, and to enriching the viewpoints toward the situational awareness of people, physical assets, and digital twins of the participating CPS systems.

5. Concluding Remarks

The port environment is inherently a complex context consisting of multiple stakeholders, people, and physical CPS assets operating in the area. The complexity escalates during an accident, as the rescue command and control must operate efficiently. In this research, the simulation-based approach was selected as the basis for the study to handle the complexity efficiently and to solve the cost and safety challenges.
During the work, solutions related to dataspaces for secure data sharing and visualization of situational awareness were developed. The secure data-sharing solution relies on the application of verifiable credentials (VCs) to ensure that data consumers have the required access rights to the data/information shared by the data providers. A three-dimensional virtual digital twin model is applied for visualization of the situational awareness for people in the port.
The solutions were evaluated in a simulation-based execution of an accident scenario where a forklift catches fire while loading a docked ship in a port environment. The secure data sharing between the systems of the simulated port stakeholders as well as the representation of the situation in the 3D digital twin were successfully achieved, though in a limited manner: only fundamental capabilities were realized, and data interactions were simplified. However, these fundamental capabilities provided enough insight for the concept to be evaluated. The simulation-based approach and the solutions demonstrated their practicality, enabling a smooth study of a disaster-like scenario. Thus, we successfully applied the dataspaces for both daily routine operations and information sharing during accidents in a simulation-based environment.
During the evaluation, needs for future research related to perception, comprehension, projection, trust, and security as well as performance and quality of experience were detected. Especially, distributed and secure viewpoints of objects and stakeholders toward real-time situational awareness seem to require further studies.

Author Contributions

J.L. acted as the main editor and contributed to all chapters of this article in one way or another. A.U. contributed to Section 1, Section 2.3, Section 3.2 and Section 4; T.N. contributed to Section 2.3, Section 3.3 and Section 4.3; J.T. contributed to Section 3.3; and A.T. contributed to Section 2.3 and Section 4. All authors have read and agreed to the published version of the manuscript.

Funding

This article is based on the research actions executed within the VTT GG SitAw project.

Data Availability Statement

Data are contained within the article.

Acknowledgments

Acknowledgments go to Jussi Ronkainen and Jari Rehu, who helped in the definition of the use case.

Conflicts of Interest

The authors declare no conflicts of interest.

Notes

1. https://internationaldataspaces.org/ (accessed on 18 September 2024).
2. https://gaia-x.eu/ (accessed on 18 September 2024).

References

  1. Latvakoski, J.; Heikkinen, J. A Trustworthy communication hub for Cyber-physical systems. Future Internet 2019, 11, 211. [Google Scholar] [CrossRef]
  2. Latvakoski, J.; Roelands, M.; Tilvis, M.; Genga, L.; Santos, G.; Marreiros, G.; Vale, Z.; Hoste, L.; van Reamdonck, W.; Zannone, N. Horizontal Solutions for Cyber-Physical Systems Evaluated in Energy Flexibility and Traffic Accident Cases. M2Mgrids Project Publication. 34p. Available online: https://itea3.org/project/result/download/7344/Horizontal%20Solutions%20for%20Cyber-Physical%20Systems%20evaluated%20in%20Energy%20Flexibility%20and%20Traffic%20Accident%20cases.pdf (accessed on 18 September 2024).
  3. Latvakoski, J.; Mäki, K.; Ronkainen, J.; Julku, J.; Koivusaari, J. Simulation-Based Approach for Studying the Balancing of Local Smart Grids with Electric Vehicle Batteries. Systems 2015, 3, 81–108. [Google Scholar] [CrossRef]
  4. Otto, B.; ten Hompel, M.; Wrobel, S. Designing Data Spaces, 1st ed.; Springer: Cham, Switzerland, 2022. [Google Scholar] [CrossRef]
  5. Siska, V.; Karagiannis, V.; Drobics, M. Building a Dataspace: Technical Overview. Gaia-X Hub Austria. 2023. Available online: https://www.gaia-x.at/en/press/download-whitepaper-building-a-dataspace-technical-overview-2/ (accessed on 7 July 2023).
  6. Gaia-X Trust Framework—22.04 Release. 2022. Available online: https://gaia-x.eu/wp-content/uploads/2022/05/Gaia-X-Trust-Framework-22.04.pdf (accessed on 27 April 2023).
  7. Gaia-X—Architecture Document—22.04 Release. 2022. Available online: https://gaia-x.eu/wp-content/uploads/2022/06/Gaia-x-Architecture-Document-22.04-Release.pdf (accessed on 27 April 2023).
  8. Rødseth, Ø.J.; Berre, A.J. From digital twin to maritime data space: Transparent ownership and use of ship information. In Proceedings of the 13th International Symposium on Integrated Ship’s Information Systems & Marine Traffic Engineering Conference ISIS—MTE 2018, Berlin, Germany, 27–28 September 2018. [Google Scholar]
  9. Sarabia-Jácome, D.; Lacalle, I.; Palau, C.E.; Esteve, M. Enabling Industrial Data Space Architecture for Seaport Scenario. In Proceedings of the 2019 IEEE 5th World Forum on Internet of Things (WF-IoT), Limerick, Ireland, 15–18 April 2019; pp. 101–106. [Google Scholar] [CrossRef]
  10. Sarabia-Jácome, D.; Palau, C.E.; Esteve, M.; Boronat, F. Seaport Data Space for Improving Logistic Maritime Operations. IEEE Access 2020, 8, 4372–4382. [Google Scholar] [CrossRef]
  11. Endsley, M.R. Toward a theory of situation awareness in dynamic systems. Hum. Factors 1995, 37, 32–64. [Google Scholar] [CrossRef]
  12. Munir, A.; Aved, A.; Blasch, E. Situational Awareness: Techniques, Challenges, and Prospects. AI 2022, 3, 55–77. [Google Scholar] [CrossRef]
  13. Endsley, M. Designing for situation awareness in complex system. In Proceedings of the Second International Workshop on Symbiosis of Humans, Artifacts and Environment, Kyoto, Japan, 12 November 2001. [Google Scholar]
  14. Ortiz, G.; Zouai, M.; Kazar, O.; Garcia-de-Prado, A.; Boubeta-Puig, J. Atmosphere: Context and situational-aware collaborative IoT architecture for edge-fog-cloud computing. Comput. Stand. Interfaces 2022, 79, 103550. [Google Scholar] [CrossRef]
  15. Trilles, S.; González-Pérez, A.; Huerta, J. An IoT Platform Based on Microservices and Serverless Paradigms for Smart Farming Purposes. Sensors 2020, 20, 2418. [Google Scholar] [CrossRef] [PubMed]
  16. Parvar, H.; Fesharaki, M.N.; Moshiri, B. Shared situation awareness architecture (S2A2) based on multi-resolutional level architecture. Int. J. Innov. Comput. Inf. Control 2014, 10, 183–210. [Google Scholar]
  17. Apache Spark. Available online: https://spark.apache.org/ (accessed on 23 May 2024).
  18. Apache Flink. Available online: https://flink.apache.org/ (accessed on 23 May 2024).
  19. Apache Storm. Available online: https://storm.apache.org/ (accessed on 23 May 2024).
  20. InfluxDB and Kapacitor. Available online: https://www.influxdata.com/ (accessed on 23 May 2024).
  21. Esper Complex Event Processing (CEP) Engine. Available online: https://www.espertech.com/esper/ (accessed on 23 May 2024).
  22. Azure Stream Analytics. Available online: https://azure.microsoft.com/en-us/products/stream-analytics (accessed on 23 May 2024).
  23. Amazon Web Services Internet of Things Analytics. Available online: https://aws.amazon.com/iot-analytics/ (accessed on 23 May 2024).
  24. Park, C.Y.; Laskey, K.B.; Costa, P.C.G.; Matsumoto, S. Multi-Entity Bayesian Networks Learning for Hybrid Variables in Situation Awareness. In Proceedings of the 16th International Conference on Information Fusion, Istanbul, Turkey, 9–12 July 2013. [Google Scholar]
  25. Damarla, T. Hidden Markov model as a framework for situational awareness. In Proceedings of the 11th International Conference on Information Fusion, Cologne, Germany, 30 June–3 July 2008; pp. 1–7. [Google Scholar]
  26. Zhang, J.; Jia, Y.; Zhu, D.; Hu, W.; Tang, Z. Study on the Situational Awareness System of Mine Fire Rescue Using Faster Ross Girshick-Convolutional Neural Network. IEEE Intell. Syst. 2020, 35, 54–61. [Google Scholar] [CrossRef]
  27. Nadj, M.; Maedche, A.; Schieder, C. The effect of interactive analytical dashboard features on situation awareness and task performance. Decis. Support Syst. 2020, 135, 113322. [Google Scholar] [CrossRef] [PubMed]
  28. Microsoft Power BI. Available online: https://www.microsoft.com/en-us/power-platform/products/power-bi (accessed on 23 May 2024).
  29. Tableau. Available online: https://www.tableau.com/ (accessed on 23 May 2024).
  30. Apache Superset. Available online: https://superset.apache.org/ (accessed on 23 May 2024).
  31. ArcGIS. Available online: https://www.arcgis.com/index.html (accessed on 23 May 2024).
  32. Google Data Studio. Available online: https://lookerstudio.google.com/overview (accessed on 23 May 2024).
  33. D3-Data Driven Documents. Available online: https://d3js.org/ (accessed on 23 May 2024).
  34. Apache Echarts. Available online: https://echarts.apache.org/en/index.html (accessed on 23 May 2024).
  35. Klapita, V. Implementation of Electronic Data Interchange as a Method of Communication Between Customers and Transport Company. Transp. Res. Procedia 2021, 53, 174–179. [Google Scholar] [CrossRef]
  36. Narayanan, S. Electronic Data Interchange: Research Review and Future Directions. Decis. Sci. 2009, 40, 121–163. [Google Scholar] [CrossRef]
  37. What Is Electronic Data Interchange (EDI)? Available online: https://www.ibm.com/topics/edi-electronic-data-interchange (accessed on 21 February 2024).
  38. Ehsan, A.; Abuhaliqa, M.A.M.E.; Catal, C.; Mishra, D. RESTful API Testing Methodologies: Rationale, Challenges, and Solution Directions. Appl. Sci. 2022, 12, 4369. [Google Scholar] [CrossRef]
  39. Ortiz, G.; Boubeta-Puig, J.; Criado, J.; Corral-Plaza, D.; Garciade-Prado, A.; Medina-Bulo, I.; Iribarne, L. A microservice architecture for real-time IoT data processing: A reusable Web of things approach for smart ports. Comput. Stand. Interfaces 2022, 81, 103604. [Google Scholar] [CrossRef]
  40. Velepucha, V.; Flores, P. A Survey on Microservices Architecture: Principles, Patterns and Migration Challenges. IEEE Access 2023, 11, 88339–88358. [Google Scholar] [CrossRef]
  41. Yu, Y.; Silveira, H.; Sundaram, M. A microservice based reference architecture model in the context of enterprise architecture. In Proceedings of the IEEE Advanced Information Management, Communicates, Electronic and Automation Control Conference (IMCEC), Xi’an, China, 3–5 October 2016; pp. 1856–1860. [Google Scholar]
  42. Almeida, W.H.C.; de Aguiar Monteiro, L.; Hazin, R.R.; de Lima, A.C.; Ferraz, F.S. Survey on Microservice Architecture -Security, Privacy and Standardization on Cloud Computing Environment. In Proceedings of the Twelfth International Conference on Software Engineering Advances, Athens, Greece, 8–12 October 2017; pp. 199–205. [Google Scholar]
  43. Microsoft. Microsoft Entra Verified ID Documentation. Available online: https://learn.microsoft.com/en-us/entra/verified-id/ (accessed on 14 June 2023).
  44. Walt.Id. Walt.Id Web Wallet Kit Documentation. Available online: https://docs.walt.id/v/web-wallet/wallet-kit/readme (accessed on 19 June 2023).
  45. Walt.Id. Walt.Id SSI Kit Documentation. Available online: https://docs.walt.id/v/ssikit/ssi-kit/readme (accessed on 20 June 2023).
  46. Trinsic. Trinsic Documentation. Available online: https://docs.trinsic.id/ (accessed on 15 June 2023).
  47. Blasch, E.; Xu, R.; Nikouei, S.; Chen, Y. A study of lightweight DDDAS architecture for real-time public safety applications through hybrid simulation. In Proceedings of the Winter Simulation Conference, National Harbor, MD, USA, 8–11 December 2019. [Google Scholar]
  48. Saragih, T.; Tanuwijaya, E.; Wang, G. The Use of Blockchain for Digital Identity Management in Healthcare. In Proceedings of the 10th International Conference on Cyber and IT Service Management (CITSM), Yogyakarta, Indonesia, 20–21 September 2022. [Google Scholar]
  49. Unity. Available online: https://unity.com/ (accessed on 23 May 2024).
  50. Xu, H.; Berres, A.; Yoginath, S.B.; Sorensen, H.; Nugent, P.J.; Severino, J.; Tennille, S.A.; Moore, A.; Jones, W.; Sanyal, J. Smart Mobility in the Cloud: Enabling Real-Time Situational Awareness and Cyber-Physical Control Through a Digital Twin for Traffic. IEEE Trans. Intell. Transp. Syst. 2023, 24, 3145–3156. [Google Scholar] [CrossRef]
  51. Jiang, D.; Huang, R.; Calyam, P.; Gillis, J.; Apperson, O.; Chemodanov, D.; Demir, F.; Ahmad, S. Hierarchical cloud-fog platform for communication in disaster incident coordination. In Proceedings of the 2019 7th IEEE International Conference on Mobile Cloud Computing, Services, and Engineering, MobileCloud, Newark, CA, USA, 4–9 April 2019; pp. 1–7. [Google Scholar] [CrossRef]
  52. Leibold, P.; Al Abri, O. An integrated web-based approach for near real-time mission monitoring. In Proceedings of the 2019 1st International Conference on Unmanned Vehicle Systems-Oman (UVS), Muscat, Oman, 5–7 February 2019. [Google Scholar] [CrossRef]
  53. Shaw, D.R.; Grainger, A.; Achuthan, K. Multi-level port resilience planning in the UK: How can information sharing be made easier? Technol. Forecast. Soc. Chang. 2017, 121, 126–138. [Google Scholar] [CrossRef]
  54. Adams, N.; Chisnall, R.; Pickering, C.; Schauer, S.; Peris, R.C.; Papagiannopoulos, I. Guidance for ports: Security and safety against physical, cyber and hybrid threats. J. Transp. Secur. 2021, 14, 197–225. [Google Scholar] [CrossRef]
  55. Olesen, P.B.; Hvolby, H.H.; Dukovska-Popovska, I. Enabling Information Sharing in a Port. In Advances in Production Management Systems; Emmanouilidis, C., Taisch, M., Kiritsis, D., Eds.; Springer: Berlin/Heidelberg, Germany, 2013. [Google Scholar]
Figure 1. A view to the main research question—a sample view of the problems of secure data sharing for situational awareness of a critical CPS area. The situational awareness representation in the figure is based on the Endsley model [11].
Figure 2. Sequence diagram depicting the sequence of events to handle a port accident scenario: forklift catches fire while loading a docked ship.
Figure 3. Core components of a dataspace.
Figure 4. Data exchange between a provider and a consumer happens in a peer-to-peer fashion. The provider enforces data usage policies before transferring data to the consumer. The data exchange transaction is recorded to the logging service by both the provider and the consumer, which could be used for conflict resolution in the future.
Figure 5. A view to securing situational awareness.
Figure 6. Dataspace core architecture. The technical environment of each dataspace participant includes a data asset layer, web application programming interfaces with access and usage policies, a local catalog of data offerings, self-description of the participant itself and their offerings, a verifiable credentials service, and a credential wallet. Moreover, dataspace federation services include the verifiable credentials service, a message broker hub, a registry service, and a federated catalog service.
Figure 7. Flow of acquiring dataspace membership.
Figure 8. Synchronization of the federated catalog with the local catalog of a data provider.
Figure 9. Process of data exchange contract between the data provider and the consumer.
Figure 10. Process of data exchange between the data provider and the consumer.
Figure 11. Catalog list example of a data provider.
Figure 12. Form to create a data catalog entry.
Figure 13. Image of the basic harbor photogrammetry mesh with no textures highlighted inside Blender.
Figure 14. A view to the 3D model of the Oulu Port area inside the Unity engine.
Figure 15. Unity editor view of Sim Object Data script and its associated variables on a simulation object. For simplicity, variables are handled only as integers and strings.
Figure 16. Simulated emergency services stakeholder inside Unity. Shows the data received by the emergency services from other stakeholders at emergency level 4.
Figure 17. Snippet of scenario manager script defining what happens at phase four of the scenario.
Figure 18. Snippet from the data manager script.
Figure 19. Basic overview of the player view and user interface.
Figure 20. Dashboard application landing page.
Figure 21. Supervisory view of the map of the Port of Oulu with markers.
Figure 22. Pop-up with the attributes of a map marker.
Figure 23. A view of the simulation set-up for validation. The upper part shows the digital twin of the simulation set-up, with the port environment view, simulated user interface of a single person acting with a vehicle, and supervisory dashboard showing the presence and locations of the physical assets included in the simulation. The lower part visualizes the five simulated stakeholders’ systems and federated dataspace services for data sharing required by the execution of the port accident scenario.
Figure 24. A sample dataspace membership-related verifiable credential within the OpenID Connect login endpoint.
Table 1. Comparison of technical features of Walt.id, Trinsic.id, and Microsoft Entra Verified ID (Data as of 26 June 2023) [43,44,45,46].

| Feature | Walt.id | Trinsic | Microsoft Entra Verified ID |
|---|---|---|---|
| Open source | Yes, Apache 2 license | No, based on the Hyperledger Aries open-source project | No, part of Microsoft’s Azure cloud platform |
| Ecosystem partners/support | EBSI/ESSIF, IOTA, OIDC, and more | OIDC | OIDC |
| Pricing | Free; paid services offered for Enterprise and Software as a Service (SaaS) cloud platforms | Free plan sufficient for this project scope; for larger projects, pricing varies | Included free in any Azure AD (Active Directory) subscription |
| Supports multitenancy | x | x | x |
| Language support | Java and Kotlin | Python, Java, Go, C#, and TypeScript | — |
| API | Yes | Yes | — |
| Storage | Storage kit: an external storage kit that can be combined with the wallet and SSI kits; runs locally or in the cloud; holds documents (VCs, for example) | — | — |
| Available support | Discord server for questions and issues; premium support for Enterprise- and cloud platform-paid plans | “Build” plan has Slack channel support; custom Build plan has 24/7 support | Azure support (Azure support plans), Microsoft support for business, and Microsoft Q&A |
| Gaia-X support | Yes | — | — |
| Compliance with | eIDAS, ESSIF, GDPR, W3C, and DIF | W3C, ToIP, and DIF | Has their own VC display model |
Table 2. Evaluation of Walt.id, Trinsic.id, and Microsoft Entra Verified ID (Data as of 26 June 2023) [43,44,45,46].

| SDK | Pros | Cons |
|---|---|---|
| Walt.id | Open source; wide functionality; lots of supported ecosystems; compliance with many standards; can export JSON Web Tokens (JWTs) and the secret keys used for creating the DID, which helps in our case (reuse of the JWT presentation of the VCs) | Ambiguous documentation, so it is not easy to find the correct way to implement it in one’s own project; support was lacking; multiple modules included without a clear explanation of how to combine them (e.g., the Self-Sovereign Identity (SSI) and wallet kits); the issue is how to implement it for our needs |
| Trinsic.id | Great programming language and SDK support; straightforward implementation; all-in-one infrastructure; OIDC support | Only the paid plan includes freer DID creation; cannot export VCs as JWTs? Cannot export DIDs or keys? Cannot export the keys that were used for creating the DID? |
| Microsoft Entra Verified ID | Controlling all necessary functions is made easy through Azure AD; Entra Permissions Management supports control over any identity and resource within Azure, AWS, and GCP | Implementation not possible if not using Azure AD services? |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
