Review

A Digital Twin Threat Survey

by Manuel Suárez-Román 1,2, Mario Sanz-Rodrigo 3, Andrés Marín-López 3 and David Arroyo 1,*
1 Instituto de Tecnologías Físicas y de la Información, Consejo Superior de Investigaciones Científicas, 28006 Madrid, Spain
2 School of Engineering, Science and Technology, UNIE Universidad, Calle Arapiles, 14, 28015 Madrid, Spain
3 ETSI Telecomunicación, Universidad Politécnica de Madrid (UPM), Avda. Complutense 30, 28040 Madrid, Spain
* Author to whom correspondence should be addressed.
Big Data Cogn. Comput. 2025, 9(10), 252; https://doi.org/10.3390/bdcc9100252
Submission received: 14 August 2025 / Revised: 25 September 2025 / Accepted: 28 September 2025 / Published: 2 October 2025

Abstract

Virtual and digital twins are highly valuable means of characterizing, modelling and controlling physical systems, providing the basis for a simulation environment and laboratory. In the case of a digital twin, a replica of a physical environment can be obtained by means of reliable sensor networks and accurate data. In this paper we analyse in detail the threats to the reliability of the information extracted from these sensor networks, along with a set of challenges to guarantee data liveness and trustworthiness.

1. Introduction

Digital twin (DT) technology has emerged as a promising solution to improve the efficiency and performance of complex physical systems, particularly those found in industrial and critical infrastructure environments. This technology consists of creating a virtual replica of a physical asset, commonly known as the digital twin, using real-time data collected from sensors or other sources. The DT paradigm can be used to simulate, validate, and optimize the performance of the physical counterpart; to test and evaluate changes or improvements to the system before applying them in order to identify potential critical failure points; and even to restore the system to a normal operation point after an incident.
DT technology emerged in the latter years of the 20th century, when the first mathematical models were used to describe the behaviour of complex engineering problems [1]. As with every other technology, it has evolved considerably since its first steps and, consequently, we can differentiate between several types of DT, each with specific characteristics and applications. The first to be developed historically are called Virtual Twins (VT). A VT is a simplified or abstracted digital model of a certain physical asset or system that can be used for simulation, training, and testing purposes. As technology evolved and more efficient computation systems arose, DTs surpassed VTs. A DT is a highly accurate virtual model of a physical asset or system, commonly employed in healthcare, transportation, and manufacturing. However, this approach became outdated as new technologies for gathering data from physical assets were developed. There was a need to evolve the technology further and incorporate the benefits of continuous synchronization between real-world data and data obtained from simulation, rather than the static model generated in a DT. Consequently, Hybrid Twin technology emerged. Hybrid Twins establish a permanent bridge between the physical and digital worlds in real time using a combination of physical sensors and virtual models, and are often used in autonomous vehicles and other complex systems. Finally, a physical twin (PT) refers to the physical asset that is intended to be simulated [1].
The use of the latest DT technology, referred to in the previous paragraph as Hybrid Twins, holds great promise for enhancing the performance and efficiency of complex physical processes. However, it is important to recognize that proper security measures must be taken to ensure their reliability and effectiveness. Due to their capacity to offer real-time insights into the behaviour and performance of physical systems, DTs have quickly gained popularity in recent years, becoming a promising technology for improving the efficiency and effectiveness of industrial and critical infrastructure systems. They are employed in many different fields, such as manufacturing, healthcare, transportation, and smart grids, to name a few; see Jafari et al. [2].

1.1. Structure of Modern Digital Twins

As mentioned in the previous section, digital twins form complex systems in which information must be distributed accurately and efficiently. Thus, they are composed of multiple services, each with its own key function in the process. To understand more fully the security risks associated with DT systems, they have traditionally been divided into four layers, as stated in [3,4] and shown in Figure 1. In this figure, we can see the different layers and components of a DT. The layers are separated by dashed vertical lines, forming a layer sequence, with the exception of the data management and synchronization layer, which is present both on the bidirectional data flow between the data acquisition and data modelling layers and on the data transmitted from the data modelling layer to the data visualization layer. In yellow, we have represented where the hardware components are located on the scheme. In blue, we have identified all the stages in which data is involved. In green, we mark the position of the AI and software components of a typical digital twin structure. The DT layers match those described by the International Organization for Standardization (ISO) [5]:
  • Data Acquisition Layer
    The data acquisition layer is the first layer that takes part in the flow of information generated by a DT. It is responsible for collecting data from sensors attached to physical objects. As underlined in [6], the implementation of DT demands a vast amount of data collected through sensors, embedded systems, offline measurement and sampling inspection methods over physical entities.
  • Data Management and Synchronization Layer
    After successfully gathering all the information available from the physical device, the data must be managed and synchronized. In this layer, all the storage and processing operations on the data collected from the physical object, as well as the transmission from the physical device to the digital part of the DT, take place.
  • Data Modelling Layer
    Once the information flow has arrived at the DT, that information must be processed in order to obtain the useful output desired when implementing the paradigm. The data modelling layer is then responsible for creating and updating the DT based on the data collected from the physical object in the first layer and transmitted in the second one. At this level, the Artificial Intelligence (AI) governing the model can draw its own conclusions from the data received and the actual state of the model, which provides a highly valuable asset for companies seeking to automate engineering and management operations [7]. In addition, the adoption of modern techniques such as AI has been identified as a key factor in the growth and use of digital twins [8,9,10]. In this section and the rest of this work we will refer to Artificial Intelligence as it is defined in the proposed EU AI Act: “a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments” [11].
  • Data Visualization Layer
    After all the data gathering and processing has occurred, the DT presents to its operator all the conclusions obtained from the information flow. Traditionally, the operator, following the organization's policies and rules and after carefully analysing these representations, makes the corresponding decisions on the physical system. In modern DT systems, this role is progressively assumed by the AI, leaving the operator a monitoring role.
Once the structure of a DT has been fully understood, we identify three key components within its entirety: hardware, data, and Artificial Intelligence. These elements interact with each other to achieve the primary objective of a digital twin, which is to represent a real-world object or chain of objects within a manufacturing process. Each component plays a distinct role:
  • Hardware: Attached to the real-world object, the physical twin is responsible for gathering all the necessary information to virtualize it. It is clearly connected to Internet of Things (IoT) devices, whose growth is significantly contributing to the adoption of the digital twin paradigm as a standard in manufacturing processes for Industry 4.0 [10,12].
  • Data: This refers to the information exchanged at every stage of the digital twin process. It encompasses data generated by the hardware components in the data acquisition layer, as well as data resulting from transformations made in the rest of the layers.
  • Artificial Intelligence and Software: This encompasses all software components within the digital twin paradigm. Of particular note is the AI system, which can vary widely, from deep learning techniques [13] or Generative Adversarial Networks (GANs) [14] to more classical Machine Learning algorithms such as Random Forest [15] or Support Vector Machine [16], among others. All of them play a key role in the representability of the generated digital twin, as AI techniques are used to collect, analyse, and test the data from a physical object to build its digital copy [17,18].

1.2. Methodology

In order to formulate our research questions, we use the PICOC methodology. PICOC was first defined by Mark Petticrew and Helen Roberts [19] in 2006 and stands for Population, Intervention, Comparison, Outcomes and Context. PICOC has proved effective in scientific research related to technological areas such as Artificial Intelligence [20] and privacy-enhancing technologies [21], among others. With this methodology, we must address five main aspects:
  • The Population in which the evidence is being collected, the group of entities that are of interest for the review.
  • The Intervention applied in the study, more specifically, which technology is in the scope of this study or which area related to that technology is being reviewed.
  • The Comparison to the previously identified intervention.
  • The Outcomes we would like to achieve by developing this review.
  • The Context of the study.
This theory states that the PICOC elements are important in a focused search for evidence, and that each change in them can lead to different evidence being obtained and, therefore, different search outcomes. To apply it, we must ask ourselves five questions, one per PICOC element, assigning to each one or several keywords associated with the answer to the question, as shown in Table 1. As our study does not aim to compare our outcomes with any other, we have discarded the Comparison question as it is not applicable.
Using the attributes derived from PICOC, we can now formulate the research questions (RQ) we would like to focus on. All of them aim at a more secure industrial and non-industrial environment in which digital twin technology is being implemented:
  • RQ1: Which are the main cybersecurity risks to the digital twin paradigm?
  • RQ2: Is it feasible to employ a risk analysis framework from a different computation paradigm in a DT system?

1.3. Search Process

After having analysed the scope of this review and formulated the corresponding research questions, we have determined the search strings that can be used for finding publications relevant to those questions. These search strings can be obtained from the PICOC table, from the research questions themselves, or by considering variations, synonyms and abbreviations. Closely related terms have also been taken into account in order to cover a wider range of publications. We only consider those results with the search strings contained in the title, the abstract or the keywords of the article. In addition, a combination of AND–OR clauses has sometimes been used. Only publications more recent than 2019 have been taken into account. The search strings were obtained by combining two sets of strings:
  • General strings: “digital twin”, “digital twins”, “digital twinning”.
  • Additional strings: “attack”, “attestation”, “cyberattacks”, “cyberthreat”, “dependability”, “dependable”, “disaster”, “honey”, “honeypot”, “intrusion detection”, “malware”, “quantum computing”, “quantum”, “ransomware”, “reliability”, “remote safety”, “risks”, “threats”, “trustworthiness”, “verification”.
Papers were obtained by querying the selected database using the combination of each of the strings in the set “General strings” with each of the strings in the set “Additional strings”. Figure 2 shows an example string used to query the Scopus database and find scientific publications related to malware attacks in DTs. To this end, we specified that the words digital, twin and malware must be present in the publication title, abstract or keywords, and we limited the search to fully open access publications, alongside the already discussed language and year restrictions.
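The pairwise combination of the two string sets can be reproduced programmatically. The following sketch is illustrative only: it assumes Scopus advanced-search field codes (TITLE-ABS-KEY, PUBYEAR) and builds one query per general/additional pair.

```python
from itertools import product

general = ["digital twin", "digital twins", "digital twinning"]
additional = ["attack", "attestation", "cyberattacks", "cyberthreat",
              "dependability", "dependable", "disaster", "honey", "honeypot",
              "intrusion detection", "malware", "quantum computing", "quantum",
              "ransomware", "reliability", "remote safety", "risks", "threats",
              "trustworthiness", "verification"]

def build_queries(general, additional, year=2019):
    """Combine each general string with each additional string into a
    Scopus-style advanced query restricted to title/abstract/keywords
    and to publications more recent than `year`."""
    return [
        f'TITLE-ABS-KEY("{g}" AND "{a}") AND PUBYEAR > {year}'
        for g, a in product(general, additional)
    ]

queries = build_queries(general, additional)
print(len(queries))  # 3 general strings x 20 additional strings = 60 queries
```

Each resulting string corresponds to one database search; the union of their result sets forms the initial pool of publications.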
The scientific database used for extracting the documents is Scopus, as it is considered to cover a sufficiently wide range of publications for the following analysis to be performed.

1.4. Study Selection Criteria

Figure 3 summarizes the selection criteria for the papers in this survey. The total number of scientific publications found with this methodology was 9261. This database was reviewed and some of its elements were removed following the criteria stated in the following diagram:
Exclusion criteria:
  • We have removed duplicated articles, leaving us with a total of 3244 articles.
  • We have deleted those publications with no identified DOI.
  • We have removed those publications not publicly available. For checking this, we have used the Unpaywall Database [22]. This step left us with a total number of 1204 open access publications.
  • Only publications written in English and Spanish have been considered.
  • The publication must answer at least one research question; otherwise, it has been excluded.
After the complete process, around 200 articles were finally selected and taken into consideration for the writing of this survey. The selected articles were qualitatively reviewed without applying any sort of quantitative analysis.
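The exclusion steps above can be sketched as a simple filtering pipeline. The record fields used here (doi, open_access, language, answers_rq) are hypothetical placeholders for the metadata actually checked (e.g. open access status via Unpaywall), not part of the surveyed works:

```python
def filter_records(records):
    """Apply the exclusion criteria in order: deduplicate by DOI,
    drop records without a DOI, keep only open-access publications,
    keep only English/Spanish, keep only those answering at least
    one research question."""
    seen, selected = set(), []
    for r in records:
        doi = r.get("doi")
        if not doi or doi in seen:          # no DOI, or duplicate
            continue
        seen.add(doi)
        if not r.get("open_access"):        # checked via Unpaywall in the survey
            continue
        if r.get("language") not in ("en", "es"):
            continue
        if not r.get("answers_rq"):         # answers no research question
            continue
        selected.append(r)
    return selected

sample = [
    {"doi": "10.1/a", "open_access": True, "language": "en", "answers_rq": True},
    {"doi": "10.1/a", "open_access": True, "language": "en", "answers_rq": True},   # duplicate
    {"doi": None, "open_access": True, "language": "en", "answers_rq": True},       # no DOI
    {"doi": "10.1/b", "open_access": False, "language": "en", "answers_rq": True},  # paywalled
    {"doi": "10.1/c", "open_access": True, "language": "fr", "answers_rq": True},   # language
]
print(len(filter_records(sample)))  # 1
```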

2. Digital Twins’ Risks

As we have already seen in Section 1, the DT paradigm represents a promising technology that aims to boost industrial performance. However, although it is being implemented by more and more companies and institutions in their manufacturing processes, hazards and threats exist for all of them, just as for any other digital technology. The continued development and the adoption of DT technology will likely be an important focus of research and innovation in the years ahead.
Threats to DTs are wide-ranging; they span from cyber-attacks and data breaches to privacy violations, and they can affect integrity, availability, and confidentiality, compromising the DT paradigm as a result. In addition, sensitive data, such as intellectual property, can be stolen or lost as a result of data leaks, which can have very serious consequences for the organization that has implemented the DT. Hence the importance of securely protecting the whole DT paradigm against privacy violations, which can potentially expose sensitive data. Other repercussions of DT threats are monetary losses or alterations to the physical facilities of the owner of the DT. An attack on a DT deployed in a manufacturing process can halt production and result in large financial losses. Similarly, a data breach in a healthcare DT, an area where DTs are being used more and more frequently, might expose patients' private data. Understanding potential hazards to DTs is therefore crucial, as is taking the necessary precautions to lessen them.
Attackers' main goal is to effectively compromise DTs and thus also affect the systems that rely on this technology. An attacker who gains access to a DT may exploit it to take control of the physical device the DT is linked to. This might give the attacker unwanted authority to adversely affect the physical system, harming the business or the people who depend on it.
For instance, if an attacker gains access to the DT that regulates the temperature of a certain facility, they will be able to use it to raise the temperature to dangerous levels, endangering the health of everyone inside or taking advantage of that situation for another type of attack they may have planned previously.
Cyber criminals who successfully exploit vulnerabilities of DTs may be able to steal private information or other intellectual property. In the field of science, both researchers and developers regularly use DT-based systems, and attackers who gain access to them might be able to steal confidential data or priceless intellectual property; as a result, the attacker gains a significant tactical advantage or can sell the stolen data.
Given the potential benefits for criminals who successfully compromise DTs, it is imperative to make the effort to secure them appropriately. This entails putting in place all the security controls established by reference frameworks such as NIST CSF 2.0 [23], which advises authenticating all users and entities, protecting the confidentiality and integrity of all the information fed from the sensors to the DT, and conducting regular evaluations of logs and security trails to detect any potential indication of an attack or any security/safety problem. Indeed, those that rely on DTs may suffer deeply as a result of inadequate security, and attackers may gain a major advantage.
To understand the different threats a digital twin can face, one must take into account the morphology of the DT. Each layer that forms a DT presents a range of potential threats that can cause the malfunction of the whole system, as in every other complex information system [24]. With this perspective, Alcaraz et al. [4] present a classification of security threats based on the five main components of a digital twin (Industrial Internet of Things, computing infrastructure, virtualization systems, computing techniques, and Human–Machine interfaces), linking them to a series of previously defined operational requirements [25,26,27]. In Neshenko et al. [28], a different approach is considered, and threats are divided only according to their target: confidentiality and authentication, data integrity, and availability. Additionally, they considered different aspects of the IoT ecosystem, such as layer vulnerabilities, security impacts, countermeasures and situational awareness capabilities.
Mullet et al. [29] conducted a comprehensive review of the cybersecurity guidelines that must be followed by manufacturing companies to implement safe digital twin ecosystems, and of the threats linked to loss of availability, integrity and confidentiality according to the business impact they potentially pose.
In this paper we have grouped together the information in these works and extended it in order to create a list of security threats to digital twins based on the layers that compose their morphology, as defined in Section 1.1.

2.1. Hardware Threats and Countermeasures

The main threat to the DT paradigm concerning hardware is that an attacker can compromise the multiple sensors employed to gather data. As highlighted in [23], endpoint verification must be regularly conducted to prevent the physical parts of a DT from being supplanted or spoofed [30,31]. In this regard, it is possible to leverage different secure device onboarding protocols [32,33]. It is also necessary to apply Remote Attestation to protect sensors and endpoints from malware. This security technique allows the integrity of the platform to be verified and unauthorized modifications to be detected, using cryptographic measures to ensure that the platform has not been compromised. However, Remote Attestation schemes are difficult to scale [34], and this can be a severe problem for complex DT systems. Thus, collective Remote Attestation schemes have been proposed to fix this issue.
Collective Remote Attestation (CRA) is a variant of the Remote Attestation scheme that allows the detection of coordinated attacks and facilitates the analysis of suspicious behaviour. The main difference from Remote Attestation is that instead of verifying the integrity of a single DT, a joint verification of several of them is performed. This technique is more suitable in environments where multiple DTs are used and a higher detection and response capability is required against more sophisticated attacks. In addition, as mentioned before, collective Remote Attestation also offers greater efficiency and scalability compared to Remote Attestation, making it a more viable option for application in large industrial environments when deploying a DT.
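At its core, each attestation round compares the software measurements reported by a set of devices against reference values held by the verifier. The following minimal sketch shows only that comparison step; the device identifiers, firmware labels and the enrolment model are illustrative assumptions, and real schemes add authenticated channels and device-driven scheduling of the attestation moment:

```python
import hashlib
import hmac

# Reference measurements ("golden hashes") recorded at enrolment time.
# Device IDs and firmware labels are purely illustrative.
REFERENCE = {
    "sensor-01": hashlib.sha256(b"firmware-v1.4").hexdigest(),
    "sensor-02": hashlib.sha256(b"firmware-v1.4").hexdigest(),
    "plc-07":    hashlib.sha256(b"firmware-v2.0").hexdigest(),
}

def attest_swarm(reports):
    """Collective verification step: compare each device's reported
    measurement against its reference and return the set of devices
    whose software state deviates (potentially compromised)."""
    compromised = set()
    for device_id, reported in reports.items():
        expected = REFERENCE.get(device_id)
        # constant-time comparison to avoid leaking hash prefixes
        if expected is None or not hmac.compare_digest(expected, reported):
            compromised.add(device_id)
    return compromised

reports = {
    "sensor-01": hashlib.sha256(b"firmware-v1.4").hexdigest(),
    "sensor-02": hashlib.sha256(b"malware-implant").hexdigest(),  # tampered device
    "plc-07":    hashlib.sha256(b"firmware-v2.0").hexdigest(),
}
print(attest_swarm(reports))  # {'sensor-02'}
```

Schemes such as SeED and ERASMUS, discussed below, additionally let each device decide when to attest using a secure time source, which is what makes them robust against several of the adversary classes described next.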
The collective Remote Attestation scientific community typically differentiates five types of attackers [35,36]: the Software Adversary, who has the ability to execute malicious code on a device; the Mobile Software Adversary, capable of compromising a device's software configuration and removing any trace of their presence on it; the Physical Non-Intrusive Adversary, who can infer device information; the Stealthy Physical Intrusive Adversary, capable of capturing a device and attempting to extract information from it; and the Physical Intrusive Adversary, who additionally has the ability to introduce external hardware into it.
Ambrosin et al. [37] studied different CRA scheme designs and outlined, among other characteristics, their mitigation capabilities against every type of adversary previously discussed. As a result, ERASMUS [35] and SeED [38] stood out among the rest, being vulnerable only to Physical Non-Intrusive and Physical Intrusive Adversaries. Both schemes share the same basis: they allow the device to authenticate and independently decide the moment when attestation occurs using a certain secure source of time (a Real-Time Clock and Attestation Trigger for SeED and a Reliable Read-Only Clock for ERASMUS). Thus, additional hardware items must be added to physical twins in order to execute these schemes. However, they face some security problems, such as the lack of a DoS mitigation policy. This issue can be addressed by implementing a blockchain approach such as that proposed in [39], where a mechanism is designed over Hyperledger Sawtooth, or in [40], which uses Hyperledger Fabric. However, blockchain-based collective Remote Attestation schemes can suffer long latency in transaction validation processes, a critical point, as these CRA schemes derive part of their potential from being fast.
Another CRA protocol that can be highlighted is SALAD [41], which also mitigates three different types of adversaries and is the only scheme that provides defence mechanisms against Physical Non-Intrusive Adversaries. Physical Non-Intrusive Adversaries are capable of stealing information from the devices merely by being physically located next to the physical twin, allowing them to mount side-channel attacks. This is a main threat to the DT paradigm, as new techniques are being developed to perform this type of attack using Machine Learning and deep learning, what the literature refers to as Side-Channel Analysis. These techniques mainly use two surrogate loss functions, Negative Log Likelihood and Mean Square Error, and show that the deep learning approach can be highly suitable for evaluating implementations secure against Side-Channel Attacks from a worst-case scenario point of view [42]. However, the main pitfall of SALAD is its runtime: it is a complex design, which provokes a high overhead, making it suitable only for environments with a small number of nodes.
Hardware components are also vulnerable to suffering reverse engineering attacks [43]. DTs can help to evaluate the divergence between a real model and the one virtualized through a DT. Divergence measures can be used to solve inverse problems related to the inference of hardware design decisions which, in the end, could posit an infringement of intellectual property [44].
At the interface between hardware threats and AI threats, we must refer to frameworks such as the one presented by Clements and Lao in [45], which allows hardware Trojan attacks to be introduced into neural networks at the production stage by any third party in the supply chain. Such hardware Trojans must be very stealthy, since infected devices must pass a normal functional test without being detected. This implies that the activation conditions which trigger the Trojan must be rare and specific. Ref. [45] proposes expanding the taxonomy of neural network attacks to incorporate this new type of attack.

2.2. AI Threats and Countermeasures

As explained in Section 1.1, Artificial Intelligence plays an important role in modern digital twin structures: by applying Machine Learning techniques, the system is able to predict the output in a more comprehensive way, thus achieving a better-performing digital twin. However, the use of such Artificial Intelligence techniques implicitly involves a new challenge that must be addressed: the potential security flaws arising from their use.
It has already been proved that these techniques are not completely secure and that methodologies increasingly widespread in industrial environments have their own critical fault points. One of these methodologies, outsourcing the training process of the Machine Learning model (MLaaS), is becoming especially popular, as the high computational capacity needed is not affordable for the companies that develop the model. MLaaS in the hands of an untrustworthy company is prone to unwanted behaviour if suitably manipulated, as described in [46]: a backdoor can be planted in any trained neural network model that is undetectable at first glance unless every mathematical intricacy of the resultant neural network is studied in detail. In fact, this is an upcoming problem for the Machine Learning industry, and several backdoor algorithms have already been developed using a wide variety of techniques. For instance, Wu et al. [47] classify attack algorithms depending on their threat model, establishing a differentiation between those algorithms in which the attacker is able to manipulate the training data, such as BadNets [48] or SSBA [49], and those in which the attacker is able to manipulate both the training process and the data.
Any Machine Learning technique requires an initial data set that is used to train the model to solve the problem posed. The problem of uncertainty quantification can be summarized as follows: for inputs that the trained model subsequently receives that are the same as (or similar to a certain extent to) those provided in the training set, the behaviour is exactly as desired; however, more noticeable deviations are observed when the situation that the DT tries to solve differs widely from those provided in the initial dataset, i.e., when a completely new situation is being treated [50]. This is a problem induced by the inherent variation or randomization of the techniques or by a lack of initial knowledge, and only the second cause can be remedied by adding sufficient knowledge [51]. Thus, any DT based on Bayesian neural networks or on Machine Learning techniques such as interval analysis or fuzzy logic is affected by the problem of uncertainty quantification. As a possible solution, Lin et al. [50] propose the concept of meta-learning in the DT learning process, which favours the generalization and learning capabilities of neural networks by conducting an exhaustive evaluation of different configurations for neural networks and by leveraging expert knowledge on the identification, modelling and practical validation of specific DT tasks. As discussed in [50], model selection is carried out by applying sequential model-based optimization, which implies the design of a meta-learner by a surrogate model and the minimization of a performance function through conditional entropy and a bandit-based criterion.
It is also recommended to perform a systematic risk analysis—which can be performed probabilistically—using methods capable of demonstrating a high reliability of critical systems and that take into account the differences between testing and operational conditions when using the DT in a production environment.
Finally, it is important to note that, as we are constantly dealing with Machine Learning models, they require an abundant amount of data to be trained and, therefore, it is necessary to consider the privacy needs of the data. For this purpose, the main trend gaining strength in recent years is to implement federated learning [52], which can be defined as a collaborative Machine Learning technique that does not need to centralize the training data. In this methodology, the prediction models shared across the different local devices collaborate while each device maintains its own training data. Once each local device computes its model, it sends a summary of the model to a server that aggregates all the information globally, generating a better model while keeping the data in the user's hands [53]. However, while the use of federated learning is a solution to data privacy issues, it is important to note that it makes the models much more vulnerable to attack by one or multiple malicious participants within the federated structure, who can inject malicious model replacements to introduce unwanted functionalities and backdoors into the final result. In fact, Bagdasaryan et al. [54] analyse the implications of this type of attack and confirm that it is much more powerful than classic data poisoning attacks, making the DT network that employs federated learning more vulnerable.
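The model-replacement idea analysed in [54] can be illustrated with a toy sketch of federated averaging, where plain lists stand in for model weight vectors (the numbers and participant counts are invented for illustration): a single attacker who knows the aggregation rule can scale its update so that the aggregate becomes (approximately) the attacker's backdoored model.

```python
def fed_avg(updates):
    """Plain federated averaging: the server averages the weight
    vectors submitted by the participants, coordinate by coordinate."""
    n = len(updates)
    return [sum(w[i] for w in updates) / n for i in range(len(updates[0]))]

# Three honest participants whose local models are close to each other.
honest = [[1.0, 2.0], [1.1, 2.1], [0.9, 1.9]]
print(fed_avg(honest))  # close to [1.0, 2.0]

# A model-replacement attacker scales a backdoored target model so that,
# after averaging with the benign updates, the aggregate is the target.
target = [5.0, -3.0]
n = 4  # total participants, including the attacker
benign_sum = [sum(w[i] for w in honest) for i in range(2)]
malicious = [n * target[i] - benign_sum[i] for i in range(2)]
print(fed_avg(honest + [malicious]))  # close to [5.0, -3.0]
```

This is why a single malicious participant can dominate an unweighted average, and why defences such as update clipping or robust aggregation are needed on top of plain federated averaging.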

2.3. Threats to the Data Life-Cycle and Countermeasures

Data is generated, transported and consumed. Normally the flow of data goes from the sensors of the physical system to the DT, where data is consumed. Data may change the DT state, and this may trigger reactive behaviours leading to further actuations in the physical system. In this section we analyse threats to the data life-cycle as follows:
  • Threats to data generation
    • Low-Quality Data Threat: In the context of twin technology, this threat can occur in various scenarios, both in the communication between the physical twin and the DT, as well as in communication between DTs. Initially, it can impact the reliability of actions or decisions made by the DT regarding its physical counterpart and can affect inter-twin communications in case collaboration is required.
    • Model Inconsistency Attack: A physically manipulated element can maliciously alter the parameters used in the model training phase, leading to operational issues and even privacy concerns; see Pasquini et al. [55].
    • Model Poisoning Attack: In a collaborative context involving different DTs, an attacker may manipulate the parameters of the AI model with the aim of disrupting collaborative functionality; see Tian et al. [56]. In federated learning environments, local models could be tampered with in an attempt to influence the results of the global model aggregation. The guidelines of [23] to guarantee the reliability of decision making in DTs can be implemented, for example, by means of black-box audits using verifiable computation [46]. This approach could help to detect both model inconsistency and poisoning attacks.
    • Threats to Data Backup: Another threat inherited from IT environments pertains to backup data, which serves as a recovery mechanism in the event of data loss or corruption, a necessity to ensure the life-cycle of services associated with DT technology; see Su et al. [57]. Alterations to backup data could result in system malfunction following a restoration operation. On this point, it is necessary to implement backup encryption and strict access control to all backups, along with regular verification of backup integrity [23].
    • Data/Content Poisoning Attack: This type of threat directly impacts interactions between twins (see Nour et al. [58]), as it can disrupt the system’s operation by injecting malicious data and even interfere with the model training process. Remote Attestation should be used here to detect any attempt to carry out this type of attack. In addition, and according to the guidelines in [23], all information flows should be verified, for example, by creating robust confidential and authenticated channels, which demands the deployment of a robust Public Key Infrastructure and the adoption of IPsec to protect the control plane in the DT.
  • Threats to data transmission: As communication flows from hardware devices to the computerized structures supporting the Artificial Intelligence models discussed in the previous section, information is transmitted through wired or wireless channels. This involves classic communication protocols, and hence the transmitted data may be susceptible to typical attack typologies. We would like to emphasize that, since this technology aims to become predominant in the industrial, healthcare [59], and autonomous driving [60] sectors, its implementation and security must also take into account potential threats arising from the advent of quantum computers.
    • Data Tampering: Without a mechanism to ensure the authenticity of exchanged information, a DT-based system is vulnerable to forgery, alteration, replacement, or deletion of data. This type of threat may manifest during the DT creation phase, by contaminating the training dataset with manipulated data, thereby disrupting the DT’s functionality from its inception.
    • Desynchronization of DT: A distinctive feature of DT-based technology is its bidirectional communication, which, in order to function correctly, must remain synchronized. In response to this requirement, an attacker could compromise the system by tampering with the synchronization associated with twin interactions; see Gehrmann et al. [61]. This threat broadens the attack vector, as it is possible to disrupt either twin (digital or physical) by simply altering the synchrony of its corresponding twin.
    • Eavesdropping Attack: Although not specific to DT-based technology, this type of threat applies to any communication over open or improperly secured channels, allowing an attacker to gain access to transmitted data.
    • Message Flooding Attack: This type of attack can impact both inter- and intra-twin communications and may trigger a DoS attack.
    • Interest Flooding Attack: Such an attack affects not only the availability of the digital resource but can also interfere with the operation of the DT by executing queries that raise CPU usage or increase memory utilization, among other problems [62].
    • Man-in-the-Middle (MITM): This is a typical threat to the network infrastructures used by the DT, especially when communication is carried out over wireless networks, where malicious servers can be positioned in the middle of the DT information flow to launch an MITM attack. An insider can launch a routing attack on the devices under their control and modify the information sent, trace the network traffic sequence or degrade the quality of the DT maintenance process by altering the DT database or the final representation of the data.
    • Denial of Service (DoS): DoS attacks mainly affect the data acquisition layer devices and aim to facilitate further attacks, although the physical resources (servers) of the virtualized twin may also be affected. Where the organization deploys a decentralized infrastructure, however, the damage is minimized. This is among the most difficult attacks to carry out, as DTs are usually set up in closed environments, requiring the attacker to be close, physically or digitally, to the devices or servers to be compromised.
    Protection against interception attacks involves encryption, integrity verification, and the deployment of adequate security controls for authentication, accountability and auditing. Given the heterogeneous nature of the DT ecosystem, there is a need to promote the adoption of standards and interoperable technical solutions. Again, the NIST Cybersecurity Framework provides a set of very useful guidelines for promoting those standards in DTs [63].
  • Threats to data usage/consumption
    • Private Information Extraction With Insiders: This type of threat occurs in both types of communications within a DT ecosystem. An insider may leverage their position to extract sensitive information shared with the DT by legitimate physical twins. This information could enable access to the DT within the system, facilitating attacks within the infrastructure itself.
    • Privacy Leakage in Model Aggregation: Under the collaborative learning paradigm, there is a risk of information leakage during the model aggregation phase applied to the DT ecosystem [64].
    • Privacy Leakage in Model Delivery/Deployment: When offering global AI models from cloud infrastructures, there is a potential risk of model theft during communications between different DTs. If the AI model is obtained, it may be possible to infer information contained in the model parameters. Additionally, with the model in hand, it is possible to implement techniques that allow unauthorized access, enabling the alteration of the model’s output at the attacker’s discretion [65].
    • Knowledge/Model Inversion Attack: During the usage phase of a DT, there is a risk of extracting representations of the training data from the AI model, known as a knowledge/model inversion attack. This type of information extraction can be classified into two types: white-box attacks, in which the attacker gains direct access to the AI model along with all associated information, and black-box attacks, in which interaction with the AI model occurs through APIs to obtain related information [66].
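As a minimal illustration of the authenticated-channel countermeasures mentioned above against tampering, eavesdropping and MITM attacks, the sketch below appends an HMAC-SHA256 tag to each sensor message so that modification in transit is detected. The pre-shared key and message format are assumptions of this sketch; a production DT deployment would instead rely on full protocol suites such as TLS or IPsec with a proper PKI, plus nonces or timestamps to counter replay and desynchronization.

```python
import hashlib
import hmac
import os

TAG_LEN = hashlib.sha256().digest_size  # 32 bytes

def protect(key: bytes, payload: bytes) -> bytes:
    """Append an HMAC-SHA256 tag so the receiver can detect tampering."""
    tag = hmac.new(key, payload, hashlib.sha256).digest()
    return payload + tag

def verify(key: bytes, message: bytes) -> bytes:
    """Return the payload if the tag is valid; raise ValueError otherwise."""
    payload, tag = message[:-TAG_LEN], message[-TAG_LEN:]
    expected = hmac.new(key, payload, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):  # constant-time comparison
        raise ValueError("integrity check failed: message was tampered with")
    return payload

key = os.urandom(32)  # pre-shared sensor/DT key (an assumption of this sketch)
msg = protect(key, b'{"sensor": "temp-01", "value": 21.7}')
assert verify(key, msg) == b'{"sensor": "temp-01", "value": 21.7}'

tampered = msg[:12] + b"9" + msg[13:]  # attacker flips one payload byte
try:
    verify(key, tampered)
except ValueError:
    pass  # tampering detected and rejected
```

Note that HMAC alone provides integrity and authenticity but not confidentiality: an eavesdropper can still read the payload, which is why encryption (e.g., an AEAD cipher) is required alongside it.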

3. A New Framework for Identifying Digital Twin Threats

Having identified the possible threats to the digital twin paradigm, it is necessary to consider a framework for prioritizing those that most affect the organization implementing the system.
Several risk management frameworks have been proposed by the information security community. Among frameworks with a more general approach is the Information Security Risk Analysis Method (ISRAM) [67], a quantitative methodology to assess and manage information security risks in a consistent and efficient way. Microsoft’s STRIDE [68] method is, in contrast, a qualitative methodology for identifying and analysing security threats to software systems, complementing ISRAM with a more in-depth approach at the threat analysis stage and defining six types of threats (spoofing, tampering, repudiation, information disclosure, denial of service and elevation of privilege). However, these methodologies often tend to be too open-ended, leaving it up to the user to answer ambiguous questions to determine the degree of risk. More complex frameworks have also been proposed using probabilistic relational models [69] or weight matrix systems [70,71].
Ganin et al. [72] define a framework in which the risks for cybersecurity assessment and management are divided into threats, vulnerabilities and consequences, according to the ease of attack, the potential outcomes for the attackers and the attacked domain, using Multi-Criteria Decision Analysis (MCDA) [73]. However, this taxonomy does not capture which parts of the system are affected by each of the risks analysed. In addition, it aims to provide a general framework, which may lack specificity when applied to digital twins.
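The multi-criteria prioritization idea behind such frameworks can be sketched as a simple additive weighted score over threat attributes. The criteria, weights and per-threat scores below are illustrative assumptions and do not come from [72] or [73]; a real MCDA exercise would elicit them from domain experts.

```python
# Toy additive multi-criteria score for prioritizing DT threats.
# Criteria, weights and per-threat scores (1-5) are illustrative assumptions.

CRITERIA_WEIGHTS = {"ease_of_attack": 0.3, "impact": 0.5, "exposure": 0.2}

threats = {
    "Data Tampering":    {"ease_of_attack": 4, "impact": 5, "exposure": 3},
    "Eavesdropping":     {"ease_of_attack": 3, "impact": 3, "exposure": 4},
    "Denial of Service": {"ease_of_attack": 2, "impact": 4, "exposure": 2},
}

def risk_score(scores):
    """Weighted sum of the criterion scores for one threat."""
    return sum(CRITERIA_WEIGHTS[c] * v for c, v in scores.items())

# Rank threats from highest to lowest aggregated risk.
ranked = sorted(threats, key=lambda t: risk_score(threats[t]), reverse=True)
```

The weighted sum is the simplest MCDA aggregation; richer variants (e.g., outranking methods) change the aggregation rule but keep the same criteria-times-weights structure.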
Risk management frameworks are not new to emerging computation paradigms, and more specifically to Artificial Intelligence. The already mentioned proposed European Artificial Intelligence Act also states a series of risk-related questions that must be asked prior to the implementation of any AI system. In its Article 9, it encourages developers to halt deployment when, after identifying foreseeable misuses and risks and adopting risk management measures, the residual risks are considered unacceptable [74].
Also working in the AI domain, Camacho et al. [75] apply a Cybersecurity Risk Analysis Framework for Systems with AI Components. Extending previous work [76], they design a comprehensive framework for classifying risks as measurable or non-measurable, taking into account the organisation, and even the people or the environment, affected if an attack exploits any of the vulnerabilities. This framework covers the impacts identified in the Artificial Intelligence Risk Management Framework [77] elaborated by NIST and is inspired by the Information Risk Assessment Methodology [78] designed by the Information Security Forum.
This framework has already been successfully used in that work to analyse the risks of AI components in an ADS architecture, demonstrating its applicability to new computation paradigms. Thus, we have decided to extend and apply it to analyse the importance of each threat to digital twins, using its taxonomy of Cybersecurity Objectives (CSOs), which includes both objectives with a direct monetary impact and those not directly measurable in such terms.
Taking this taxonomy of threats into account, we have extended the methodology to the digital twin threats identified in Section 2.3, dividing them into hardware threats, Artificial Intelligence threats and threats to the data life-cycle. The result is presented in Table 2.
However, this table has been designed considering the entire set of possibilities for applying a digital twin. For each use case, an exhaustive study must be performed, modifying the table according to the particularities of the supply chain in which the DT is involved. For example, in a digital twin involved in an e-health diagnostic system [79,80], an eavesdropper can access personal medical records, violating the privacy of the human they belong to and thus injuring personal rights. On the other hand, for a digital twin involved in a supply chain at a factory in which only mechanical decisions are taken, an eavesdropper cannot violate personal rights, as no information of this kind is being processed.
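This context dependence can also be captured programmatically, for instance by mapping (threat, use case) pairs to the set of affected cybersecurity objectives. The labels below are hypothetical and would in practice be derived from the use-case-specific version of Table 2.

```python
# Hypothetical mapping from (threat, deployment context) to the set of
# affected cybersecurity objectives (CSOs). Labels are illustrative only.

IMPACTED_CSOS = {
    ("Eavesdropping", "e-health"): {"confidentiality", "personal_rights"},
    ("Eavesdropping", "factory"):  {"confidentiality"},
}

def impacts_personal_rights(threat: str, context: str) -> bool:
    """Check whether a threat injures personal rights in a given context."""
    return "personal_rights" in IMPACTED_CSOS.get((threat, context), set())
```

Under this sketch, the same eavesdropping threat flags a personal-rights impact only in the e-health context, mirroring the example discussed above.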

4. Conclusions

The evaluation of functionality and safety in complex systems cannot be carried out directly on the actual physical system. It requires virtualization and emulation of the main properties and processes of the physical environment under scrutiny. Digital twins are intended to characterize those properties and processes as faithfully as possible through observability and controllability approaches, which can only be conducted when it is possible to extract data from physical systems and virtually replicate them with high levels of accuracy and reliability. In addition, precise and robust Artificial Intelligence models are required to properly replicate the evolution of physical properties and the behaviour of physical processes. In this paper we have highlighted the security and safety threats that could hamper the replication capabilities of digital twins. Our analysis has focused on drifting and pollution problems affecting sensors, as these are the main pathways for gathering the information necessary to deploy a digital twin. The dependency matrix of hardware production and the existence of potential backdoors and Trojans at the hardware level have also been considered. The vulnerability surface associated with network communications was discussed in terms of the risks of data interception and modification. These risks must be addressed to ensure adequate preparedness against actual and practical quantum threats. Future work will be devoted to designing agile methodologies to plan the transition from current cryptographic primitives and protocols to post-quantum software and hardware solutions, as demanded by the European Commission [81] and according to the guidelines and standards proposed by NIST [82]. The principle of “trust and verify” must be enforced along the complete data life-cycle, including robust Remote Attestation and the thorough evaluation of computation outsourcing paradigms [46].
Risk analysis must evolve to better address the complexities and perils of the interleaved domain of cybersecurity and Artificial Intelligence.

Funding

This research was supported by the National Spanish funding CDTI MIG-20221061, for the project “Quantum cognitive digital industry: a hyperautomated, accessible and cyber-secure (quantum resistant) digital twin based on extreme data mining”, and partially by Comunidad Autonoma de Madrid, RAMONES-CM (TEC-2024/COM-504) and CIRMA-CM (TEC-2024/COM-404) Projects, the Safehorizon Project, with Grant Agreement No. 101168562 under the EU Horizon Europe research and innovation programme and the QUBIP Project with Grant Agreement No. 101119746 under the EU Horizon Europe.

Conflicts of Interest

The authors declare no conflicts of interest. The content of this article does not reflect the official opinion of the European Union. Responsibility for the information and views expressed therein lies entirely with the authors.

References

  1. Chinesta, F.; Cueto, E.; Abisset-Chavanne, E.; Duval, J.L.; Khaldi, F.E. Virtual, digital and hybrid twins: A new paradigm in data-based engineering and engineered data. Arch. Comput. Methods Eng. 2018, 27, 105–134. [Google Scholar] [CrossRef]
  2. Jafari, M.; Kavousi-Fard, A.; Chen, T.; Karimi, M. A review on digital twin technology in smart grid, transportation system and smart city: Challenges and future. IEEE Access 2023, 11, 17471–17484. [Google Scholar] [CrossRef]
  3. Minerva, R.; Lee, G.M.; Crespi, N. Digital twin in the iot context: A survey on technical features, scenarios, and architectural models. Proc. IEEE 2020, 108, 1785–1824. [Google Scholar] [CrossRef]
  4. Alcaraz, C.; Lopez, J. Digital twin: A comprehensive survey of security threats. IEEE Commun. Surv. Tutor. 2022, 24, 1475–1503. [Google Scholar] [CrossRef]
  5. ISO TC 184/SC4/WG15: ISO 23247 standard. Available online: https://www.ap238.org/iso23247/ (accessed on 25 September 2025).
  6. Dihan, M.; Akash, A.; Tasneem, Z.; Das, P.; Das, S.; Islam, M.; Islam, M.; Badal, F.; Ali, M.; Ahamed, M.; et al. Digital twin: Data exploration, architecture, implementation and future. Heliyon 2024, 10, e26503. [Google Scholar] [CrossRef] [PubMed]
  7. Pan, Y.; Zhang, L. Roles of artificial intelligence in construction engineering and management: A critical review and future trends. Autom. Constr. 2021, 122, 103517. [Google Scholar] [CrossRef]
  8. Fuller, A.; Fan, Z.; Day, C.; Barlow, C. Digital twin: Enabling technologies, challenges and open research. IEEE Access 2020, 8, 108952–108971. [Google Scholar] [CrossRef]
  9. Tao, F.; Qi, Q.; Wang, L.; Nee, A. Digital twins and cyber–physical systems toward smart manufacturing and industry 4.0: Correlation and comparison. Engineering 2019, 5, 653–661. [Google Scholar] [CrossRef]
  10. Tao, F.; Zhang, H.; Liu, A.; Nee, A.Y.C. Digital twin in industry: State-of-the-art. IEEE Trans. Ind. Inform. 2019, 15, 2405–2415. [Google Scholar] [CrossRef]
  11. Council of European Union. Council Regulation (EU). 2021. Available online: https://www.euaiact.com/ (accessed on 25 September 2025).
  12. Madni, A.M.; Madni, C.C.; Lucero, S.D. Leveraging digital twin technology in model-based systems engineering. Systems 2019, 7, 7. [Google Scholar] [CrossRef]
  13. Pan, X.; Lin, Q.; Ye, S.; Li, L.; Guo, L.; Harmon, B. Deep learning based approaches from semantic point clouds to semantic BIM models for heritage digital twin. Herit. Sci. 2024, 12, 65. [Google Scholar] [CrossRef]
  14. Zhao, J.; Huang, J.; Zhi, D.; Yan, W.; Ma, X.; Yang, X.; Li, X.; Ke, Q.; Jiang, T.; Calhoun, V.D.; et al. Functional network connectivity (fnc)-based generative adversarial network (gan) and its applications in classification of mental disorders. J. Neurosci. Methods 2020, 341, 108756. [Google Scholar] [CrossRef]
  15. Bo, Y.; Wu, H.; Che, W.; Zhang, Z.; Li, X.; Myagkov, L. Methodology and application of digital twin-driven diesel engine fault diagnosis and virtual fault model acquisition. Eng. Appl. Artif. Intell. 2024, 131, 107853. [Google Scholar] [CrossRef]
  16. Thao, L.Q.; Kien, D.T.; Thien, N.D.; Bach, N.C.; Hiep, V.V.; Khanh, D.G. Utilizing AI and silver nanoparticles for the detection and treatment monitoring of canker in pomelo trees. Sens. Actuators A Phys. 2024, 368, 115127. [Google Scholar] [CrossRef]
  17. Jiang, W.; Han, B.; Habibi, M.A.; Schotten, H.D. The road towards 6g: A comprehensive survey. IEEE Open J. Commun. Soc. 2021, 2, 334–366. [Google Scholar] [CrossRef]
  18. Emmert-Streib, F. What is the role of ai for digital twins? AI 2023, 4, 721–728. [Google Scholar] [CrossRef]
  19. Petticrew, M.; Roberts, H. Systematic Reviews in the Social Sciences: A Practical Guide; Wiley: Hoboken, NJ, USA, 2006. [Google Scholar] [CrossRef]
  20. Bruzza, M.; Cabrera, A.; Tupia, M. Survey of the state of art based on PICOC about the use of artificial intelligence tools and expert systems to manage and generate tourist packages. In Proceedings of the 2017 International Conference on Infocom Technologies and Unmanned Systems (Trends and Future Directions) (ICTUS), Dubai, United Arab Emirates, 18–20 December 2017; pp. 290–296. [Google Scholar] [CrossRef]
  21. Boteju, M.; Ranbaduge, T.; Vatsalan, D.; Arachchilage, N.A.G. Sok: Demystifying privacy enhancing technologies through the lens of software developers. arXiv 2023, arXiv:2401.00879. [Google Scholar] [CrossRef]
  22. Priem, J.; Piwowar, H. The Unpaywall Dataset. 2018. Available online: https://figshare.com/articles/The_Unpaywall_Dataset/6020078 (accessed on 20 September 2025). [CrossRef]
  23. NIST. The NIST Cybersecurity Framework (CSF) 2.0; National Institute of Standards and Technology: Gaithersburg, MD, USA, 2024. [CrossRef]
  24. Sharma, A.; Kosasih, E.; Zhang, J.; Brintrup, A.; Calinescu, A. Digital twins: State of the art theory and practice, challenges, and open research questions. arXiv 2020, arXiv:2011.02833. [Google Scholar] [CrossRef]
  25. Al-Kuwaiti, M.; Kyriakopoulos, N.; Hussein, S. Network dependability, fault-tolerance, reliability, security, survivability: A framework for comparative analysis. In Proceedings of the 2006 International Conference on Computer Engineering and Systems, Cairo, Egypt, 5–7 November 2006. [Google Scholar] [CrossRef]
  26. Durão, L.F.C.S.; Haag, S.; Anderl, R.; Schützer, K.; Zancul, E. Digital Twin Requirements in the Context of Industry 4.0. In Product Lifecycle Management to Support Industry 4.0. PLM 2018; Springer International Publishing: Cham, Switzerland, 2018; pp. 204–214. [Google Scholar] [CrossRef]
  27. Moyne, J.; Qamsane, Y.; Balta, E.C.; Kovalenko, I.; Faris, J.; Barton, K.; Tilbury, D.M. A requirements driven digital twin framework: Specification and opportunities. IEEE Access 2020, 8, 107781–107801. [Google Scholar] [CrossRef]
  28. Neshenko, N.; Bou-Harb, E.; Crichigno, J.; Kaddoum, G.; Ghani, N. Demystifying iot security: An exhaustive survey on iot vulnerabilities and a first empirical look on internet-scale iot exploitations. IEEE Commun. Surv. Tutor. 2019, 21, 2702–2733. [Google Scholar] [CrossRef]
  29. Mullet, V.; Sondi, P.; Ramat, E. A review of cybersecurity guidelines for manufacturing factories in industry 4.0. IEEE Access 2021, 9, 23235–23263. [Google Scholar] [CrossRef]
  30. Liu, H.; Tu, J.; Liu, J.; Zhao, Z.; Zhou, R. Generative adversarial scheme based GNSS spoofing detection for digital twin vehicular networks. In Wireless Algorithms, Systems, and Applications; Springer International Publishing: Cham, Switzerland, 2021; pp. 367–374. [Google Scholar] [CrossRef]
  31. Garg, H.; Sharma, B.; Shekhar, S.; Agarwal, R. Spoofing detection system for e-health digital twin using EfficientNet convolution neural network. Multimed. Tools Appl. 2022, 81, 26873–26888. [Google Scholar] [CrossRef]
  32. Mastorakis, S.; Zhong, X.; Huang, P.-C.; Tourani, R. Dlwiot: Deep learning-based watermarking for authorized iot onboarding. In Proceedings of the 2021 IEEE 18th Annual Consumer Communications & Networking Conference (CCNC), Las Vegas, NV, USA, 9–12 January 2021; pp. 1–7. [Google Scholar] [CrossRef]
  33. Gupta, H.; van Oorschot, P.C. Onboarding and software update architecture for iot devices. In Proceedings of the 2019 17th International Conference on Privacy, Security and Trust (PST), Fredericton, NB, Canada, 26–28 August 2019; pp. 1–11. [Google Scholar] [CrossRef]
  34. Asokan, N.; Brasser, F.; Ibrahim, A.; Sadeghi, A.-R.; Schunter, M.; Tsudik, G.; Wachsmann, C. Seda: Scalable embedded device attestation. In Proceedings of the 22nd ACM SIGSAC Conference on Computer and Communications Security (CCS ’15), Denver, CO, USA, 12–16 October 2015; Association for Computing Machinery: New York, NY, USA, 2015; pp. 964–975. [Google Scholar] [CrossRef]
  35. Carpent, X.; Rattanavipanon, N.; Tsudik, G. Erasmus: Efficient remote attestation via self-measurement for unattended settings. arXiv 2017, arXiv:1707.09043. [Google Scholar]
  36. Abera, T.; Asokan, N.; Davi, L.; Koushanfar, F.; Paverd, A.; Sadeghi, A.-R.; Tsudik, G. Invited: Things, trouble, trust: On building trust in iot systems. In Proceedings of the 2016 53nd ACM/EDAC/IEEE Design Automation Conference (DAC), Austin, TX, USA, 5–9 June 2016; pp. 1–6. [Google Scholar] [CrossRef]
  37. Ambrosin, M.; Conti, M.; Lazzeretti, R.; Rabbani, M.M.; Ranise, S. Collective remote attestation at the internet of things scale: State-of-the-art and future challenges. IEEE Commun. Surv. Tutor. 2020, 22, 2447–2461. [Google Scholar] [CrossRef]
  38. Ibrahim, A.; Sadeghi, A.-R.; Zeitouni, S. Seed: Secure non-interactive attestation for embedded devices. In Proceedings of the 10th ACM Conference on Security and Privacy in Wireless and Mobile Networks (WiSec ’17), Boston, MA, USA, 18–20 July 2017; Association for Computing Machinery: New York, NY, USA, 2017; pp. 64–74. [Google Scholar] [CrossRef]
  39. Petzi, L.; Yahya, A.E.B.; Dmitrienko, A.; Tsudik, G.; Prantl, T.; Kounev, S. SCRAPS: Scalable collective remote attestation for Pub-Sub IoT networks with untrusted proxy verifier. In Proceedings of the 31st USENIX Security Symposium (USENIX Security 22), Boston, MA, USA, 10–12 August 2022; USENIX Association: Boston, MA, USA, 2022; pp. 3485–3501. [Google Scholar]
  40. Jenkins, I.R.; Smith, S.W. Distributed iot attestation via blockchain. In Proceedings of the 2020 20th IEEE/ACM International Symposium on Cluster, Cloud and Internet Computing (CCGRID), Melbourne, VIC, Australia, 1–14 May 2020; pp. 798–801. [Google Scholar] [CrossRef]
  41. Kohnhäuser, F.; Büscher, N.; Katzenbeisser, S. Salad: Secure and lightweight attestation of highly dynamic and disruptive networks. In Proceedings of the 2018 on Asia Conference on Computer and Communications Security (ASIACCS ’18), Incheon, Republic of Korea, 4 June 2018; Association for Computing Machinery: New York, NY, USA, 2018; pp. 329–342. [Google Scholar] [CrossRef]
  42. Masure, L.; Dumas, C.; Prouff, E. A comprehensive study of deep learning for side-channel analysis. IACR Trans. Cryptogr. Hardw. Embed. Syst. 2020, 1, 348–375. [Google Scholar] [CrossRef]
  43. Shwartz, O.; Mathov, Y.; Bohadana, M.; Elovici, Y.; Oren, Y. Reverse engineering iot devices: Effective techniques and methods. IEEE Internet Things J. 2018, 5, 4965–4976. [Google Scholar] [CrossRef]
  44. Lo, C.; Chen, C.; Zhong, R.Y. A review of digital twin in product design and development. Adv. Eng. Inform. 2021, 48, 101297. [Google Scholar] [CrossRef]
  45. Clements, J.; Lao, Y. Hardware trojan attacks on neural networks. arXiv 2018, arXiv:1806.05768. [Google Scholar] [CrossRef]
  46. Goldwasser, S.; Kim, M.P.; Vaikuntanathan, V.; Zamir, O. Planting undetectable backdoors in machine learning models. arXiv 2022, arXiv:2204.06974. [Google Scholar] [CrossRef]
  47. Wu, B.; Chen, H.; Zhang, M.; Zhu, Z.; Wei, S.; Yuan, D.; Shen, C. Backdoorbench: A comprehensive benchmark of backdoor learning. arXiv 2022, arXiv:2206.12654. [Google Scholar] [CrossRef]
  48. Gu, T.; Dolan-Gavitt, B.; Garg, S. Badnets: Identifying vulnerabilities in the machine learning model supply chain. arXiv 2019, arXiv:1708.06733. [Google Scholar] [CrossRef]
  49. Li, Y.; Li, Y.; Wu, B.; Li, L.; He, R.; Lyu, S. Invisible backdoor attack with sample-specific triggers. arXiv 2021, arXiv:2012.03816. [Google Scholar] [CrossRef]
  50. Lin, L.; Bao, H.; Dinh, N. Uncertainty quantification and software risk analysis for digital twins in the nearly autonomous management and control systems: A review. Ann. Nucl. Energy 2021, 160, 108362. [Google Scholar] [CrossRef]
  51. Roy, C.J.; Oberkampf, W.L. A comprehensive framework for verification, validation, and uncertainty quantification in scientific computing. Comput. Methods Appl. Mech. Eng. 2011, 200, 2131–2144. [Google Scholar] [CrossRef]
  52. Jamil, S.; Rahman, M.; Fawad. A comprehensive survey of digital twins and federated learning for industrial internet of things (iiot), internet of vehicles (iov) and internet of drones (iod). Appl. Syst. Innov. 2022, 5, 56. [Google Scholar] [CrossRef]
  53. Hard, A.; Rao, K.; Mathews, R.; Ramaswamy, S.; Beaufays, F.; Augenstein, S.; Eichner, H.; Kiddon, C.; Ramage, D. Federated learning for mobile keyboard prediction. arXiv 2019, arXiv:1811.03604. [Google Scholar] [CrossRef]
  54. Bagdasaryan, E.; Veit, A.; Hua, Y.; Estrin, D.; Shmatikov, V. How to backdoor federated learning. arXiv 2019, arXiv:1807.00459. [Google Scholar] [PubMed]
  55. Pasquini, D.; Francati, D.; Ateniese, G. Eluding Secure Aggregation in Federated Learning via Model Inconsistency. arXiv 2022, arXiv:2111.07380. [Google Scholar] [CrossRef]
  56. Tian, Z.; Cui, L.; Liang, J.; Yu, S. A Comprehensive Survey on Poisoning Attacks and Countermeasures in Machine Learning. ACM Comput. Surv. 2022, 55, 166. [Google Scholar] [CrossRef]
  57. Su, Z.; Xu, Q.; Luo, J.; Pu, H.; Peng, Y.; Lu, R. A Secure Content Caching Scheme for Disaster Backup in Fog Computing Enabled Mobile Social Networks. IEEE Trans. Ind. Inform. 2018, 14, 4579–4589. [Google Scholar] [CrossRef]
  58. Nour, B.; Mastorakis, S.; Ullah, R.; Stergiou, N. Information-Centric Networking in Wireless Environments: Security Risks and Challenges. IEEE Wirel. Commun. 2021, 28, 121–127. [Google Scholar] [CrossRef]
  59. Ricci, A.; Croatti, A.; Mariani, S.; Montagna, S.; Picone, M. Web of digital twins. ACM Trans. Internet Technol. 2022, 22, 1–30. [Google Scholar] [CrossRef]
  60. Veledar, O.; Damjanovic-Behrendt, V.; Macher, G. Digital twins for dependability improvement of autonomous driving. In Systems, Software and Services Process Improvement. EuroSPI 2019; European Conference on Software Process Improvement; Springer: Cham, Switzerland, 2019; pp. 415–426. [Google Scholar]
  61. Gehrmann, C.; Gunnarsson, M. A Digital Twin Based Industrial Automation and Control System Security Architecture. IEEE Trans. Ind. Inform. 2020, 16, 669–680. [Google Scholar] [CrossRef]
  62. Tourani, R.; Misra, S.; Mick, T.; Panwar, G. Security, Privacy, and Access Control in Information-Centric Networking: A Survey. IEEE Commun. Surv. Tutor. 2018, 20, 566–600. [Google Scholar] [CrossRef]
  63. Alcaraz, C.; Lopez, J. Digital Twin Security: A Perspective on Efforts From Standardization Bodies. IEEE Secur. Priv. 2025, 23, 83–90. [Google Scholar] [CrossRef]
  64. Wang, Y.; Peng, H.; Su, Z.; Luan, T.H.; Benslimane, A.; Wu, Y. A Platform-Free Proof of Federated Learning Consensus Mechanism for Sustainable Blockchains. IEEE J. Sel. Areas Commun. 2022, 40, 3305–3324. [Google Scholar] [CrossRef]
  65. Xue, M.; Zhang, Y.; Wang, J.; Liu, W. Intellectual Property Protection for Deep Learning Models: Taxonomy, Methods, Attacks, and Evaluations. IEEE Trans. Artif. Intell. 2022, 3, 908–923. [Google Scholar] [CrossRef]
  66. Tramèr, F.; Zhang, F.; Juels, A.; Reiter, M.K.; Ristenpart, T. Stealing Machine Learning Models via Prediction APIs. arXiv 2016, arXiv:1609.02943. [Google Scholar] [CrossRef]
  67. Karabacak, B.; Sogukpinar, I. Isram: Information security risk analysis method. Comput. Secur. 2005, 24, 147–159. [Google Scholar] [CrossRef]
  68. Microsoft. The Stride Threat Model. 2009. Available online: https://learn.microsoft.com/en-us/previous-versions/commerce-server/ee823878(v=cs.20) (accessed on 20 September 2025).
  69. Sommestad, T.; Ekstedt, M.; Johnson, P. A probabilistic relational model for security risk analysis. Comput. Secur. 2010, 29, 659–679. [Google Scholar] [CrossRef]
  70. Henry, M.H.; Haimes, Y.Y. A comprehensive network security risk model for process control networks. Risk Anal. 2009, 29, 223–248. [Google Scholar] [CrossRef]
  71. Fovino, I.N.; Guidi, L.; Masera, M.; Stefanini, A. Cyber security assessment of a power plant. Electr. Power Syst. Res. 2011, 81, 518–526. [Google Scholar] [CrossRef]
  72. Ganin, A.A.; Quach, P.; Panwar, M.; Collier, Z.A.; Keisler, J.M.; Marchese, D.; Linkov, I. Multicriteria decision framework for cybersecurity risk assessment and management. Risk Anal. 2017, 40, 183–199. [Google Scholar] [CrossRef]
  73. Linkov, I.; Moberg, E. Multi-Criteria Decision Analysis; CRC Press: Boca Raton, FL, USA, 2011. [Google Scholar] [CrossRef]
  74. Schuett, J. Risk management in the artificial intelligence act. Eur. J. Risk Regul. 2024, 15, 367–385. [Google Scholar] [CrossRef]
  75. Camacho, J.M.; Couce-Vieira, A.; Arroyo, D.; Insua, D.R. A cybersecurity risk analysis framework for systems with artificial intelligence components. Int. Trans. Oper. Res. 2025, 33, 798–825. [Google Scholar] [CrossRef]
  76. Couce-Vieira, A.; Insua, D.R.; Kosgodagan, A. Assessing and forecasting cybersecurity impacts. Decis. Anal. 2020, 17, 356–374. [Google Scholar] [CrossRef]
  77. National Institute of Standards and Technology. Artificial Intelligence Risk Management Framework (AI RMF 1.0). 2023. Available online: https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.100-1.pdf (accessed on 20 September 2025). [CrossRef]
  78. Information Security Forum. Information Risk Assessment Methodology 2 (IRAM2). 2016. Available online: https://www.securityforum.org/solutions-and-insights/information-risk-assessment-methodology-2-iram2/ (accessed on 14 March 2024).
  79. Karmakar, K.K.; Varadharajan, V.; Tupakula, U. Policy-Driven Security Architecture for Internet of Things (IoT) Infrastructure; CRC Press: Boca Raton, FL, USA, 2023; pp. 76–120. [Google Scholar] [CrossRef]
  80. Karmakar, K.K.; Varadharajan, V.; Speirs, P.; Hitchens, M.; Robertson, A. Sdpm: A secure smart device provisioning and monitoring service architecture for smart network infrastructure. IEEE Internet Things J. 2022, 9, 25037–25051. [Google Scholar] [CrossRef]
  81. European Commission. Recommendation on a Coordinated Implementation Roadmap for the Transition to Post-Quantum Cryptography. 2024. Available online: https://digital-strategy.ec.europa.eu/en/library/recommendation-coordinated-implementation-roadmap-transition-post-quantum-cryptography (accessed on 20 September 2025).
  82. NIST. Post-Quantum Cryptography Standardization. 2024. Available online: https://csrc.nist.gov/projects/post-quantum-cryptography (accessed on 20 September 2025).
Figure 1. Digital twin structure with each layer and component previously identified.
Figure 2. Example query string used to find scientific publications in the Scopus database on the specific paradigm of malware in digital twins.
Figure 3. Main scheme of the study selection criteria used to analyse the state of the art of the proposed research queries.
Table 1. PICOC analysis. We consider that the Comparison item does not apply to our investigation.

PICOC Item | Question | Answer
Population | What are the new paradigms of Industry 4.0 we are interested in? | Digital twin
Intervention | Which area related to digital twins would we like to study in our review? | The latest techniques for implementing the digital twin paradigm and their potential risks
Outcomes | What will be an enriching outcome of this study for the community? | A framework for identifying and classifying threats to digital twins
Context | What is the current context of digital twins in the digital landscape? | The implementation of new computational paradigms such as artificial intelligence or quantum computing techniques
Table 2. Threats to the digital twin paradigm classified according to the risk taxonomy defined in [75]. Underlined columns (the second group below) refer to effects on other organisations.

Impact columns, in order: (own organisation) Operational Costs, Cybersecurity Costs, Other Costs, Income Reduction, Reputation Impact; (other organisations) Operational Costs, Other Costs, Income Reduction, Reputation Impact; (personal) Fatalities, Injuries to Physical and Mental Health, Injuries to Personal Rights, Personal Economic Damage.

Hardware Spoofing: xxx x xx
Side-channel attack: x
Reverse Engineering: x
Hardware Trojan attack: xx xxxxxx
MLaaS by untrustful company: x xxxx xxx
Uncertainty Quantification: x x xx x x
Malicious participant in Federated Learning system: xx xx x x
Low-Quality Data: x xxx
Model Inconsistency: xx xxx xxx
Model Poisoning: xx xx x
Threats to Data Backup: x x x
Data/Content Poisoning Attack: xx xxx
Data Tampering: x xx
Desynchronization: xx x
Eavesdropping: xxxxxxxx xx
Message Flooding: xx
Interest Flooding: xxx x xx
Man-in-the-Middle: xxxxxxxxxxxxx
Denial of Service: xx xx
Private Information Extraction With Insiders: xxx xxx xx
Privacy Leakage in Model Aggregation: xxx xxx xx
Privacy Leakage in Model Delivery/Deployment: xxxx x
Knowledge/Model Inversion: xxxx xx
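A threat-impact matrix such as Table 2 lends itself to programmatic use, e.g., filtering threats by the impact categories of the taxonomy in [75]. The sketch below is purely illustrative and is not part of the paper: the category names, the `threat_matrix` mapping, and the marked cells are a hypothetical subset chosen for demonstration only.

```python
# Illustrative encoding of a threat-impact matrix in the spirit of Table 2.
# The impact categories loosely follow the taxonomy of [75]; the marked
# cells below are a hypothetical subset, not the paper's actual data.

ORG_IMPACTS = ["operational_costs", "cybersecurity_costs", "other_costs",
               "income_reduction", "reputation_impact"]
PERSONAL_IMPACTS = ["fatalities", "physical_mental_injuries",
                    "personal_rights_injuries", "personal_economic_damage"]

# Each threat maps to the set of impact categories it can trigger.
threat_matrix = {
    "Man-in-the-Middle": set(ORG_IMPACTS + PERSONAL_IMPACTS),  # every category
    "Denial of Service": {"operational_costs", "cybersecurity_costs"},
    "Side-channel attack": {"cybersecurity_costs"},
}

def threats_with_impact(category):
    """Return, sorted by name, the threats marked for a given impact category."""
    return sorted(t for t, impacts in threat_matrix.items() if category in impacts)

print(threats_with_impact("cybersecurity_costs"))
# -> ['Denial of Service', 'Man-in-the-Middle', 'Side-channel attack']
```

A set-based encoding like this makes cross-cutting queries (e.g., all threats with personal impacts) one-liners, whereas a positional x-mark table requires keeping track of column order by hand.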
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
