A New Concept of Digital Twin Supporting Optimization and Resilience of Factories of the Future

Featured Application: This work was elaborated in the frame of a collaborative innovation project named CyberFactory#1 which aims at enhancing optimization and resilience of Factories of the Future. Its outcomes will be tested and demonstrated in application to industrial use-cases provided by eight pilot factories across Europe in sectors such as aerospace system manufacturing, industrial machine fabrication, consumer electronics and textile industry. One of these use-cases is provided as an illustration of the approach in the field of Aerospace System Manufacturing.

Abstract: In the context of Industry 4.0, a growing use is being made of simulation-based decision-support tools commonly named Digital Twins. Digital Twins are replicas of the physical manufacturing assets, providing means for the monitoring and control of individual assets. Although extensive research on Digital Twins and their applications has been carried out, the majority of existing approaches are asset specific. Little consideration is made of human factors, and interdependencies between different production assets are commonly ignored. In this paper, we address those limitations and propose innovations for cognitive modeling and co-simulation which may unleash novel uses of Digital Twins in Factories of the Future. We introduce a holistic Digital Twin approach, in which the factory is not represented by a set of separated Digital Twins but by a comprehensive modeling and simulation capacity embracing the full manufacturing process including external network dependencies. Furthermore, we introduce novel approaches for integrating models of human behavior and capacities for security testing with Digital Twins and show how the holistic Digital Twin can enable new services for the optimization and resilience of Factories of the Future. To illustrate this approach, we introduce a specific use-case implemented in the field of Aerospace System Manufacturing.
Author Contributions: investigation, E.M.; resources, E.M. and L.F.; writing—original draft preparation, E.M., L.F. and P.B.; writing—review and editing, A.B. and I.P.; visualization, P.B.; supervision, I.P.; project administration, A.B.


Introduction
The recent cyber-attacks on Renault [1], Saint-Gobain [2], Rosneft and Merck [3], among others, have spotlighted cyber-security-related threats towards industry, as well as their unexpected financial and business impacts. In May 2017, one day of production shutdown in Renault factories cost the group several million euros [1]. Unlike risks affecting regular Information Technology (IT) systems, attacks targeting Operational Technology (OT), which supports industrial processes, can cause physical damage and casualties [4]. When it comes to the manufacturing sector, business impacts add up to safety risks and confidentiality loss [4]. The damage can only grow bigger with ultra-digitized plants as envisaged in the vision of Industry 4.0, a concept originally defined by an eponymous German governmental project as: "fostering strong customization of products under the conditions of highly flexible production, introduction of methods of self-optimization, self-configuration, self-diagnosis, cognition and intelligent support of workers in their increasingly complex work" [5]. The term as accepted today embraces more broadly the technological, organizational, economic and societal changes driven by enhanced digitization of the manufacturing industry.
Among other attributes, the Factory of the Future (FoF) as envisioned by the Industry 4.0 movement strongly relies on the use of virtual models of cyber-physical assets commonly named Digital Twins (DTs). DTs have been proposed to support manufacturing asset and product life-cycle management as well as several use-cases, which can be classified into monitoring, optimization and control usage types. In this paper, we intend to examine the potential for involvement of DTs beyond the limits of their state-of-the-art functionalities. In particular, we address two limitations of existing technology which relate to: (i) the lack of applicable techniques to model human behavior and human interaction with machines; and (ii) the closed-system nature of existing DTs, which limits their ability to provide holistic process optimization paths. With consideration for the above-mentioned cybersecurity challenges, we also examine the potential for DTs' contribution to cyber-resilience objectives and propose a path for convergence with simulation tools currently in use in the security business, called Cyber-Ranges (CRs). By addressing the above-stated limitations of DTs and combining them with CR technology, we solve several implementation and security problems faced by industry players in the management of their digital transformation. Section 2 of the paper provides elements of context and a presentation of the challenges to be addressed within the frame of a collaborative research project named CyberFactory#1, to which this research initiative belongs. Section 3 provides the state of the art in matters of DTs and CRs, with consideration for the above-identified limitations. Section 4 describes the design of developments to be carried out in the frame of the project to solve these limitations and extend the uses of DTs and CRs in the context of FoFs. Section 5 provides a stepped demonstration of the application of the proposed methodology on a particular industrial use-case.
We conclude with a synthesis of the main findings, identification of areas for further research and an outlook of the way ahead for CyberFactory#1 project.

Introduction to Industry 4.0
In 2011, a German governmental project coined the term Industry 4.0 with the ambition to "drive digital manufacturing forward by increasing digitization and the interconnection of products, value chains and business models" [5]. Since then, the concept has been widely accepted as embodying the fourth industrial revolution, characterized by enhanced connectivity and autonomy of production systems.
The beginnings of the first industrial revolution happened in England around 1785, with the introduction of the first flour mills and mechanical looms [6]. During the early 19th century, the use of steam machines, first designed by Denis Papin (1647-1713), to power repetitive manufacturing tasks spread in Europe and North America. Inside factories, increased productivity was achieved by greater division of labor, first theorized by Adam Smith (1723-1790) [7]. Between companies, transactions grew increasingly international, powered by steam-powered transportation systems. The electrification of factories progressively enabled replacing single-speed central steam engines with task-specific power groups, allowing greater flexibility, reduced power loss and improved working conditions [8].
In 1867 in Chicago, the first industrial assembly lines powered the meatpacking industry by moving the meat to the workers instead of moving the workers to the meat, imposing a relentless production rate [9]. This is accepted to be the start of the second industrial revolution, characterized by enhanced electrification, elimination of waste, standardization and workflow optimization.

• the economic dimension, which includes modeling of the factory ecosystem [32], manufacturing data-lake exploitation [33] and adversarial/robust machine learning algorithms [34];
• the human dimension, which includes human behavior modeling [35], optimization of human/machine interactions [36] and the implementation of mutual behavior watch between machines and humans; and
• the societal dimension, arising from the previous three, which includes modeling the factory as a System of Systems, the design of distributed manufacturing and the definition of cyber-resilience mechanisms.
Across these four dimensions, three layers have been defined to address, respectively:
• the simulation challenge (main object of this article);
• the optimization challenge (one main area of application); and
• the resilience challenge (second area of application).
The simulation and modeling layer acts both as a challenge of its own and an enabler for the optimization and resilience challenges. Indeed, it is proposed to use a comprehensive FoF simulation capacity to support FoF system architecture validation, as well as development, testing and validation of optimization and resilience capabilities. Figure 1 illustrates the relationships among the physical plant, the modeling and simulation capacity, the optimization capacity and the resilience capacity, where the FoF DT and the manufacturing data lake act as shared resources to support the optimization and resilience capacities.
Appl. Sci. 2020, 10, x FOR PEER REVIEW

In this paper, we address specific limitations of state-of-the-art modeling and simulation techniques and propose innovations which may unleash the potential for exploitation of DTs in support of rational decision making for FoF optimization and resilience.

FoF Modeling and Simulation Challenge
Among other technological and organizational developments, DTs have made their entry as digital replicas of the physical manufacturing assets to support remote monitoring and other value-added functions which would otherwise require direct observation of or intervention on machines [37]. Most DTs on the market today essentially replicate single equipment, enabling use-cases such as process control and optimization or predictive maintenance. They appear to be of mutual interest to machine manufacturers and users, as they are meant to remain connected to the physical asset in operation, enabling each party to retrieve informative data on machine performance, maintenance needs and tracks for design improvement or process optimization. This makes it possible to simulate results by changing parameters, enabling confirmation of the relevant requirements in relation to manufacturing and production capacity during product development [38]. Moreover, this entire process can be considered and managed in real time.
A limitation, however, to the value of such digital models from the user's point of view is their inability to comprehensively inform decisions not at the individual machine level, but rather at the System of Systems (SoS) level, where SoS is understood as the complete production ecosystem. Few attempts have been made to extend the range of application of DTs beyond the limits of individual physical assets [39]. In the context of Industry 4.0, this manufacturing SoS involves a substantial share of physical assets, an increasing share of information systems, but also humans. We consider here a vision of Industry 4.0 where the human is not replaced by the machine, but collaborates with it and remains ultimately accountable, as the law provides [40]. While we accept that information systems are quite extensively addressed by state-of-the-art virtualization technology, humans will remain an essential element of the FoF. Their behavior is known to be less predictable than that of machines or networks [41]. For a manufacturing professional, it is not easy to take full benefit of DTs, as they do not properly model all key elements of the production process and thus may not support rational decisions from a holistic point of view [42]. Therefore, we propose in this paper to address two specific gaps in state-of-the-art DT technology:
• the lack of a simulation capacity to properly model the behavior of humans in the FoF [43]; and
• the lack of a co-simulation capacity to support holistic modeling of distinct physical, digital and human manufacturing assets involved in a common production process [42].
These two gaps are addressed in this paper with proposed innovations to fill them. As our ambition is to set up a complete manufacturing SoS modeling and simulation capacity, we make new uses of the DT for the FoF possible. First, the realistic modeling of interactions among machines, networks and humans enables going beyond mere monitoring of production and truly supporting decisions, if not also the autonomous reconfiguration of the manufacturing SoS for optimization purposes. This property can be considered for static architecture selection at the factory design stage, as well as for dynamic optimization in operation [44]. In the first case, it would enable radically new concepts of factories optimized for production in the digital age and freed from the architectural bias of legacy systems. In the second case, it would enable continuous improvement of either legacy or new factories in operation, considering all environmental, physical, economic and psychological factors which a single human could not possibly handle in decision making [45]. This framework also enables optimum production planning in distributed manufacturing topologies such as networks of factories or fab labs. The co-simulation capacity can support the agreement of networked manufacturing plant owners on a coherent production plan that reflects the best compromise according to the agreed parameters of the decision. It could potentially serve as input to smart contract mechanisms, which distributed manufacturing ecosystems will need to implement [46].
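To make the co-simulation idea concrete, the following sketch steps simplified machine, network and human-operator models in lockstep and aggregates a plant-level throughput metric. All class names, parameters and numbers are illustrative assumptions, not part of any cited tool; a real FoF co-simulation would couple far richer asset models.

```python
# Minimal co-simulation loop: three simplified asset models advance in
# lockstep and exchange state once per time step. All models and numbers
# are illustrative placeholders, not a real DT implementation.

class MachineModel:
    def __init__(self, rate=10.0):
        self.rate = rate  # parts per step at full availability

    def step(self, network_ok, operator_attention):
        # Output degrades if the network is down or the operator is distracted
        availability = (1.0 if network_ok else 0.3) * operator_attention
        return self.rate * availability

class NetworkModel:
    def __init__(self, outage_steps=frozenset()):
        self.outage_steps = outage_steps

    def step(self, t):
        return t not in self.outage_steps  # True = network operational

class HumanModel:
    def step(self, t):
        # Attention drops later in the shift (toy fatigue curve)
        return max(0.5, 1.0 - 0.05 * t)

def co_simulate(steps, outages=frozenset()):
    machine, network, human = MachineModel(), NetworkModel(outages), HumanModel()
    produced = 0.0
    for t in range(steps):
        net_ok = network.step(t)
        attention = human.step(t)
        produced += machine.step(net_ok, attention)
    return produced

# A network outage at steps 2-3 propagates into lost machine output
baseline = co_simulate(8)
degraded = co_simulate(8, outages=frozenset({2, 3}))
```

The point of the sketch is the coupling: a network-level event and a human-level state both propagate into the machine model's output within the same simulation step, which is exactly what isolated per-asset DTs cannot capture.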
The promises for optimization are quite straightforward and very much promoted already by automation vendors in the pleading for DTs. What seems to have been much neglected, however, in both research and industry, is the potential for DTs to support enhanced resilience towards industrial hazards or deliberate threats [47]. It is important to note that the whole concept of FoF brings with it a handful of new threats, which we should not leave unaddressed. Almost any connected device, whether on the shop floor, in an automated system or remotely located at a third-party contract manufacturer, can be a source of new risks (even those that only peripherally or indirectly touch the production process).
The introduction of ever more connected manufacturing assets, the increased reliance on Artificial Intelligence (AI) for decision making, the development of practices such as collaborative manufacturing and the increasing use of Cloud Manufacturing (CMfg) services altogether widen the attack surface up to what is sometimes called an attack fractal. DTs themselves, if not properly secured, offer a new entry point for cyber-attackers to perform industrial sabotage and industrial data theft.
As any other asset, the DT must be subject to security measures [48]. More interestingly, we propose in this paper implementations of FoF Digital Twin technology as a means for enhanced resilience, notably towards cyber-threats and other industrial hazards. The association of DTs with cybersecurity simulation and training tools, commonly called Cyber-Ranges (CRs) [49], makes it possible to comprehend the complexity of event propagation from physical assets to digital networks and vice versa, unleashing an unprecedented potential for SoS security architecture decision support, continuous process security and safety monitoring, and simulation-based decision making. Considering that 60% of all cyber-attacks involve a willing or unwilling human, commonly named an insider [50], the inclusion of a human behavior modeling capacity substantially enhances this resilience capacity. Ultimately, the FoF DT is the means to support optimal decision making with consideration for both process optimization and resilience objectives.

Aerospace System Manufacturing Use-Case
The presented use case aims at integrating Industrial IoT for flexible management and optimization of cyber-physical systems across the Tablada, San Pablo Sur and CBC factories (Figure 2) in Spain. An industrial IoT deployment into these AIRBUS D&S factories will be carried out in the future. After that, factory assets such as a rivet shaving machine (Roboshave), an industrial oven (Autoclave) and an advanced smart tools control system (Gap Gun) will be integrated within this IIoT network, enabling near-real-time remote process monitoring, optimization, control and resilience. This IIoT system is to be deployed over three sites and will cover three types of manufacturing assets. To support secure design, deployment and operation of this IIoT network, a complete virtual model of the system will be realized. It shall enable simulation-based architecture decision making, security analysis, testing and validation, operation and training.
The use of smart wearable and environment sensors will also enable modeling the behavior of human operators, in particular with consideration for the handling of the gap gun, which is a hand tool involved in quality measurement of aeronautical parts. User behavior modeling can also be performed on the IT layer, by analyzing network traffic and kernel data, as well as by using non-intrusive sensors to support User and Entity Behavior Analytics (UEBA).
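As a minimal illustration of the UEBA idea, a per-user behavioral baseline can flag sessions that deviate strongly from a user's history. The sketch below uses a simple z-score test on per-session traffic volume; the feature, threshold and data are illustrative assumptions, not the project's actual analytics, which would combine many more signals.

```python
# Toy UEBA-style anomaly check: flag a user session whose traffic volume
# deviates strongly from that user's historical baseline (z-score test).
# Feature choice and threshold are illustrative assumptions only.
from statistics import mean, stdev

def is_anomalous(history_mb, observed_mb, z_threshold=3.0):
    """history_mb: past per-session traffic volumes for one user (MB)."""
    mu, sigma = mean(history_mb), stdev(history_mb)
    if sigma == 0:
        return observed_mb != mu
    return abs(observed_mb - mu) / sigma > z_threshold

history = [40, 35, 42, 38, 41, 37, 39, 36]  # typical sessions for one user
in_profile = is_anomalous(history, 39)      # ordinary session volume
suspicious = is_anomalous(history, 400)     # bulk, exfiltration-like volume
```

Here `in_profile` stays within the baseline while `suspicious` triggers the flag; in a deployed UEBA system such per-feature tests would feed a risk score rather than a binary verdict.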

Digital Twin Concept and Definition
It is a tough task to define what a Digital Twin is. The fact that Wikipedia has nine definitions for digital twins is a clear indicator of this problem. To some authors, a DT is a simulation model that mirrors physical systems and allows simulations; to others, it is something that can mirror the state of a real asset, allowing its monitoring, control and alteration, as is common in IoT [51]. The differences in definition seem to be related to the focus each author places on the DT concept. Negri et al. [52] gave a comprehensive overview of the DT definitions available in the literature from its first appearance in 2012 to 2016. However, even in recent years, the definition of DT still seems to change according to the authors and their purpose.
Martin Morháč's team [53], with their focus on industrial process lines, defined the DT concept as a functional system of continuous process optimization, formed by the cooperation of the physical and the digital production lines. Thus, the DT continuously collects and evaluates process information in order to shorten production cycles, accelerate the introduction of new products and reduce process inefficiency.
In addition, for Jiewu Leng et al. [54], DTs are related to optimization, with a DT being a "simulation with ability of real-time control and optimization of products and production lines." However, Bao et al. [55] saw the DT as a method or tool to be used in the simulation and modeling of the behavior and status of entities.
According to Schluse et al. [56], DTs are deeply associated with Industry 4.0 development. For the authors, a DT is a one-to-one representation of a real-world element (such as a machine, component or part of the environment) or a real subject (person, software or system). The DT comprises the virtual representation of this element, its behavior and its communication facilities. In their work, the authors presented a simulation technology based on the concept of "Experimentable Digital Twins," which combines DTs with simulation technologies to bring DTs to life.
Graessler and Poehle [57] developed a DT that takes over an employee's communication and coordination tasks with the production system. The usual concept of a DT that emulates the properties and behavior of a system was adapted by the authors to act as a representative for a human employee in a cyber-physical production system (CPPS), since the properties and behavior of the human DT need to be based on user feedback and recorded patterns instead of directly measured data.
According to Rovere et al. [58], the DT is the semantic, functional and simulation-ready representation of each shop floor CPS. It can define performance specifications and behavioral and functional models. Thus, it can be said, according to the authors, that the DT is a composite concept that aggregates the following elements:
• CPS Prototype: a model that defines the structure and the associated semantics for a certain class of CPS.
• CPS Instance: a computer-based representation of an instantiation of a CPS prototype. As the DT is an instance of a CPS prototype, it can be said that a CPS instance is a computer-based representation of its DT.
• Behavioral Models: simulation models related to the semantic representation of a CPS prototype and instance. Each DT can address different behavioral models to allow multi-disciplinary simulations.
• Functional Models: models that allow the analysis of data from the shop floor. The result of the data analysis is used to enrich the DT, for example to enable predictive maintenance.
Therefore, as already noted, several definitions of DTs exist in the literature, and each author adapts the definition according to their purpose. However, all the definitions have something in common: the simulation of something in a specific environment. The definition presented in the Industrial Internet Consortium White Paper [59] seems general enough to cover the definitions presented before. The authors stated that "a DT is a formal digital representation of some asset, process or system that captures attributes and behaviors of that entity suitable for communication, storage, interpretation or processing within a certain context."
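One minimal reading of the IIC definition is a structure that bundles attributes (static identity), state (observed values) and behavior (a queryable model). The sketch below illustrates that reading only; field names, the autoclave example and the toy extrapolation model are our own assumptions, not part of the cited white paper.

```python
# Minimal reading of the IIC DT definition: attributes + state + behavior.
# All field names and the prediction model are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class DigitalTwin:
    asset_id: str
    attributes: dict                       # static identity, e.g. asset type
    state: dict = field(default_factory=dict)

    def update(self, observations):
        """Synchronize twin state with fresh sensor observations."""
        self.state.update(observations)

    def predict_temperature(self, minutes):
        # Toy behavioral model: linear extrapolation of the temperature trend
        t = self.state.get("temp_c", 20.0)
        drift = self.state.get("temp_drift_c_per_min", 0.0)
        return t + drift * minutes

oven = DigitalTwin("autoclave-01", {"type": "industrial oven"})
oven.update({"temp_c": 180.0, "temp_drift_c_per_min": 0.5})
forecast = oven.predict_temperature(20)  # 180.0 + 0.5 * 20 = 190.0
```

Even this stripped-down form exhibits the three ingredients the definition names: attributes, communicated state and an interpretable behavior suitable for processing within a context.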

State of the Art in Digital Twins
Many DT applications can be found in the literature, applied to several fields such as smart cities, construction, healthcare, agriculture, cargo shipping, drilling platforms, automobiles, aerospace and electricity. These different applications gave rise to the large number of DT definitions mentioned in the previous section.
With the increasing level of digitalization and increasing complexity of machines and products, DTs might be a kind of next generation of industrial simulation [60].
According to Malakuti et al., DTs can act across the overall product life-cycle for design, manufacturing, operating and maintenance purposes, collecting relevant data for model-based simulation and prediction [59].
Tao et al. presented the benefits of the DT in the design phase [38]. In early and conceptual design stages, where design directions are set, the DT can collect relevant data, e.g., on customer satisfaction and financial or economic plans, into physical models. Therefore, the DT becomes a precise, primarily virtual, representation. This facilitates the communication between designers and customers, especially in the current trend of increasing visualization in the product design process. Since functionality, configurations and design parameters are defined, checked and possibly updated in the design phase, these adjustments must also be tested by simulation. Without the DT, there are no real-time or environment-influenced data. This makes the DT a useful tool for virtual verification and testing without a fabricated product or prototype. The DT also uses experiences from the production of similar products or previous generations in the form of historical data. This gives the DT a high prediction capacity to test whether investment and product improvement plans are profitable as expected. Furthermore, the prediction makes it easier to detect and correct design failures, saving time and money in the design phase.
For modern manufacturing, monitoring and analysis during operation is a crucial issue to ensure long-term and reliable operation of equipment. In this context, Wang et al. [61] presented a DT reference model designed for rotating machinery fault diagnosis. To detect deviations from the standard, the DT allows the integration and interpretation of physical knowledge and data measurements using simulation models, instead of relying solely on sensor data. Thus, the mechanisms of typical failure modes can be simulated in the DT to analyze the root cause of a failure and predict the evolution of the degradation process. Using these capabilities, the authors developed a pilot prototype of a rotor system to demonstrate the effectiveness of the DT model on unbalance quantification and localization for fault diagnosis. Experimental results show that the constructed rotor model can achieve accurate fault diagnosis and adaptive degradation prediction.
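The core idea of such model-based diagnosis, comparing what a physics model predicts against what sensors measure, can be sketched in a few lines. The quadratic vibration model, the constant `k` and the tolerance below are illustrative assumptions only, not the reference model of Wang et al.

```python
# Sketch of model-based fault detection: compare measured vibration
# amplitude against a simple physics-based expectation and flag the
# machine when the residual exceeds a tolerance. Model and numbers
# are illustrative assumptions, not the cited reference model.

def expected_vibration(rpm, k=1e-6):
    # Unbalance-induced vibration grows roughly with the square of rotor speed
    return k * rpm ** 2

def diagnose(rpm, measured_mm_s, tolerance=0.5):
    residual = measured_mm_s - expected_vibration(rpm)
    return "unbalance suspected" if residual > tolerance else "normal"

# At 3000 rpm the model expects about 9.0 mm/s; a measurement close to
# that is classified "normal", a large excess flags a suspected fault.
status_ok = diagnose(3000, 9.2)
status_fault = diagnose(3000, 11.0)
```

The advantage over a plain sensor threshold is that the expectation adapts to the operating point (here, rotor speed), so the same residual logic works across regimes.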
The DT also gives optimization opportunities in the production process. Since the DT has information about production assets and the availability of resources, it reflects the current state of machines, products and processes. Therefore, the consequences of decisions, e.g., the scheduling of manufacturing steps, can be imitated by the DT [62], which optimizes decision support in manufacturing plans [37] before executing any production process. The DT can also anticipate the consequences of countermeasures for detected malfunctions, misbehavior and failures before applying them.
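This "what-if" use of a DT for scheduling can be illustrated with a deliberately small model: simulate each candidate job order on a two-machine flow-shop and pick the order with the shortest simulated makespan. The job data and the exhaustive search are illustrative assumptions; a production DT would use a richer process model and a real scheduler.

```python
# Illustrative what-if scheduling on a DT-like process model: evaluate
# every candidate job order on a simple two-machine flow-shop simulation
# and keep the one with the shortest makespan. Job data is made up.
from itertools import permutations

def makespan(order, times):
    """times[job] = (minutes on machine A, minutes on machine B)."""
    a_done = b_done = 0
    for job in order:
        ta, tb = times[job]
        a_done += ta                       # machine A processes jobs in sequence
        b_done = max(b_done, a_done) + tb  # B waits for A and for itself
    return b_done

jobs = {"J1": (3, 6), "J2": (5, 2), "J3": (4, 4)}
best = min(permutations(jobs), key=lambda order: makespan(order, jobs))
best_time = makespan(best, jobs)
```

Exhaustive enumeration is only feasible for toy instances, but the pattern, simulate the consequence of each decision before committing to it, is exactly the decision-support role attributed to the DT above.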
Since there is not yet a commonly agreed definition of Digital Twins, it is not surprising that there is also a lack of standards for the construction of DTs. Nevertheless, there are several approaches towards modeling and construction of DTs available in the literature that aim to support the system engineer in the DT construction process. In 2016, Jain and Lechevalier [63] proposed an approach towards automated generation of virtual factory models using manufacturing configuration data in standard formats as the primary input. One of the main objectives of the approach is to reduce the expensive expert knowledge needed for DT construction. Ding et al. [64] introduced a reference architecture for a version of DTs called "digital twin-based cyber-physical production system." This architecture relies on external construction of the virtual replica of the system and focuses on adding components and data flows for enabling automated control.
Damjanovic-Behrendt and Behrendt [65] explored the potential of available open source tools and services, which can help develop an open source DT ex nihilo. They proposed a DT concept based on a micro-services architecture to design a flexible, open source solution for digital twins and make it accessible to a wider industrial and research audience. However, this remains more of a conceptual proposal than an actual open-source tool for DT development.
Another reference architecture based on micro-services was introduced by the consortium of the research project MAYA [58]. The focus is on the structure of a middleware for the synchronization of the CPS and its virtual representation while accessing big data. The digital twin contains functional and behavioral models that are activated when the CPS logs into the centralized support architecture. Communication between CPS and the digital representative is enabled by a WebSocket channel. If the CPS disconnects from the support architecture, the digital twin and all its components are deactivated.
Within the research project AUTOWARE, an eponymous cognitive digital automation operation system was developed [66]. The framework includes a reference architecture in which Digital Twins of (smart) products are included. According to the authors, AUTOWARE can support the efficient development and operation of cognitive manufacturing systems. The digital twin used is an extension of the software architecture developed in the project ReconCell [67]. Within ReconCell, a set of components for a robot workstation for automated assembly tasks was developed. One of those components is a Digital Twin of the work cell. The DT is based on the commercial 3D simulation software VEROSIM.
Tao et al. divided the DTs of subjects participating in the manufacturing context into a hierarchical model [68]. There are DTs for single components such as machines, material or equipment at the unit level and DTs for production lines, complex products or the shop floor at the system level. Through interoperability and dependencies in cooperating applications, and through data collection over the whole life-cycle, the system-level DTs build up an SoS DT. Therefore, the DTs "co-evolve over the lifecycle of the product process." According to the authors, models play an important role for DTs to interpret and predict behavior. Since different models exist at the deeper unit level for each subject, all of them have to be combined and integrated by the DT at the system level. The same challenge exists for DTs at the SoS level with system DT models.
As the authors of [37,52,69] provided more extensive surveys on recent developments in the area of digital twins for manufacturing, we proceed with highlighting key enabling technologies for modeling digital twins, as presented recently by Rasheed et al. [70]. The authors clustered those key enabling technologies into five categories and surveyed the state of the art in each: physics-based modeling, data-driven modeling, big data cybernetics, infrastructure and platforms and human-machine interface. Technologies for physics-based modeling tackle aspects of how to translate theories developed through observation of physical phenomena into mathematical equations and how to solve them. Examples of such technologies are 3D modeling and simulation techniques such as those presented in [71] and numerical solvers that are already integrated into numerical simulators such as the open-source software OpenFOAM that allows simulating fluid dynamics [72]. Data-driven modeling aims at the identification of patterns in large datasets, assuming that the data already encode information on physical phenomena. Enabling technologies include data generation techniques (e.g., crowd-based data gathering [73]), data privacy solutions [74], machine learning algorithms [75,76] and artificial intelligence approaches such as generative adversarial networks for denoising images [77]. Such techniques help reduce the gap between the real operational situation the physical system is facing and the situation the digital twin infers from the sensor data of its physical counterpart. Big data cybernetics [78] merges control theory with big data and aims at steering the system to a given state. From a control theory perspective, the behavior of the system is constantly monitored and its evolution is compared to the reference state.
Big data approaches are used for increasing the understanding of the observations, which can be used for improving controller performance for the digital twin and its physical counterpart. Examples of such big data approaches are filtering techniques [79] and reduced-order modeling [80], both aiming at identifying what is relevant in the dataset. Enabling technologies of digital twins in the area of infrastructure and platforms include IoT, cloud computing and 5G (see [70] for a list of approaches and commercial services in this area).
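As a toy illustration of the filtering idea, the sketch below smooths a noisy sensor stream with a simple moving average so that the digital twin tracks the underlying trend rather than measurement noise. This is a stand-in example only, not one of the specific techniques of [79,80].

```python
def moving_average(samples, window=3):
    """Smooth a sensor stream so the DT tracks the underlying trend
    rather than measurement noise (a stand-in for the filtering
    techniques cited above)."""
    out = []
    for i in range(len(samples)):
        lo = max(0, i - window + 1)          # truncated window at the start
        out.append(sum(samples[lo:i + 1]) / (i + 1 - lo))
    return out

noisy = [10.0, 10.4, 9.6, 10.2, 9.8]         # illustrative sensor readings
print(moving_average(noisy))
```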
The increasing use of DTs in the several fields previously described led to the emergence of several software solutions dedicated to DT implementation. As with the definition of the DT itself, these software solutions are usually designed around the functionality needed at a given moment for specific applications and around the specific chosen definition of a DT. For this reason, the main software solutions are proprietary solutions designed by each player to meet the needs of their own products. There are many commercial software solutions that implement industrial DT technology, mainly developed by big companies of the manufacturing sector. The following is a non-exhaustive list of popular vendor DT technologies:
• General Electric (GE) developed an advanced and functional Digital Twin that integrates analytic models for components of the power plant that measure asset health, wear and performance. This DT can be integrated into the distributed Predix platform developed by GE for "large-scale machine data processing, management and analytics" and IIoT applications [81].
• PTC Windchill is a DT developed by PTC to help manufacturers across industries understand how their customers are using their products. This way, they can help them improve the design and performance of those products [82].
• 3DS is a DT developed by Dassault Systemes that allows manufacturers to make virtual products available to the market for experimentation and testing in realistic conditions before engaging in any real production [83].
• Microsoft Azure DT Software is an IoT service that virtually replicates the physical world by modeling the relationships among people, places and devices in a spatial intelligence graph [84].
• Seebo DT is a graphical interface that allows the generation of actionable insights that maximize overall equipment effectiveness, reduce unplanned downtime and uncover the root cause of issues. Dashboards allow real-time visualization of the operational health of deployed machines and display enriched alerts with predictive metrics based on key machine parameters, such as machine temperature, pressure, vibration, humidity, fatigue and wear, in order to quickly identify and solve issues remotely [85].
• AnyLogic software provides simulation capabilities in a single commercial package, with special research licenses available. It is specialized in factories and production lines, with discrete-event simulation capabilities, and has libraries capable of supporting several types of fields [86]. The tool was used for a prototypical implementation of the data-driven DT generation approach in [64].
• Ansys developed a DT that can be used to monitor real-time prescriptive analytics and test predictive maintenance to optimize asset performance. The DT can also provide data to be used to improve the physical product design throughout the product lifecycle [87].
• IBM developed a DT framework that helps companies to virtually create, test, build and monitor a product, reducing the latency in the feedback loop between design and operation. It enables identifying and fixing problems and bringing products to market more quickly [88].
• Factory I/O is a software developed by Real Games [89] that allows setting up configurable 3D simulations by plugging in components from a given industrial equipment catalog. While the software provides the simulation aspects of digital twins, explicit synchronization between the real system and the virtual replica is limited to the integration of several Programmable Logic Controllers (PLCs) for simulating the virtual factory.
• Siemens offers several services for constructing digital twins, including a machine-human interface [90] that can be used for the construction of a digital twin for humans and a portfolio called Digital Enterprise Suite [91], which includes, e.g., digital twins for material transport equipment.
Unlike proprietary products, open-source solutions allow the technology to be freely redistributed and modified, supporting manufacturers in combining older equipment with modern sensor-based machines and tools from different vendors. Moreover, open source hardware supports faster prototyping and customization, which helps manufacturers accelerate the design and improve interoperation across actual lifecycle processes [66]. Although the open source community works hard to develop software and hardware solutions to be applied in Industry 4.0 and smart manufacturing, only a few open source DT solutions are available:
• CPS Twinning is a framework for generating and executing DTs that mirror cyber-physical systems [92]. It is a proof of concept that can be used as a first approach to model some environments, but it also has some limitations, such as the inability to generate DTs for wireless devices [93].
• Wrld3d is an open source platform that allows the creation of DTs in a quick and easy manner, using a comprehensive set of self-serve tools, SDKs, APIs and location-intelligent services. As a dynamic 3D mapping platform, it allows creating virtual indoor and outdoor environments upon which data from sensors, systems, mobile devices and location services can be visualized with millimeter accuracy [94].
• Mago3D is a platform for visualizing massive and complex 3D objects, including building information modeling (BIM), in a web browser. Thus, it is possible to model DTs that create parallel "worlds" in a virtual reality with several sensors [95].
• i-Maintenance toolkit enables creating a DT of an industrial asset in order to obtain information on the status of all components related to the production and maintenance of the industrial process and to collect, monitor and analyze life-cycle data. It is composed of a messaging system, a set of adapters to integrate sensor/actuator systems and other software components that are used as a technical foundation for the DT development [96].
Summarizing the literature review, we emphasize the following statement of Lu et al. [42]: "Though studies have reported the potential application scenarios of Digital Twin in manufacturing, we identified that current approaches to the implementation of Digital Twin in manufacturing lack a thorough understanding of Digital Twin concept, framework, and development methods, which impedes the development of genuine Digital Twin applications for smart manufacturing." Several tools already exist for building and running digital twins, but most are specialized for constructing a DT of a specific component, losing sight of the connections and dependencies to its environment. Hence, there is a lack of methods for increasing the understanding of the effects of the (mis-)behavior of a component on the overall system, such as a factory or a network of factories. This aspect is related to the findings of Tao et al. [38] in 2018, who identified gaps in the usage of DTs in control applications.

Cyber-Range Concept and Definition
The term Cyber-Range (CR) is probably less popular than that of DT and potentially even more ambiguous, as it was originally coined in the context of military cyber-defense professional training before becoming a standard term of the cybersecurity industry to designate, more widely, simulation, testing and training platforms. It takes its roots in an analogy with shooting ranges, which are specialized facilities designed for firearms qualification, training or practice [104].
The rationale for broadening the definition and scope of application of CRs is that an IT network simulation capacity realistic enough to support the training of cybersecurity professionals can equally serve a number of other use-cases. CRs have been used as testbeds for the development of novel detection algorithms, as virtual environments for the demonstration of new security products [105], as a means for massive realistic data generation to train novel AI algorithms [106], as testing environments for the certification of IT and security equipment [107] and as instruments for decision making in cyber-incident response [108].
The classical attributes of a Cyber-Range include a virtualization layer, a catalogue of attack and equipment templates, a scenario engine and, in training-oriented platforms, a Learning Management System (LMS).
State of the Art in Cyber-Ranges
While the above definition intentionally accepts the many known uses of CRs, a look at the state-of-the-art technology reveals that most available products essentially target the market of training services. The following is a non-exhaustive list of commercially available Cyber-Ranges and a description of their main characteristics:
• Airbus Cyber-Range is a proprietary development from Airbus Cybersecurity. It exists in three versions: as a fixed turnkey platform, as a mobile platform and as a service in the cloud. It has a drag-and-drop design interface for network modeling, a large catalogue of attacks and equipment templates and a growing OT modeling and simulation capacity. It is customizable without professional services [109].
• Ravello CR is a proprietary development from SimSpace. It exists in fixed, mobile and cloud versions and provides an industrial virtualization layer, a modern web management interface, a drag-and-drop design interface, a hardware network traffic generator, a scenario engine and an LMS. It is customizable without professional services [114].
• Cisco CR is a proprietary development from Cisco. It is available as a fixed platform or in the cloud. It provides an industrial virtualization layer, a large attack catalogue and a scenario engine [115].
• Cdex CR is a proprietary development from Vector Synergy. It provides an industrial virtualization layer, an attack catalogue and a scenario engine [116].
As this list suggests, most CRs are proprietary products, as are the dominant VM technologies. Few CRs have developed a community approach to support the exchange of templates or scenarios by users, with adequate access control and user-rating mechanisms to ensure legality of use and quality of content. Although many vendors claim to address IT and OT infrastructures equally, most CRs on the market currently do not integrate OT modeling beyond the network level. OT CRs typically include virtual or physical PLC equipment and SCADA (Supervisory Control and Data Acquisition) software, as well as traffic representative of industrial protocols.

Embedding Human Behavior Modeling Capacity
One of the challenges of Industry 4.0 is the way technology modifies the role of human workers, which shifts toward collaboration with robots on the shop floor rather than manual tasks or automated tasks supervised by human operators [117]. With this in mind, it is crucial that the human dimension also be included in the DT for overall factory modeling. Considering human behavior is crucial from the manufacturing optimization point of view, but also to foster safety in the FoF. However, as the state of the art confirms, this area is at a very early stage of development [57,118]. In particular, validation of the models, collection of real worker data to provide model inputs and comparison of the simulation results with the manufacturing systems, as well as sensitivity analysis, are significant innovation fields where further research is needed. The integration of human behavior modeling in the DT will enable improving factory design from both a performance and a resilience perspective [50]. Beyond its usefulness for design optimization, such a DT can also support system configuration management, upgrade validation, software update testing before deployment and assessment of resilience towards new threats. Moreover, it makes sense that the DT remain connected to the real factory in operation to support situational awareness [108] and provide interactive interfaces to factory management and operators.
In the CyberFactory#1 project, we are not interested in modeling human behavior for an ergonomic evaluation or even a full human simulation. Our aim is to use real worker data to develop models that allow their inclusion in the FoF modeling activities, through DT development. It is important to note that the common way of integrating the human part of the system into the DT raises some problems. Usually, all decision-making tasks are transferred to the technical system and the human is used only as a support for process execution. This kind of integration can be seen as a devaluation of the work of human workers, which can hinder the integration and development of the DT and its acceptance by the workers [119]. It is crucial that the human part of the system be seen as an invaluable part of the entire system: humans bring, for example, flexibility and problem-solving competences to the whole process.
Graessler and Poehler [57] developed a human DT that takes over the communication and coordination tasks of the employee with the production system and acts as a representative. Their aim was to maintain the possibility for the employee to contribute to and manipulate the production system while simultaneously using computer technologies for the automation of planning and control processes. The developed human DT is therefore a cyber-physical device of its own; it is connected to the cyber-physical production system (CPPS) and tries to emulate the human employee through dynamically adapted database values, which represent, for example, properties, preferences, work schedule and skillset. The human employee can also control their DT via a mobile device.
Buldakova and Suyatinov [118] developed models for assessing the status of the human operator in cyber-physical systems. These models are used to assess the functional state of human operators. They are based on the concept of a hierarchical representation of a virtual model of a complex system and a synergistic method of basic models. Psychological tests were performed on human operators and used to estimate the parameters of the DT model. In addition, the cardiovascular system of the human operators was also modeled to obtain information about its functional states: at rest and under psycho-emotional stress. Therefore, the model developed by the authors includes several important features for modeling the human operator in cyber-physical systems, such as the strength of nervous processes, the severity of emotional stress, age, blood cholesterol and body mass index.
Human behavior can greatly affect all the manufacturing processes. For example, the various interfaces that need human inputs can be highly error-prone. A stressed or fatigued worker can cause a problem in a machine, or a lack of attention during the job can affect the final product [120]. Moreover, it can be very difficult to distinguish whether these mistakes are accidental or malicious. To determine this, understanding human intention by comparison with normal behavior patterns is required. Physical human behavior modeling has been intensively addressed in the fields of robotics, security and facility management to support optimization and anomaly detection use-cases. Human behavior in the digital sphere has been addressed by sophisticated detection techniques known as User and Entity Behavior Analysis (UEBA) [121]. However, understanding a chain of events revealing human behavioral patterns across the cyber and physical spheres remains an unaddressed challenge. In the scope of the CyberFactory#1 project, such a capability will be developed and demonstrated.
Several techniques will be used to develop the human behavior modeling (Figure 3). Real-time monitoring of the workers to understand their behavior will be crucial to gather information about what is happening on the shop floor regarding human workers. These data will allow the extraction of features that will help in the definition of user profiles. Profiling will be very useful to understand whether the behavior of each worker, at a certain time, is in accordance with the usual one or not. Therefore, it will be easier for the system to understand whether unusual activities are taking place on the shop floor, to predict impacts on processes and costs and ultimately to recommend corrective actions. This is important to improve the safety of the whole manufacturing system, as well as to reduce maintenance times and costs. The interaction between the machines and the human workers will also be monitored and modeled. It will allow monitoring human behavior and inferring the worker's state of mind. For example, it would help to understand whether the worker is too tired or stressed, or whether someone is performing a malicious action.
To avoid interfering with the regular working space and tools, our approach will rely on non-intrusive and non-invasive monitoring and detection of the human condition, with a focus on the mental state, such as fatigue, sleepiness or lack of attention. Figure 3 illustrates the concept we use, in which monitoring relies on the standard means of interaction with equipment, systems and applications, such as the usual peripherals (mouse and keyboard). Artificial intelligence techniques will then be used to build worker profiles and to predict and detect unusual behaviors, analyzed from the optimization, safety and security points of view.
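One simple way to realize the profiling step described above is to learn a per-worker statistical baseline of a behavioral feature and flag observations that deviate strongly from it. The sketch below is a minimal z-score variant under our own naming; the feature, functions and threshold are illustrative assumptions, not the project's actual algorithm.

```python
import statistics

def build_profile(baseline_samples):
    """Per-worker profile of one behavioral feature
    (e.g., the interval between keystrokes, in seconds)."""
    return {
        "mean": statistics.mean(baseline_samples),
        "std": statistics.stdev(baseline_samples),
    }

def is_unusual(profile, observation, threshold=3.0):
    """Flag observations more than `threshold` standard deviations
    away from the worker's own baseline."""
    if profile["std"] == 0:
        return observation != profile["mean"]
    z = abs(observation - profile["mean"]) / profile["std"]
    return z > threshold

profile = build_profile([0.21, 0.19, 0.20, 0.22, 0.18])  # baseline samples
print(is_unusual(profile, 0.20))  # pace within the usual range
print(is_unusual(profile, 0.95))  # far outside the baseline
```

A deployed system would of course combine many such features and more robust models, but the principle of comparing current behavior to a learned personal baseline is the same.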
The actuators on the human, as illustrated in Figure 4, at the strategies/control module, come from a module where several strategies such as gamification, affective computing and mental chronometry can be used to achieve better results. Gamification approaches use game design elements in non-ludic contexts to motivate and reward the learning process [122]. Several strategies can be used, such as points, merit badges, leaderboards and tasks with increasing difficulty, to motivate an involvement of the user in the learning process. In the manufacturing context, gamification can help the worker understand a new technology that needs to be used or even optimize their efficiency and motivation [123]. Reinforcing a positive worker behavior can also be done using gamification. There are safety and health systems that reward workers with game points on the detection of a specific behavior; for example, the use of required protective equipment can be validated by cameras, or hand washing can be detected by smart soap dispensers [124]. However, the introduction of gamification can bring some challenges: it requires the voluntary participation of the workers, the results may decrease over time and monetization creates competition between employees, which can demotivate them in the long term [123]. Nonetheless, the positive impact of gamification can be so beneficial that it is certainly an interesting field to explore and apply to the Factories of the Future.

Affective computing can also be very useful to understand human behavior, by interacting emotionally with the workers, recognizing emotions and expressions and communicating through them [125]. Several techniques can be used to recognize the emotions, such as emotional speech perception [125,126], physiological measures [125], gestures and body movement [125] and textual and facial expressions [126]. Combining all these techniques can be a good way to increase the accuracy of the recognition process [126].
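The point-based reward mechanism mentioned above for gamified safety systems can be sketched minimally as follows; the event names and point values are hypothetical, and a real system would connect the events to actual detectors such as cameras or smart dispensers.

```python
# Toy sketch of gamified safety rewards: workers earn points when a
# detector confirms a safe behavior. Event names and point values are
# illustrative assumptions, not taken from any cited system.

POINTS = {"protective_equipment_on": 10, "hands_washed": 5}

def award_points(scores, worker, detected_event):
    """Credit the worker for a detected safe behavior; unknown events earn 0."""
    scores[worker] = scores.get(worker, 0) + POINTS.get(detected_event, 0)
    return scores[worker]

scores = {}
award_points(scores, "worker_1", "protective_equipment_on")
award_points(scores, "worker_1", "hands_washed")
print(scores["worker_1"])  # 15
```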
The recognition of emotions can be used, for example, to understand the state of fatigue or stress of the worker [127]. For instance, drowsiness detection is widely used to determine whether the driver of a vehicle is driving in a dangerous state [128].
Mental chronometry, which allows the measurement of reaction times, can also be a useful tool to understand human behavior and ensure the safety of the system. Reaction time tests allow the measurement of cognitive efficiency, cognitive decline, early attention complaints and memory impairments [129]. Thus, integrating these tests into the production process in a non-intrusive way can help to understand the state of mind of a worker. For example, if we notice that the worker is taking some time to respond to an alert on the machine, we can trigger some tests (e.g., go/no-go tests) to try to understand what is happening. According to the test result, we can infer whether the worker is facing some stress problem or whether someone is trying to perform a malicious action.
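A minimal sketch of how such a reaction-time block could be scored is given below; the baseline, thresholds and labels are illustrative assumptions, not a validated cognitive test.

```python
def assess_reactions(reaction_times_ms, baseline_ms=350.0, slowdown=1.5):
    """Classify a block of go/no-go reaction times (sketch only).

    A mean reaction time well above the worker's baseline is taken as a
    possible sign of fatigue or distraction; a missed response (recorded
    as None) counts as a lapse. All thresholds are illustrative.
    """
    lapses = sum(1 for t in reaction_times_ms if t is None)
    valid = [t for t in reaction_times_ms if t is not None]
    mean_rt = sum(valid) / len(valid) if valid else float("inf")
    if lapses > 0 or mean_rt > slowdown * baseline_ms:
        return "attention degraded"
    return "normal"

print(assess_reactions([340, 360, 355, 348]))    # responses near baseline
print(assess_reactions([650, 700, None, 620]))   # slow responses and a lapse
```

In practice, the baseline would itself come from the worker's profile, and the output would feed the decision-support layer rather than be shown directly.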
As a conclusion, several techniques can be applied to understand human behavior in the manufacturing process and to detect, and even prevent, cognitive overload or underload, which can cause loss of performance or potentially dangerous behaviors that could raise safety and security concerns. This will also be the basis for improved ethics, responsibility and accountability of workers in the context of an increased digitization that brings humans to collaborate with robots. However, it is crucial to model all key assets of the manufacturing process, whether human, material or digital, to provide useful decision support for FoF optimization and resilience. For this, a co-simulation capacity is required.

Enabling Co-Simulation for Holistic FoF Modeling
As discussed above, the FoF consists of a huge number of connected components covering a huge number of application fields. According to Lu et al. [42], FoF DTs can be built for manufacturing assets, people, factories and production networks, with every component mirrored by a DT. To test the influence of single components on the overall production process and to identify optimization and security gaps for the FoF, it is important to understand the interoperability and dependencies of those components. While building one overall DT for the FoF ex nihilo seems utopian, it appears more realistic to integrate individual DTs according to the hierarchical structure of the FoF. This requires the possibility to connect DTs so as to mirror the relations between their physical counterparts. Such a composition capability of DTs was presented by Malakuti et al. [59]. According to the authors, DTs can be composed in the following ways:
• hierarchically, such as manufacturing assets building a production environment;
• by association, such as producers and consumers in production lines; and
• peer-to-peer, as a network of systems with similar behavior or inputs and outputs.
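These three composition patterns can be sketched as a small data structure; the sketch below is purely illustrative, and all class and method names are our own rather than taken from [59].

```python
# Sketch of the three DT composition patterns listed above
# (hierarchical, association, peer-to-peer). Names are illustrative.

class DigitalTwin:
    def __init__(self, name):
        self.name = name
        self.children = []      # hierarchical: DTs contained in this one
        self.consumers = []     # association: downstream DTs fed by this one
        self.peers = []         # peer-to-peer: DTs with similar behavior/IO

    def contains(self, dt):
        self.children.append(dt)

    def feeds(self, dt):
        self.consumers.append(dt)

    def peer_with(self, dt):
        self.peers.append(dt)
        dt.peers.append(self)   # peer links are symmetric

factory = DigitalTwin("factory")
press, oven = DigitalTwin("press"), DigitalTwin("oven")
factory.contains(press)         # hierarchical composition
factory.contains(oven)
press.feeds(oven)               # producer/consumer association
spare_press = DigitalTwin("press_2")
press.peer_with(spare_press)    # similar behavior, similar inputs/outputs

print([c.name for c in factory.children])
```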
The real power of a DT, and the reason it could matter so much for Industry 4.0, is that it can provide a near-real-time, comprehensive linkage between the physical and digital worlds. DTs promise richer models that yield more realistic and holistic measurements of unpredictability. Thanks to cheaper and more powerful computing capabilities, these interactive measurements can be analyzed with modern massive processing architectures and advanced algorithms for real-time predictive feedback and offline analysis. These can enable fundamental design and process changes that would almost certainly be unattainable through current methods.
The creation of the DT encompasses two main areas of concern:
• DT process design and information requirements: It is important to link the process flow to the applications, the data needs and the types of sensor information required to create the DT. The process design should also be concerned with attributes and features that allow the improvement of cost, time or asset efficiency. These typically form the baseline assumptions from which the DT enhancements should begin.
• DT conceptual architecture: The architecture should represent a model of a manufacturing process in the physical world and its companion twin in the digital world. The DT serves as a virtual replica of what is actually happening on the factory floor in near-real time. Thousands of sensors distributed throughout the physical manufacturing process collectively capture data along a wide array of dimensions: from behavior characteristics of the productive machinery and works in progress (thickness, color qualities, hardness, torque, speeds, etc.) to environmental conditions within the factory itself. These data should be continuously communicated to and aggregated by the DT application.
The DTs in the FoF span different scopes and application fields. The individual DTs for the components and networks involved in production will be developed by different experts and engineers, which leads to various concepts and structures. Composing these DTs might therefore raise integration issues.
To bring a complex system of various and possibly independently developed components together, Gomes et al. [130] introduced co-simulation as a suitable approach. Co-simulation, "the coordinated execution of two or more models that differ in their runtime environments" [131], differs from other simulation types in the number of underlying models and the number of solving engines.
Using one solver or calculating engine with one model of the overall system is called "classic simulation" (Figure 5); in a "parallel simulation," one model is executed by several solvers. Considering the diversity of domains, neither kind of simulation fits the FoF because both require a single model to solve. Different scopes come with different model types, which makes it difficult to build one overall model. A "hybrid/merged simulation" allows the execution of more than one model, but still with one single solver. Different developers may use their own computing platform or engine, for example because it is optimized for their specific scope. Even if a combination of independently developed solvers is possible, it requires integration effort and can be suboptimal for each application field. Co-simulation offers the possibility to execute different models on their individual computing engines. All models can be executed simultaneously, hence reflecting the interdependencies between the modeled components. It is therefore the most distributed and flexible simulation type. Models and their solvers can be seen as single exchangeable entities, which enables the simulation and test of hardware and software implementations [131]. Gomes et al. [130] defined co-simulation units as a "replacement of real system, ready to take inputs and produce a behavior trace." Depending on their timing behavior, these units decompose into Discrete Event (DE) and Continuous Time (CT) units. DE units are represented by a state valuation and change their values only at discrete timestamps; for instance, a production system reacting to an input event with a production output event. The orchestration of multiple DE units can be done by event handling. The High-Level Architecture (HLA) is introduced as a suitable standard which "focuses on interoperability and reusability of components and offers time management interoperability" [133].
CT units vary continuously over time, for instance to represent physical behavior. Their orchestration can be done by value exchange at constant time intervals. A suitable standard for such value exchange is the Functional Mock-up Interface (FMI), which treats the CT units as Functional Mock-up Units (FMUs) [134].
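The CT orchestration just described, i.e., independent solvers exchanging values at fixed communication points, can be sketched in a few lines. The `set_input`/`step`/`get_output` interface below is a simplified stand-in for an FMI-style API, not an actual FMI implementation, and the coupled dynamics are invented for illustration:

```python
class CTUnit:
    """Simplified stand-in for an FMU: owns its own model and solver state."""
    def __init__(self, gain):
        self.state = 0.0
        self.input = 0.0
        self.gain = gain

    def set_input(self, value):
        self.input = value

    def step(self, dt):
        # Internal solver: one explicit Euler step of dx/dt = gain*(input - x);
        # a real FMU may take many internal sub-steps per communication interval.
        self.state += dt * self.gain * (self.input - self.state)

    def get_output(self):
        return self.state


def cosimulate(unit_a, unit_b, dt, steps):
    """Master algorithm: exchange values at constant communication points."""
    for _ in range(steps):
        # Exchange outputs computed at the previous communication point.
        unit_a.set_input(unit_b.get_output())
        unit_b.set_input(1.0)  # external stimulus driving unit_b
        # Both units then advance independently over the same interval.
        unit_a.step(dt)
        unit_b.step(dt)
    return unit_a.get_output(), unit_b.get_output()
```

Each unit keeps its solver private; the master only sees inputs and outputs, which is what makes a unit an exchangeable entity in the sense of [131].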
Even though such standards exist, Gomes et al. pointed out that the use of standardized interfaces does not make the simulation units uniform [130]. Coding an orchestration is still necessary and remains a real challenge. Moreover, the HLA and FMI standards are not suited to coupling DE and CT units. The authors therefore presented a hybrid approach as a possible solution, in which one type is converted to the other by a wrapper unit:


•	Hybrid DE wraps every CT unit as a DE simulation unit and uses a DE-based orchestration.
•	Hybrid CT wraps every DE unit to become a CT unit and uses a CT-based orchestration.
To realize these orchestrations, many tools and frameworks, including industrial standards, already exist and are used in production environments. Even though such a hybrid approach is demanding, co-simulation can combine FoF components with different time models.
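A hybrid-DE orchestration along these lines can be sketched as follows: the CT unit is wrapped so that it only reacts to discrete events, internally stepping its continuous solver up to each event's timestamp. The class names and dynamics are illustrative, not taken from any standard:

```python
class CTUnit:
    """Continuous-time unit: simple exponential approach toward its input."""
    def __init__(self, gain=1.0, dt=0.01):
        self.state, self.input = 0.0, 0.0
        self.gain, self.dt = gain, dt
        self.time = 0.0

    def advance_to(self, t):
        # Step the internal solver in fixed increments up to time t.
        while self.time + self.dt <= t:
            self.state += self.dt * self.gain * (self.input - self.state)
            self.time += self.dt


class HybridDEWrapper:
    """Wraps a CT unit so a DE orchestrator can drive it with events."""
    def __init__(self, ct_unit):
        self.ct = ct_unit

    def handle_event(self, timestamp, value):
        # Catch the continuous state up to the event time, then apply
        # the event payload as the new input value.
        self.ct.advance_to(timestamp)
        self.ct.input = value
        return self.ct.state  # state valuation visible to the DE world


def run_de_orchestration(events, wrapper):
    """Minimal event-driven orchestrator: process time-ordered (t, value) events."""
    return [wrapper.handle_event(t, v) for t, v in sorted(events)]
```

Between events, the wrapped unit's continuous behavior is invisible to the DE orchestrator; only the state valuations at event timestamps are exposed, which is exactly the conversion the hybrid-DE approach requires.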
With these capabilities, co-simulation plays an important role in the integration of multiple DTs in the FoF. Negri et al. [135] already presented the simulation of a complex system as an opportunity to "mirror the life of the corresponding digital twins." According to Rosen et al. [48], the FoF consists of independent, self-managed production units; this setup is also captured by co-simulation. Gomes et al. [130] described co-simulation even as a holistic development and simulation process for a system with independent components. Since co-simulation units can also have a hierarchical structure and reflect their physical behavior, co-simulation offers an overall modeling and simulation approach with multiple levels of resolution for several domains.
Co-simulation is also domain-agnostic and covers the several scopes tackled by the FoF. It therefore solves a modeling and simulation problem at the level of the complete production process, taking into account the interaction between tangible and intangible assets, cyber-physical systems and networks. According to Pedersen et al. [136], human behavior and interaction models are compatible with co-simulation. As described above, such an integration of human models is highly relevant for improving the application of DTs in the FoF.

Enhancing FoF Optimization
As sketched in Section 2, the quest for optimization is the key motivator for industrial progress and innovation. DTs of specific components of a factory, such as modern machines, are already successfully applied for monitoring and optimization of their physical counterparts (cf. Sections 3.1 and 3.2). Nevertheless, solely optimizing the performance of individual systems limits the achievable level of optimization: while a machine can be optimized with respect to its own key performance indicators, such as production rate, maintenance needs, energy consumption and fault tolerance, this local optimization does not necessarily lead to an overall improvement of factory performance. Conversely, optimal performance and functionality of the FoF cannot in general be reached by independent optimization of its components, since the overall optimum also depends on the interaction of the components. The combination of DTs as depicted in Section 4.2, representing the FoF as a System of Systems with interacting components, is a promising approach for enhancing FoF optimization.
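A toy illustration of why local optimization can fall short: assume, hypothetically, two machines in series with a finite buffer between them. Pushing the first machine to its individual maximum rate only overflows the buffer, while matching the rates achieves the same output with no waste. The simulation below is a deliberately simplified sketch, not a model of any real line:

```python
def line_throughput(rate_a, rate_b, buffer_cap, ticks=1000):
    """Two-stage line: machine A feeds a finite buffer, machine B drains it.
    Items produced by A when the buffer is full are scrapped (blocking loss)."""
    buffer, finished, scrapped = 0.0, 0.0, 0.0
    for _ in range(ticks):
        # A produces at its configured rate, limited by remaining buffer space.
        space = buffer_cap - buffer
        accepted = min(rate_a, space)
        scrapped += rate_a - accepted
        buffer += accepted
        # B consumes whatever is available, up to its own rate.
        consumed = min(rate_b, buffer)
        buffer -= consumed
        finished += consumed
    return finished, scrapped
```

Running A at its "locally optimal" full rate of 1.0 against a downstream rate of 0.5 yields the same finished output as a matched rate of 0.5, but with half of A's production scrapped: a local KPI improvement with zero global benefit.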
In the state of the art on DTs, first steps towards DTs with an integrated representation of the dependencies between several factory components have been taken. In 2017, Zhang et al. [54] presented a new kind of DT-based model for the individualized design of a hollow glass production line, combining custom design theory, basic synchronization technology and an intelligent multi-objective optimization algorithm. The proposed model allows the digital simulation of the production line performance in the real world. The authors validated the model on a hollow glass production line. Its success led them to claim that it can also be applied to many automated flow-type manufacturing systems, such as custom-made furniture production lines and 3C product manufacturing lines.
Vachálek et al. [53] presented a work that addresses the optimization of an experimental production process, involving the continuous processing of process data, using a DT. The DT is formed by the physical production line and its digital "copy." Using the DT, the authors demonstrated the interaction of the real production processes with a digital simulation model. This interaction brings new insights into the dynamics of the production process: it is possible, for example, to monitor the consequences of changing certain parameters on the production line.
Tao and Zhang [39] widened the scope of DTs from individual machines or separate production lines to shop floors: the concept of the DT shop floor (DTS) is based on a DT that provides an effective way to achieve physical-virtual convergence. The DTS offers both a physical and a virtual view of the shop floor. The following four DTS parts are presented and implemented by the authors: the physical shop floor (PS) includes the entities that exist in physical space, such as humans, machines and materials. The virtual shop floor (VS) consists of models built in several dimensions, including geometry, physics, behavior and rules. The VS works with the PS, providing control orders for the PS and optimization strategies for the next component. The shop floor service system (SSS) is an integrated service platform which encapsulates the functions of the Enterprise Information System (EIS), the computer-aided tools, the models, the algorithms, etc., into sub-services. Finally, the shop floor digital twin data (SDTD) include the individual data from PS, VS and SSS as well as the existing methods for modeling, optimization and prediction. The authors also described some key technologies that help in the implementation of the DTS. As research in an initial stage, this work provides an insight into the DTS and a guideline for future work. Later, the same authors [137] applied the knowledge gathered to propose a framework for equipment energy consumption management in the DTS.
Nikolakis et al. [138] proposed an implementation of the DT approach as part of a wider cyber-physical system (CPS) to enable the optimization of the planning and commissioning of human-based production processes using simulation-based approaches. A combination of low-cost sensors (optical, force and torque) is used to record human motion (e.g., walking, picking objects, placing objects and carrying objects) on the shop floor. The data collected by the sensors are then used for improving the ergonomics of warehouse operations as well as for reconfiguring the station towards improved ergonomics. The experimentation can occur without interrupting the real production process. Thus, the developed system provides a digital testbed empowering production-wise applications such as human resources planning, station design and reconfiguration, as well as rapid prototyping.
In the CyberFactory#1 project, we aim for DTs that do not simply reflect separate components or small networks of systems, such as production lines or shop floors, but support a holistic approach, considering multiple components of the FoF and their interactions for optimization at FoF level. Key enablers of such a global optimization approach are the integration of the DTs of individual systems, such as machines, transport robots for logistics, production lines, logistic systems between different factories and complete factories, into one overarching DT (e.g., by using a co-simulation approach, cf. Section 3.2), together with a central data lake. The data lake stores and provides previously recorded information on states of the FoF. Figure 6 sketches the relation between the physical FoF, its DT and the services for optimization of the FoF.
Based on the information provided by the data lake, predictions on aspects of the FoF behavior can be made. Those predictions can be used to feed the DT, allowing its use for simulating future behavior. Careful prediction can be used to resolve potential non-deterministic behavior of the DT (and of its physical counterpart). Based on such predictions, the DT can be used for testing and analyzing reconfiguration alternatives of the FoF in specific situations without having to apply the reconfigurations to the physical (real) FoF. The result of the decision-support service for reconfigurations is handed to the physical FoF, so that the chosen reconfiguration can be applied.
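The decision-support loop just described (predict, simulate alternatives in the DT, return the best reconfiguration to the physical FoF) can be outlined as below. The `toy_dt_simulate` function and its `"lines"` parameter are invented placeholders for the full co-simulated DT, and the cost weights are arbitrary:

```python
def evaluate_reconfigurations(predicted_demand, alternatives, simulate):
    """Score each reconfiguration alternative against the predicted demand
    using the DT, and return the best one without touching the real FoF."""
    results = []
    for config in alternatives:
        produced, cost = simulate(config, predicted_demand)
        shortfall = max(0.0, predicted_demand - produced)
        # Penalize unmet demand much harder than operating cost (illustrative weights).
        results.append((shortfall * 100.0 + cost, config))
    best_score, best_config = min(results, key=lambda r: r[0])
    return best_config, best_score


def toy_dt_simulate(config, demand):
    """Hypothetical stand-in for the co-simulated DT: capacity-limited output
    with an operating cost proportional to the number of active lines."""
    capacity = config["lines"] * 10.0
    produced = min(capacity, demand)
    cost = config["lines"] * 3.0
    return produced, cost
```

The key property is that every alternative is scored purely in the virtual model; only the single chosen configuration is ever handed back to the physical FoF.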
Combining the prediction and reconfiguration services above, the FoF DT can be used to understand the consequences of reconfiguration alternatives ("strategies") on the material supply chain, leading to supply chain optimization. By using a DT that represents not only the systems on one production floor but several production lines and the infrastructure between different production facilities, the consequences of reconfigurations in individual systems become visible, even when only spatially distant systems or production units are affected. The FoF DT, as the result of a holistic and comprehensive FoF modeling and simulation approach, enables the analysis of reconfiguration alternatives without having to apply all alternatives in the physical FoF, thus avoiding expensive production downtimes.
The consideration of a network of DTs, instead of separate digital representations of individual systems, enables the integration of methods that not only predict needed maintenance of individual systems but also address advanced optimization aspects such as equal wear and tear across several systems. Consequently, the maintenance of several systems can be done at the same time, reducing the number of visits by external service teams, service costs and potential downtimes of the FoF due to maintenance. In the CyberFactory#1 project, we plan to use a DT of a network of systems, e.g., for predicting which configurations of the individual systems lead to the best results regarding equal wear and tear.
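A minimal sketch of the equal-wear idea: assign each job to the least-worn machine, so that wear levels converge across the fleet and maintenance can be batched into a single service visit. Wear values and job loads below are invented for illustration:

```python
def assign_jobs_balanced(jobs, wear):
    """Greedy load assignment: each job goes to the machine with the
    lowest accumulated wear, driving wear levels toward equality."""
    wear = list(wear)  # copy; do not mutate the caller's fleet state
    for load in jobs:
        target = min(range(len(wear)), key=lambda i: wear[i])
        wear[target] += load
    return wear


def wear_spread(wear):
    """Gap between the most- and least-worn machine; smaller means
    maintenance for the whole fleet can be scheduled together."""
    return max(wear) - min(wear)
```

In the full project setting, the wear predictions would come from the networked DT rather than from a static list, but the scheduling principle is the same.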
In summary, several operational benefits towards FoF optimization are enabled by complementing existing methods for the optimization of individual systems with a holistic approach using a FoF DT that can comprehensively represent the interdependencies of the FoF components.

Fostering FoF Resilience
The analysis of the state of the art in Section 2 suggests that progress in DTs and CRs will soon lead to a point of convergence at the intersection between cyber-physical equipment and the manufacturing network [49]. However, no attempt to also perform security tests and simulation in a DT has yet been reported. DTs to date do not provide any simulation to support the understanding of the effects of a cyber-incident affecting the represented system in its physical and logical dimensions [93]. This is an important point of research, since it would give a new dimension to DTs, enabling them to support cybersecurity design, training and simulation targeting industrial environments. DTs effectively support applications for safety, benefiting from their ability to compare the actual behavior of machines with normal patterns. Applications for security additionally require analysis of the incident root cause and its qualification as intentional or non-intentional [47]. Such analysis could be done based on traces retrieved from the network. In a context where manufacturing automation grows ever more connected and autonomous, the qualification of anomalies as either attacks or incidents becomes more than a secondary detail. In the case of a safety problem, the response is essentially about activating a fail-safe mode, finding and fixing the issue and bringing the system back to service. In the case of a security problem, one must anticipate the possible next actions of the attacker and prevent them predictively, while identifying the best solution to remediate and recover from the damage already caused. Fail-safe and fail-secure can in many ways be antagonistic principles [139].
CRs have been developed primarily for cyber-defense, with a focus on network and information security [104]. A handful of cyber-range editors are currently adapting their CRs to the needs of OT networks and Industrial Control System (ICS) security. In short, while DTs increasingly aim to address the IT layer of Cyber-Physical Systems (CPS), CRs aim to more accurately simulate the physical processes which lie underneath OT networks [109,110,112]. By bridging the formidable performance of DTs in application to safety with the unmatched relevance of CRs in application to security, the CyberFactory#1 project makes it possible to address the growing issue of security for safety in manufacturing. Just as CRs can inform DTs about the origin and nature (intentional/accidental) of a detected anomaly, DTs can inform CRs about the chain of impacts of a particular incident on the manufacturing process [49]. Such information is extremely valuable when it comes to selecting the best set of countermeasures and defining an optimal response plan. It is of utmost importance for economic actors, such as factory owners, to minimize attack and response costs, including physical damages, potential casualties and indirect impacts on business, intellectual property or reputation. In short, while DTs inform about physical processes, CRs inform about network traffic, bridging the gap between physical and digital layers. To complete the picture, further developments are required to better include human behavioral aspects, both in the physical world and on the network.
The use of cognitive AI can help fill this gap. DTs, through high-fidelity models and advanced machine learning abilities, can capture tacit knowledge by observing human operators, identify deviations from expected behavior and raise meaningful alerts with consideration for safety and/or security. Most published work on DTs concerns the representation of entities such as machines, production lines, robots, etc. Fewer works also try to model human behavior, and those that do focus on modeling and simulating human processes by means of digital manikins [35,140]. Much importance is given to ergonomic evaluation, i.e., to the creation of physical models and their analysis through different real-world experiments [140][141][142]. Digital human simulation is also explored by some authors; state-of-the-art systems in this field [129] include Siemens Tecnomatix Jack [143], Dassault Delmia Human [144] and RAMSIS [145]. To optimize these systems, motion capture systems are also used to capture real worker movements and to edit and animate them [146][147][148]. However, the results of this kind of simulation are not very accurate since, in some way, they model humans as technological equipment, whereas human behavior is very different from a machine's. More importantly, detecting human deviations based on their behavior in the physical space will most often reveal threats only after they have materialized into damage. To become predictive, we must observe user behavior on the network.
CRs can provide insights on anomalous user behavior observed on the network and support the virtual assessment of incident response strategies before their implementation on real infrastructure. The use of UEBA techniques will provide evidence of attack intentions even in preparatory phases and thus enable predicting rather than merely responding to cyber-threats [121]. UEBA algorithm training requires nothing but real or highly realistic traffic. When it comes to OT traffic, the issue is that such traffic is highly business-specific and in most cases confidential. Out of consideration for safety, process continuity and confidentiality, factory owners generally prohibit any active intervention on their OT network and do not easily share real industrial data at the needed scale and representativeness. Algorithm training should thus be done either on the actual infrastructure, with the required security clearance, or based on synthetic data, which a virtual replica of the actual OT could generate in a CR environment. This makes OT CRs a very strategic tool for detection algorithm training [106]. Beyond this, they form a perfect playground for cyber-defenders to test and improve their response strategies in a virtual environment without affecting operational manufacturing systems [47].
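The idea of training a detector purely on synthetic, CR-generated "normal" traffic can be illustrated with a simple per-feature baseline model. This is a deliberately naive stand-in for real UEBA algorithms, and the feature names (`pkts_per_s`, `conn_count`) are invented:

```python
import statistics


def fit_baseline(samples):
    """Learn a per-feature mean/stdev profile from synthetic normal
    traffic records, each a dict of numeric features."""
    features = samples[0].keys()
    return {
        f: (statistics.mean(s[f] for s in samples),
            statistics.stdev(s[f] for s in samples))
        for f in features
    }


def anomaly_score(record, baseline):
    """Maximum absolute z-score across features; large values flag
    behavior far from the learned baseline."""
    return max(
        abs(record[f] - mean) / (std or 1.0)
        for f, (mean, std) in baseline.items()
    )
```

The point of the sketch is the workflow, not the statistics: the baseline is fitted entirely on synthetic CR traffic, so no confidential OT data ever leaves the factory, and the fitted model can then score live records on the real network.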
The ability to correlate human behavior patterns in the physical space and in cyberspace is key to preventing insider threats, which account for 60% of cyber-attacks to date [50]. In a context where machines grow more autonomous, the relation between human operators and factory automation can change substantially. While some foresee the advent of unmanned factories, the law reminds us that the human remains accountable in any case [40]. With CyberFactory#1, we intend to leverage the complementary strengths of humans and machines by enabling collaboration and mutual learning. At the same time, we give due consideration to scenarios in which such a relationship turns wrong. Typical scenarios in the context of the FoF are a lack of vigilance of a human worker, due to extreme automation of his activities leading to cognitive underload (a safety issue) [117], or the deliberate mistraining of a collaborative robot by a human operator for sabotage purposes (a security issue). In addition, we found that skilled cyber-attackers have proven abilities to exploit basic safety mechanisms, known as "fail open" mechanisms, to their advantage [139]. To envisage a future where safety and security go hand in hand, we need to leverage the complementary strengths of DTs and CRs.
It is to be expected that DTs and CRs, which have been developed in different environments, by different vendors, to fulfill different needs, will not spontaneously interface, communicate, co-simulate and agree on a common understanding of the risk level of a complete manufacturing ecosystem. Much development work is required to achieve such synergies, and such development can hardly be done without proper collaboration between editors from the automation and security sectors. By implementing the above-described co-simulation capability, the CyberFactory#1 project initiates an integration process which will help industrial safety and security professionals come to an agreement on risk management techniques and approaches that address more effectively the types of threats which the FoF will face. As with optimization, achieving enhanced FoF resilience requires a holistic understanding of the threats, their impacts and the adequate countermeasures across physical and cyber spaces. In the context of distributed factories, new resilience mechanisms can be triggered which rely on the dynamic reconfiguration and redistribution of production tasks [149]. In this way, resilience can be achieved not solely at machine or plant level, but eventually at sectorial or even societal level.

Aerospace Manufacturing Implementation Case
Building upon the Aerospace Manufacturing use-case described in Section 2.4, we describe here the application of the design principles explained in Section 4 to the pilot factories of Airbus Defence and Space in Tablada, San Pablo Sur and Cadiz. Figure 7 describes the overarching architecture, including: (i) on the lower layers, the IIoT infrastructure to be deployed across the three sites; and (ii) on the upper layer, the modeling and simulation capacity described above.



Operator Behavior Modeling on Aerospace Shop Floor
For the purpose of this use-case, we will monitor the behavior of human workers handling gap guns, using retrieved tool data, shop floor sensors and wearables. The behavioral data will feed the data lake and be analyzed as proposed in Section 4.1 to form models of normal worker behavior. Modeling of the normal behavior of authorized users on the OT network will also be implemented, based on logs and kernel data. It will support the generation of realistic legitimate traffic in the CR environment, reflecting the conditions of normal worker and network user behavior. Here, we are building a multi-modal approach.
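As a hedged sketch of such a multi-modal baseline, the snippet below learns a per-modality normal band from recorded observations (one physical modality, one network modality) and flags any modality whose reading falls outside its band. Feature names, units and the tolerance margin are all invented for illustration:

```python
def build_profile(observations):
    """Per-modality normal range learned from recorded worker behavior.
    observations maps modality name -> list of numeric readings."""
    profile = {}
    for modality, values in observations.items():
        lo, hi = min(values), max(values)
        margin = 0.1 * (hi - lo)  # illustrative tolerance band
        profile[modality] = (lo - margin, hi + margin)
    return profile


def check_behavior(profile, sample):
    """Return the list of modalities whose reading falls outside the
    learned normal band; an empty list means the sample looks normal."""
    return [
        modality
        for modality, (lo, hi) in profile.items()
        if not lo <= sample[modality] <= hi
    ]
```

Because each modality is checked independently, a deviation can be attributed to the physical side (tool handling), the cyber side (network activity), or both, which is what correlating the two spaces requires.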

Co-Simulation of 3 Different Assets on 3 Different Sites
The IIoT connection of the roboshaves and autoclaves of the Tablada and Cadiz plants will enable retrieving large amounts of data reflecting normal operations. These assets are interdependent, and the co-simulation approach described in Section 4.1 will enable proper modeling of such interdependencies and their representation in virtual space through DT technology. Data relating to production schedules, progress, quality measurements, power consumption or environmental conditions can be fed to virtual models and analyzed by AI techniques as a means to understand relationships that would fall beyond human cognition limits.

Distributed Aerospace FoF Optimization
The factories of the Seville area are involved in collaborative production processes, which require fine scheduling of activities across the three sites. Production schedules can be revised if a particular incident on one site affects activities to be carried out on another site. Gap guns are mobile equipment which can be reassigned from one site to another if needed. Thus, the operation of roboshaves, autoclaves and gap guns is subject to interdependencies, and optimal scheduling across the three sites is possible. At the current state of the art, such flexibility is not exploited because the cost of assessing options and the related risks are prohibitive. Assessing optimization scenarios in a virtual environment before implementing them in real life would thus be of undisputable benefit.

Distributed Aerospace FoF Resilience
A set of misuse cases with impacts on the safety or security of the manufacturing line has been defined. The misuse cases will be demonstrated in a virtual environment, to prevent perturbation of factory operations. This will enable the training of anomaly and attack detection algorithms and the testing of their ability to classify incidents as accidental or malicious. The co-simulation will enable impact assessment across the complete manufacturing line and the selection of optimal mitigation measures, to be tested in the virtual environment before implementation in real life, thus reducing the risk related to incident response.

Technologies
In this section, we provide a short description of which technologies we are using to implement our concept and how. Note that this is work in progress, not tools already in use in Airbus factories.

•	For the IIoT cyber-physical DT, we rely on Ditto, an open source tool already described in Section 3.2.
•	Flows from sensors are managed through Node-RED [150], a Node.js-based relay application for wiring together hardware devices through the passing of messages. A hybrid multi-model approach is used for emotion recognition, based on the Facial Action Coding System (FACS) [127] and facial emotion recognition, which we expect to complement the biosignal-based emotion detection.
•	Gazebo, a Linux-based open source multi-purpose simulation tool specialized in the simulation of robotic systems, is used.
•	The Airbus CyberRange is used for cyber threats and for merging physical and cyber security data and simulation.
•	ElasticSearch, a highly scalable NoSQL database with powerful search, is used to store data from sensors, machines, IT tools, network intrusion detectors, etc.
•	Kibana is used to visualize the data by embedding its dashboards in the user interface.

Conclusions
With this work, we provide the baseline of a comprehensive FoF modeling and simulation capacity. Building on state-of-the-art DTs and CRs, we identify and address some important limitations of the technology, which essentially relate to the lack of adapted human behavior modeling techniques and the need for co-simulation. With insights into innovations proposed in the frame of the CyberFactory#1 project, we demonstrate how filling these gaps may open new avenues for industrial applications in manufacturing process optimization and FoF resilience. Such applications appear especially beneficial in light of the current pandemic crisis, which, without being of a radically new nature, reminds us how vulnerable our production systems are and how dependent on the permanent physical presence of humans on factory shop floors. The vision of a FoF remotely monitored by the use of DTs must indeed come together with a corresponding ability to ensure resilience in similar conditions.
With the approach described in Section 4 and the implementation case proposed in Section 5, we demonstrate the added value of human behavior modeling and co-simulation capacities for enhanced optimization and resilience of smart manufacturing infrastructure. As we progress towards the implementation of the above-described methodology in a broader range of use-cases, we will be able to assess the potential for generalization of the approach and define standards that can be deployed more widely. Eight industrial pilot factories belonging to the transportation, textile, consumer electronics and automation sectors will implement the results of this study in the frame of demonstration campaigns to be held in the first semester of 2022.