Implementation of Cognitive Digital Twins in Connected and Agile Supply Networks – An Operational Model

Supply chain agility and resilience are key factors for the success of manufacturing companies in their attempt to respond to dynamic changes. The circular economy, the need for optimized material flows, ad-hoc responses and personalization are some of the trends that require supply chains to become "cognitive", i.e. able to predict trends and flexible enough to adapt to dynamic environments, ensuring optimized operational performance. Digital Twins (DTs) are a promising technology, and much work has been done at the factory level. In this paper, the concept of Cognitive Digital Twins (CDTs) and how they can be deployed in connected and agile supply chains is elaborated. The need for CDTs in the supply chain is described, together with the main CDT enablers and how they can be deployed under an operational model in agile networks. Emphasis is placed on the modelling, cognition and governance aspects, as well as on how a supply chain can be configured as a network of connected CDTs. Finally, a methodology for deploying the developed model into an example of a circular supply chain is proposed.


Introduction
Manufacturing is becoming global and distributed, making it necessary to operate complex networks of suppliers and logistics chains. In parallel, it is also becoming "local" by supporting collaborations with local manufacturers to address flexibility, resilience, personalization, and the reduction of environmental impact [1]. In this context, supply chains are moving from traditional hierarchical structures to "Value Webs", characterized by complex, connected and interdependent relationships, where knowledge flows, learning, and collaboration are almost as important as the more familiar product flows, controls and coordination [2]. This is recognized as a key strategic goal of the Industry 4.0 paradigm, expressed as "the transformation of existing manufacturing supply chain networks into more digitally connected and agile ones" [3].
The importance of connectivity and agility is justified by various factors. First, shortened product lifecycles together with growing product complexity create pressure to realize effective and efficient product development [4]. The trend is to transform existing mass production models into smart products (by collecting information streams on their use and applying Artificial Intelligence (AI) services) [5]. The remainder of this paper elaborates the development process of such a CDT model, focusing, beyond that, on its application to empower connected agile supply networks. Section 2 outlines the approach and the justification of CDTs in agile supply chains, followed by a description of the main CDT enablers and the operational framework. The cognition process, together with the deployment methodology, is explained and discussed in more detail in Section 3, while Section 4 presents an example of configuration and operation in a circular supply chain. Finally, Section 5 summarises the study's conclusions and future work.

Methodological and operational framework for CDTs
An investigation of the present usage and adoption of DTs by manufacturing companies shows that current approaches to the implementation of DTs in manufacturing lack a solid methodological framework. At the same time, there is no systematic approach in the scientific literature that applies the concepts of CDTs in agile supply chain networks. Though some early adopters have demonstrated applications of DTs for manufacturing, current implementation limitations are:
- Inadequate understanding of the connotations of CDT-driven manufacturing.
- Focus mostly on the operation and maintenance of production lines and not on the value chain.
- Lack of application frameworks and reference models for CDTs [20].
The methodological approach followed in this paper for the definition of such a framework is based on the following three steps:
1. Understanding the concept and functionality of a CDT.
2. Positioning the CDT in the manufacturing context, and more particularly in reference Industry 4.0 architectures, using some reference scenarios.
3. Defining the basic enablers for a CDT and explaining how they apply in the reference scenarios.

Understanding CDTs
Although there are many definitions of the DT, it is important to recognize and distinguish its functionality and offerings depending on the information integration between the DT and the physical entity [13]:
- A Digital Model is a visualization of the object without any data flow from or to the object. In a Digital Model, simulations can be run without information exchange with the physical object.
- A Digital Shadow is a Digital Model with some information flow from the object to the model. In this case, a change in the status of the physical object updates the status of the digital model.
- A Digital Twin has a bilateral integration with the physical object, both getting information from the object and controlling it.
A DT must combine an advanced data-acquisition system, information technology and network technologies to create a virtual, digital replica of a production system with various capabilities in the manufacturing industry. Consequently, it must:
- Create and manage a virtual model of the system.
- Virtualize the behaviour of the physical entity.
- Have different capabilities allowing it to simulate, predict and optimize.
- Connect virtual (model) entities with physical ones, updating itself in response to known changes to the entity's state, condition or context.
An advanced functionality of the DT is the cognition capability (CDT), i.e. a DT able to "reason" about the data, the states of the object and its behaviour [21].
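The three levels of integration above can be sketched as a small class hierarchy, where each level adds one direction of data flow. This is an illustrative sketch only; all class and method names are assumptions, not part of any standard API.

```python
class DigitalModel:
    """Static virtual replica: no automatic data flow with the physical object."""
    def __init__(self, state: dict):
        self.state = dict(state)          # snapshot entered manually

    def simulate(self, horizon: int) -> list:
        # Simulations run on the stored snapshot only.
        return [dict(self.state) for _ in range(horizon)]


class DigitalShadow(DigitalModel):
    """One-way flow: physical-object changes update the model automatically."""
    def on_sensor_update(self, measurement: dict) -> None:
        self.state.update(measurement)    # physical -> digital only


class DigitalTwin(DigitalShadow):
    """Bilateral integration: the twin can also control the physical object."""
    def __init__(self, state: dict, actuator):
        super().__init__(state)
        self.actuator = actuator          # callback into the physical asset

    def control(self, command: dict) -> None:
        self.actuator(command)            # digital -> physical
```

The inheritance chain mirrors the definitions: each level keeps the previous level's capability and adds one more integration direction.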

CDTs positioning in the supply chain
DTs have an important role in the manufacturing and supply chain context. In agile supply chain networks, connectivity among all factory assets and supply chain entities is of utmost importance. The RAMI 4.0 architecture [10] emphasizes how to connect the different hierarchies of the factory, from an individual asset up to connected processes, the enterprise level and the supply chain (connected enterprises). Such "assets" are orchestrated along the different product lifecycle phases (from development up to product use and maintenance). Similarly, the Industrial Internet of Things Connectivity Framework is based on hybrid interconnection in all product lifecycle phases, on both a vertical (from field device up to service system) and a horizontal (supply chain) level [22]. The common denominator of both approaches is the need to model all production assets in a way that:
- There is an up-to-date representation of the operational behaviour, fed by streamed data from sensing devices or other supporting measurement systems.
- All possible interactions with other production assets are known, thus creating a virtual representation of the factory/supply chain as a dynamic system.
- By combining the behaviour and the networking of such assets, we can monitor, simulate and optimize their performance.
Large enterprises have already adopted the concept of DTs in different models and applications [23], [24]. Furthermore, different DT scenarios have been introduced in the literature [21], [25]. The present attempt to elaborate on the CDT enablers and the operational model refers to the functionalities and particularities of a circular supply chain case, where stakeholders in rural areas interact in an attempt to optimize waste flows.

Enablers for CDTs in Agile supply chains
As introduced in the previous section, a production entity (workstation, production line, factory) or a supply chain is considered as a network of interconnected DTs, each one having different capabilities. To realize this concept, each manufacturing context (factory, supply chain) can be modelled accordingly, with different levels of cognition, communication and monitoring needs. Such configurability can be achieved by allowing the end user to configure and monitor the physical assets as DTs and associate them with the enablers presented in Figure 1.

Profile
The CDT profile captures its status, behavioural model, specifications and associated models. It is the basic model of the CDT, which can be realized using a Knowledge Graph-centric framework [26] addressing: i) the underlying processes of which the DT is part; ii) ontologies supporting the semantics and syntax; iii) APIs with analytics and optimization services that support the whole cognition process (see section 3).
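A minimal way to picture such a Knowledge Graph-centric profile is as a set of subject-predicate-object triples linking the CDT to its processes, ontology terms and service APIs. The sketch below is hypothetical; the entity names, predicates and API paths are illustrative assumptions, not taken from [26].

```python
# Hypothetical CDT profile for an asset "press_01", expressed as triples.
profile = {
    ("press_01", "instance_of",  "HydraulicPress"),       # ontology term
    ("press_01", "part_of",      "stamping_process"),     # underlying process
    ("press_01", "has_api",      "/analytics/anomaly"),   # cognition service
    ("press_01", "has_api",      "/optimize/schedule"),
    ("press_01", "max_pressure", "250bar"),               # specification
}

def query(triples, subject=None, predicate=None):
    """Return the objects matching an optional subject/predicate filter."""
    return sorted(o for s, p, o in triples
                  if (subject is None or s == subject)
                  and (predicate is None or p == predicate))
```

For example, `query(profile, predicate="has_api")` lists the analytics and optimization endpoints the cognition process can call for this CDT.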

Cognition
Cognition is the ability to understand context, reason on top of existing information, and predict and optimize behaviour. This ability comprises all analytics and cognition services, and each DT can have different cognition capabilities (from simple simulations up to complex predictive models, anomaly detectors, and optimization). The basic blocks of cognition are the following:
- Reasoning services, which are responsible for understanding a context and generating new knowledge based on existing domain knowledge, past data, data streams from the physical entity, and insights obtained from simulation and prediction services or optimization models and services.
- Simulation and prediction services, which propagate a DT's behaviour into the future to understand which future scenarios are most likely to happen and provide insight into whether an undesired outcome is about to take place. Prediction services can be used to learn from past anomalies and predict whether newly observed values are anomalous.
- Optimization models and services, which allow finding valid and, in many cases, near-optimal solutions, given a set of constraints.
To realize the cognition process, a CDT must embed the necessary analytics models and provide APIs so that external analytics services can consume the insights. According to an IDC report [27], cognition-enabled supply chains can be key to responding to priorities such as eliminating waste, improving supply chain traceability and predictability, and improving service performance. This means the ability to understand, detect early and predict the impact of the different types of behaviour observed [24]:
- The Predictable Desired (PD), which is the desired (normal) behaviour of the system.
- The Predictable Undesired (PU), which covers problems that are expected but whose cause we cannot understand.
- The Unpredictable Desired (UD), which is the "surprise": beneficial/good behaviour of the system that we did not expect to happen.
- The Unpredictable Undesired (UU), which is a serious matter and relates to behaviour that we did not expect to happen and whose cause we do not know.
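The four classes form a simple two-axis grid (predictable vs. not, desired vs. not), which can be sketched directly as a small classification function:

```python
def classify_behaviour(predicted: bool, desired: bool) -> str:
    """Map an observed behaviour onto the four classes of [24]."""
    if predicted and desired:
        return "PD"   # Predictable Desired: normal operation
    if predicted and not desired:
        return "PU"   # Predictable Undesired: expected problem
    if not predicted and desired:
        return "UD"   # Unpredictable Desired: a welcome surprise
    return "UU"       # Unpredictable Undesired: serious, unexplained fault
```

In practice, the "predicted" flag would come from the CDT's prediction services and the "desired" flag from comparing the observation against the behavioural model; here both are given directly for illustration.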
In section 3, we analyse the cognition process in more detail.

Lifecycle
This refers to the ability to monitor and control the DT's behaviour through its entire lifecycle. As the DT illustrates the behaviour of a physical entity, the DT lifecycle can be considered from the Product Lifecycle view [28], [24], consisting of the following steps:
- Creation, where the profile of a DT is defined. Different configurations are assessed and, using data streams, the asset's behaviour model is identified. This is achieved through experimentation/simulation and by removing unpredicted and undesirable behaviours.
- Production, where interfaces are created between the DT and the physical asset, and the two are ready to communicate with each other.
- Operation, where information is exchanged between the physical asset and the DT. Here, any change in the physical asset (e.g., parts replacement) updates the DT model and, conversely, information coming from the DT can predict anomalies or changes in the physical asset's behaviour. In principle, the operation is a continuous alignment of the physical and virtual operation.
- Disposal or recycling, where, at the final stage, the full operation of a DT is used as a source of learning to avoid future mistakes in the next modelled production systems.
During the CDT lifecycle, the cognition models play a very important role, since they model, calibrate and improve the CDT's behaviour. From the early phases of the CDT design (Creation and Production) we need to experiment with different scenarios and conclude on a particular desired behavioural model. The desired behaviour also needs to be calibrated with the first operational data, which validate and refine the initial assumptions. At the operational level, we deal with information about the actual operation and possible deviations that reflect either failures or areas of improvement.
Finally, at the disposal level, we have a complete set of knowledge about the CDT's operation (associations, rules, behavioural aspects), which can be replicated in the design and operation of other CDTs, thus minimizing design time and cost. Recyclability of CDTs, in this case, is about reusing the knowledge/lessons learnt about the CDT in a particular context and applying it to a similar case. First, we have static CDTs: for example, in a production line inside a factory, a machine that has a long lifetime and is part of a fixed process (whose behaviour can of course be upgraded, but which as an entity remains in place for a long time). In such a case, the production line is a network of such fixed CDTs.
We might also have the case of an ad-hoc and more dynamic CDT. This is a case where, in existing production or supply chain networks, we need on specific occasions to incorporate a physical asset into an operation. Let us take the example of collaborative logistics, where a truck has a problem on its route and must find an alternative solution. If we consider the logistics operation as a distributed network of collaborative CDTs (Logistics Objects), the truck can identify a collaborator (i.e. another truck within close geographical range) that is going in the same direction and fulfils all the delivery needs (conditions, capacity) and the governance/collaboration rules. In this case, the CDTs of those trucks can find each other through "social networks" and "tweet" each other to negotiate a collaboration [29].
Another example is the case of customized products. For a particular feature requested by a customer, the manufacturer might need to assess different external asset providers (e.g. 3D printing) that can deliver the part to be integrated into the product. In such a case, a DT of the candidate producer can be built in line with the CDT enablers so as to assess its performance. The lifetime of this CDT can be:
- Very short (created and operated only in the planning phase, just to examine whether to collaborate with a producer).
- Time-limited, in the case of a collaboration within a specific timeframe (e.g. only for a particular, seasonal batch of products).
- Longer-lasting, in the case of a more permanent collaboration with the producer.
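The lifecycle steps described above (Creation, Production, Operation, Disposal) can be sketched as a simple state machine. The state names follow the Product Lifecycle view of [28]; the transition check itself is an illustrative assumption.

```python
# Ordered lifecycle states of a CDT, following the Product Lifecycle view.
LIFECYCLE = ["creation", "production", "operation", "disposal"]

class CDTLifecycle:
    """Tracks a CDT's position in its lifecycle and enforces forward-only moves."""
    def __init__(self):
        self.state = "creation"

    def advance(self) -> str:
        i = LIFECYCLE.index(self.state)
        if i == len(LIFECYCLE) - 1:
            # Disposal is terminal: only the accumulated knowledge is reused.
            raise ValueError("disposal is the final state")
        self.state = LIFECYCLE[i + 1]
        return self.state
```

A static CDT would stay in "operation" for most of its life, while an ad-hoc CDT (as in the 3D-printing example) might traverse all four states within a single planning cycle.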

Computation
Computation power is recognized by the Industrial Internet as a core element that enables the equipment's capabilities for failure prediction, energy analysis optimization, predictive maintenance and other applications [22]. In a CDT context, this translates into the CDT's ability to perform computations (considering the load of the functions and calculations it needs to perform) either in the cloud and/or at the edge, depending on the CDT's deployment and operation.

Communication
Communication is the ability of a CDT to communicate with its physical asset and with other CDTs that are part of the network it belongs to (e.g. Workstation A with Workstation B, both belonging to the same production line). Communication services are supported through a messaging layer, which is responsible for interoperation and information exchange. Through the communication services we can model and operate networks of CDTs, for instance:
- A production line: a network of interconnected machine CDTs.
- A factory: a network of interconnected operational entities that illustrate the main manufacturing operations in the entire value chain.
- A supply chain network: a network of interconnected CDTs, each one representing an entity participating in the whole network (factories, suppliers, customers, warehouses, etc.).
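The messaging layer can be pictured as a publish/subscribe broker shared by the CDTs of one network. The sketch below is a toy illustration; the topic naming scheme and broker interface are assumptions (a real deployment would use an industrial message broker).

```python
from collections import defaultdict

class MessageBus:
    """Toy publish/subscribe broker interconnecting the CDTs of one network."""
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self.subscribers[topic].append(handler)

    def publish(self, topic, message):
        for handler in self.subscribers[topic]:
            handler(message)

# Workstation B's CDT reacts to status events from Workstation A's CDT.
bus = MessageBus()
received = []
bus.subscribe("line1/workstation_a/status", received.append)
bus.publish("line1/workstation_a/status", {"state": "blocked", "queue": 12})
```

The same topic hierarchy extends naturally from a production line (machine topics) up to a supply chain (factory, supplier and warehouse topics).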

Visualizations
This refers to the ability to monitor the performance and lifecycle of a DT (using dashboards and VR, MR and XR technologies). Visualization services collect the information processed by the CDT. Through an API, such information can be visualized either in dashboards or in XR tools.

Trustworthiness
This is a combination of all technologies, models and applicable regulations (e.g. GDPR) to ensure that data are generated, transmitted, and received among CDTs under a holistic trustworthy framework [30]. This includes data security, traceability, transparency, confidentiality, integrity and access control policies.

Governance
According to the Institute of Governance in Canada [31], there are three main questions to start understanding and structuring a governance framework:
- Who has a voice in decision making?
- How are decisions made?
- Who is accountable?
Transposed to the CDT concept, such questions are very important, since CDTs are linked with autonomous behaviour capabilities (cognition). Therefore, aspects of how information is processed and decisions are taken, from a business, operational and ethical point of view, must be agreed upon and incorporated in the design and implementation of CDTs in different supply chain contexts. For example, in smart cities, interesting work on governance has been done in modelling the city of Cambridge as a DT [32]. In connected supply chains, we have CDTs of assets that belong to different owners (as entities), and we need to ensure accountability together with the right decision-making procedures, norms and rules, so as to allow synchronized decision making, depending on the case that occurs.
In the real (physical) world, information is shared and processed, and decisions are taken, considering certain business/operational rules or policies and in compliance with the applicable regulatory framework. In the same manner, in the virtual world, the DT needs to be aware of the rules that reflect the priorities, policies and other terms of collaboration under which it operates. Such rules can be:
- Data sovereignty and governance: introduced by the Industrial Data Space [33], this refers to the rules of data management (ownership, terms of conditions and use).
- Prioritization of criteria in a decision-making (optimization) context: imagine a 3D-printing-as-a-service company that gets an order from a "big customer". In the decision on the delivery date (by combining other orders and rescheduling its production plan), if there is a strict Service Level Agreement (SLA) of 24h delivery for this customer, this changes the priorities and gives a higher level of importance to this particular order.
- Legislation/regulation: let us assume the case of materials logistics using drones or other autonomous vehicles (according to McKinsey [34], semi-autonomous delivery is expected to become a trend in the near term, at the beginning of the 2020-2030 decade). In such a case, there are specific regulations (with regard to where to fly, etc.), which must be modelled and enforced in the operation of the CDTs. Such rules are very important, and special attention has to be given to avoiding liability issues.
- Reputation and past-experience knowledge: considering again the case of a contract manufacturer (3D printing), past experience creates knowledge and some predicted rules of collaboration behaviour. If the manufacturer's supplier has not proved trustworthy in meeting the agreed terms (e.g. delivery dates), then the 3D printing company faces risks in delivering the product according to the SLA.
Reputation mechanisms and records from previous transactions can form those rules to be considered in the decision-making process.
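The prioritization rule from the 3D-printing example can be sketched as a small scheduling policy: orders covered by a strict SLA outrank others, and ties are broken by due date. The field names and the 24h threshold are illustrative assumptions.

```python
def order_priority(order: dict) -> tuple:
    """Sort key: lower tuples sort first; strict SLAs outrank plain due dates."""
    has_strict_sla = order.get("sla_hours") is not None and order["sla_hours"] <= 24
    return (0 if has_strict_sla else 1, order["due_in_hours"])

# Hypothetical order book of the 3D-printing-as-a-service company.
orders = [
    {"id": "A", "due_in_hours": 48, "sla_hours": None},
    {"id": "B", "due_in_hours": 30, "sla_hours": 24},   # "big customer" SLA
    {"id": "C", "due_in_hours": 12, "sla_hours": None},
]
schedule = sorted(orders, key=order_priority)
```

Here order B jumps the queue despite being due later than C, reflecting how a governance rule (the SLA) reshapes the CDT's optimization priorities.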

Liability and compliance with legislation/regulation and ethics
DTs are virtual entities, and a question raised is where liability falls, given the necessary information sharing and processing using AI [35]. DTs are bound to the operation of the physical world and, in some cases, various issues of liability arise. This is well recognized in the case of autonomous vehicles, with the famous social dilemma on where an autonomous car should move to avoid the death of the driver or a passenger [36]. In manufacturing and the supply chain, this is not always the case and depends on the context and nature of the CDT's operation. In the modelling phase of the supply chain, the role of each CDT must be defined, along with the information to be processed and the decision-making autonomy. Then, liability will be defined in accordance with the terms of collaboration and the applicable regulatory framework.

The need for a code of fair operation
The need for such a code depends on the complexity and nature of the operations the CDT is to perform and on the degree of autonomous behaviour of the actor. In the context of agile supply chains, an example can be found in materials logistics. In this case, a truck receives (through its CDT) a request for an ad-hoc delivery. The CDT can then proceed to a decision about the actions to be taken, or propose a plan to the person monitoring the CDT's behaviour (the driver or operational support, whoever is the decision-maker), and the latter confirms the action [29]. In a fully autonomous mode, a CDT might decide to abandon a particular request, with consequences for the future collaboration with the customer. Companies need to think of a code of fair operation in line with their organizational values and principles and model it as rules to be enforced in all phases of CDT operation.

CDT Governance approach
Our approach to CDT governance is based on a deployment approach similar to that used to model DTs in the city of Cambridge [32]. The main aspects to consider are:
- Define the context of operation: in which factory/supply chain hierarchical structure does the CDT belong?
- Identify relevant stakeholders, CDTs and needs (physical asset, other involved assets/CDTs): include appropriate external stakeholders where needed (e.g. customers who give input on the design aspects of a product manufactured by a particular production line CDT).
- Define interactions among stakeholders/CDTs: clearly identify inputs/outputs, information sources and needs.
- Define roles per stakeholder/CDT: who does what? What are the decision-making powers and authorizations? Is a particular CDT able to autonomously make decisions and initiate actions?
- Agree on liability and legislation issues: agree on the ownership of, and liability for, any malfunction of the CDT. Define any applicable legislation rule that affects the operation of the CDT.
- Define rules of collaboration, such as: who owns the data? Which are the norms/rules of collaboration, information exchange/processing and decision making? Which are the applicable information authorizations (access control policies, etc.)?
- Define the required AI models: agree on the valuable insights that can be provided by forecasting future outcomes based on past data, or by anomalies learnt from past data. Explanations should be served along with those insights, to understand the models' rationale behind a forecast or detected anomaly.
- Procedures/workflows: given an information input and the knowledge acquired (cognition), what is the process to be followed? How can cognition improve this process?
The Cognition Process: a reference operational framework for CDTs

The model
The above enablers function together in an integrated concept where, for each CDT, we can monitor the flow of information from collection to understanding and behaviour adaptation, as indicated in the figure below. The main steps are:
Step #1: Configuration phase/Modelling: here, we model a factory, or any hierarchical structure of a factory/supply chain, as a network of interconnected CDTs, along with the interactions and information to be exchanged.
Step #2: Collect data streams about the behaviour of the CDT: each CDT collects information about its behaviour from sensors, systems, other CDTs or external sources (standards, databases, etc.).

Applicable Enablers:
- Communication: the communication capabilities of the CDT, which define the sources of the data streams (either from the physical assets and/or from other CDTs that represent different information sources).
- Computation: refers to the basic calculations and transformations that happen either in the cloud and/or at the edge.
- Trustworthiness: refers to the applicable security/privacy/trust services and policies that apply in the process of collecting information from the physical asset/other CDTs.
- Visualization: data stream visualization.
- Lifecycle: monitor the status of the CDT (modelled in step #1).
- Governance: apply governance policies.
Step #3: Understand Context: once data is collected, it is mapped against the CDT's behavioural model. Through cognition services, the CDT understands potential trends or anomalies, or creates new knowledge (in the form of new rules, associations, etc.) that is not yet known.

Applicable Enablers:
- Cognition: analytics based on existing behaviour models, or data-driven knowledge extraction that updates the existing model.
- Computation: refers to whether the cognition services will run in the cloud and/or at the edge.
- Trustworthiness: refers to the applicable security/privacy/trust services and policies that apply when processing information from the physical asset/other CDTs.
- Visualization: knowledge visualizations.
- Lifecycle: monitor the status of the CDT (modelled in step #1).
- Governance: apply governance policies.
Step #4: Simulation and Prediction (S&P): once an incident, trend or anomaly is identified (in the understanding and knowledge-generation phase), S&P allows simulating the behaviour of the CDT in the future and predicting potential failures. Simulation is performed based on root-cause analysis and using the existing behaviour model (propagating the behaviour of the system into the future using the new data).

Applicable Enablers:
- Cognition: simulation and prediction services propagating the system's behaviour, with the new knowledge, into the near future and identifying potential anomalies.
- Computation: refers to whether the cognition services will run in the cloud and/or at the edge.
- Trustworthiness: refers to the applicable security/privacy/trust services and policies that apply when processing information from the physical asset/other CDTs.
- Visualization: simulation and prediction visualizations.
- Lifecycle: monitor the status of the CDT (modelled in step #1).
- Governance: apply governance policies.
Step #5: Decisions (optimization): after simulating and predicting the CDT's behaviour, robust optimization services offer suggestions for improvement. The optimization services propose a new state of the CDT's behaviour, which has to be validated using the simulation and prediction services. This feedback loop considers the new behaviour inputs of the CDT, simulates and predicts its behaviour in the system and assesses the performance (is the problem solved? Is the trend fixed?). If the solution is not validated, then the optimization services have to run again and the feedback process continues.

Applicable Enablers:
- Cognition: robust optimization services to identify new behaviour parameters; simulation and prediction services propagating the new (proposed) system behaviour into the near future and identifying potential anomalies.
- Computation: refers to whether the cognition services will run in the cloud and/or at the edge.
- Trustworthiness: refers to the applicable security/privacy/trust services and policies that apply when processing information from the physical asset/other CDTs.
- Visualization: behaviour visualization.
- Lifecycle: monitor the status of the CDT (modelled in step #1).
- Governance: apply governance policies.
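The optimize-then-validate feedback loop of Step #5 can be sketched as follows. All functions here are stand-ins: `optimize` represents the optimization service, `simulate_ok` the simulation/prediction validation, and the iteration budget is an assumption to keep the loop bounded.

```python
def decide(current_state, optimize, simulate_ok, max_iterations=10):
    """Return the first optimizer proposal that the simulator validates."""
    state = current_state
    for _ in range(max_iterations):
        proposal = optimize(state)      # optimization proposes a new behaviour
        if simulate_ok(proposal):       # is the problem solved / trend fixed?
            return proposal
        state = proposal                # feed the rejected proposal back
    raise RuntimeError("no validated solution within the iteration budget")
```

For instance, with a toy optimizer that nudges a state parameter upward and a simulator that accepts states at or above a target, the loop terminates at the first validated proposal.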
Step #6: Actuation: once the optimized solution is validated, the actuation services create the necessary messages to the physical asset in order to alter its behaviour accordingly.
The usage of CDTs in connected factories and supply chains is very important. Through CDTs, a manufacturing company can virtualize its assets and better monitor their performance both at factory and inter-factory level. It can also improve production planning and predictive maintenance [15], monitor virtual production lines by connecting all involved stakeholders [16] and optimize packaging, materials and logistics [17]. Similar to the concept of a factory as a network of interconnected CDTs, a vision for agile and connected supply chains is illustrated in Figure 3 and includes the following:
- A dynamic, living system of "cognitive digital twins" representing all assets, operations and actors involved: factory, logistics service provider (LSP), trucks, warehouses, etc. Each entity participating in the supply chain can be modelled as a CDT. Deploying the enablers described above, each CDT can share only the necessary information with other CDTs and agree on the governance aspects and rules applicable in all communications and decision-making processes.
- Interconnected CDTs at intra- and inter-factory (supply chain) level. This implies the definition of the information/events to be shared under a common security/privacy and data integrity framework.
- Different levels of cognition. Depending on the level of autonomy (defined in the governance framework for each CDT), different cognition capabilities will apply, varying from basic understanding to autonomous decision making and actuation.
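The supply chain vision above, a network of CDTs where each twin shares only the agreed subset of its state, can be sketched as a directed graph plus a governance filter. The entities, links and field names are illustrative assumptions.

```python
# Directed information-sharing links between hypothetical supply chain CDTs.
network = {
    "factory":   ["lsp", "warehouse"],
    "lsp":       ["truck_1"],
    "truck_1":   ["warehouse"],
    "warehouse": [],
}

# Governance: which state fields each CDT is allowed to expose to neighbours.
shared_fields = {
    "factory":   {"dispatch_time"},
    "lsp":       {"route", "eta"},
    "truck_1":   {"eta", "temperature"},
    "warehouse": set(),
}

def share(sender: str, state: dict) -> dict:
    """Filter a CDT's state down to the fields it is allowed to share."""
    return {k: v for k, v in state.items() if k in shared_fields[sender]}
```

So a truck CDT can publish its ETA and cargo temperature to the warehouse CDT while keeping, say, fuel level private, enforcing the governance rules at the network edge.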

Deployment methodology
The deployment methodology follows the principles of the CDT lifecycle described in section 2.3.3.

Scope definition
This is the first phase, where we define the case for monitoring and improvement of the agile supply chain. We need to analyse the challenges, problems and areas of improvement we expect to address with the CDT model. The basic questions to answer are:
- What is the overall operational flow of my supply chain?
- Why is it critical to monitor the supply chain? We need to justify our focus with KPIs and other quantitative/qualitative metrics.
- Define the core stakeholders: who are the ones that contribute most to the challenge/problem defined above? Here we need to define our system boundaries (either focusing on a specific supplier tier or selecting the most important entities across multiple tiers).
- Define and select the process on which to focus: depending on the challenge, we need to identify the critical process(es), which will be the focus for modelling and operation.

System Modelling
Once we identify the problem and the case, we need to understand how the collaborative supply chain processes work, where CDTs are needed, and the different capabilities/needs in relation to the CDT enablers described in section 2.3. After that, we will be able to configure the cognition operational model (presented in Figure 2).
More particularly, the things to address are:
- Operational/process modelling: create the process workflows and map the stakeholders, roles, inputs and outputs.
- Define the needs for CDTs: depending on the challenge and the workflows, we need to understand which asset, process or even entity in the supply chain has to be modelled as a CDT. We might end up with a 1-to-1 relationship between CDT and entity, or with a group of interconnected CDTs.
- Elaborate on the CDT enablers: for each of the CDTs identified, we need to understand how the enablers apply. This is a time-consuming task, since all actors have to agree on the information to be monitored, governance issues, cognition levels of autonomy per CDT and other parameters.

- Understand and model information needs: The focus is on the information to be exchanged/collected, which will be further used in the analytics/cognition models.
- Deploy and train the necessary cognition/analytics models: Different strategies are required to train different types of models. To ensure the quality of the service exposing them, special attention must be paid to monitoring concept drift and the models' performance over time.
- Deploy the necessary optimization algorithms for decision making.

In this phase we might arrive at some improvements in the scope and prioritization of the processes/actors.
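The concept-drift point above can be illustrated with a minimal monitoring sketch (the class name, window size and threshold rule are illustrative assumptions, not part of the proposed model): the recent prediction error of a deployed model is compared against the error level observed at training time.

```python
from collections import deque
from statistics import mean, stdev

class DriftMonitor:
    """Flags possible concept drift when the recent prediction error
    drifts away from the error level observed during training."""

    def __init__(self, baseline_errors, window=50, k=3.0):
        self.mu = mean(baseline_errors)       # baseline error level
        self.sigma = stdev(baseline_errors)   # baseline error spread
        self.window = deque(maxlen=window)    # most recent errors
        self.k = k                            # sensitivity threshold

    def update(self, error):
        """Record a new absolute prediction error; return True if the
        recent window suggests the model should be retrained."""
        self.window.append(error)
        if len(self.window) < self.window.maxlen:
            return False  # not enough evidence yet
        return mean(self.window) > self.mu + self.k * self.sigma
```

A real deployment would pair such a signal with the retraining strategy agreed per model in the configuration step.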

Design and Operation
After the modelling phase, we move to the design and operation which is the actual implementation of the CDTs operation. Here, we create all inter-connectivity services (data collectors, data exchange services, integration with existing stakeholder's backend systems) and necessary cognition/ optimization services and visualizations. Initial deployment is done at the different stakeholders and a first operational trial is done for testing.
The trial phase can be run on selected CDTs and a specific scenario. The main goal of this phase is to refine and fix the models and enablers (configured in the modelling phase) and the overall CDT performance.

Rollout
In the rollout we extend the operation to the remaining CDTs and scenarios in the operational model (defined in the scope definition). It is a continuous and scalable process in which we can extend the scope of the implementation, thus redefining and re-implementing the deployment model.

Deploying the model into a circular supply chain: the case of a circular connected ecosystem
The realization of the above operational framework can be illustrated with an example of a circular connected supply chain, where different actors interact to collect, forward and process organic waste from farms and motorhomes (RVs) to the recycling processing factory. Our focus will be on the configuration aspects (modelling, communication, governance) and the cognition aspects. The remaining enablers (visualizations, computation, trustworthiness, and lifecycle) depend on the case context, IT/infrastructure maturity, level of trust and other factors, which need to be analysed per case.
We will start with the overall description of the case (section 4.1); then we will present potential scenarios of dynamic behaviour of the supply chain (section 4.2); finally, we describe how the operational model works for those scenarios (section 4.3) in line with the steps described in Figure 2.

Overall case description
The case is about circular rural areas, where we connect all potential organic waste providers with a dynamic logistics system and the factory, which processes the waste and returns it back to the rural community. In such a model we can connect different rural areas through an ecosystem aiming at reducing CO2 emissions and minimizing waste in the environment. The request for waste collection can happen in the following ways:

Collection from farmers
Farmers monitor information about their feedstock. Organic waste, for example, is modelled as a CDT with the following parameters: a) type of waste; b) quality characteristics; c) quantity. Some of those parameters can be monitored with sensors and others manually. A request for collection by the farmer will be sent to the system. Such a request will be forwarded to the factory and the LSPs (responsible collection partners) with whom the farmer or factory has a contract.
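A minimal sketch of how such an organic-waste CDT and its collection request could look (the field names and the subscriber list are illustrative assumptions, not the paper's implementation):

```python
from dataclasses import dataclass, field

@dataclass
class OrganicWasteCDT:
    """Digital representation of a farmer's organic waste."""
    waste_type: str          # a) type of waste, e.g. "crop residue"
    quality: str             # b) quality characteristics, e.g. "grade A"
    quantity_kg: float       # c) quantity, sensed or entered manually
    subscribers: list = field(default_factory=list)  # contracted LSPs / factory

    def request_collection(self):
        """Forward a collection request to every contracted partner."""
        request = {"type": self.waste_type,
                   "quality": self.quality,
                   "quantity_kg": self.quantity_kg}
        return [(partner, request) for partner in self.subscribers]
```

The subscriber list stands in for the contractual relationships mentioned above: only partners with a contract receive the request.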

Collection from motorhomes (RVs)
If the waste tank of an RV is almost full (this can be detected either by an embedded sensor or by the driver entering this information through a mobile app), the RV CDT propagates the information about its location, feedstock type/quality characteristics and quantity, and requests a suggestion for deposit or collection.
Local LSPs or bio-processing factories are informed about the availability of waste to be collected.

Deposit feedstock waste in the bin
Following the above collection cases, the feedstock provider (farmer or RV) might be advised to deposit the waste in a bin close to its location.
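The bin suggestion could be sketched as a simple nearest-feasible-bin lookup (the dictionary keys and the Euclidean distance are illustrative simplifications of a real geo-routing service):

```python
from math import hypot

def suggest_bin(location, quantity_kg, waste_type, bins):
    """Return the nearest bin that accepts this waste type and still has
    enough free capacity; None if no bin qualifies.
    Each bin is a dict with keys 'pos', 'accepts', 'free_kg'."""
    candidates = [b for b in bins
                  if waste_type in b["accepts"] and b["free_kg"] >= quantity_kg]
    if not candidates:
        return None
    # Straight-line distance as a proxy for travel effort.
    return min(candidates,
               key=lambda b: hypot(b["pos"][0] - location[0],
                                   b["pos"][1] - location[1]))
```

When no bin qualifies, the request would instead be escalated to an LSP truck, as in the collection scenarios above.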

Scenario #2: Optimized logistics operations
LSPs/feedstock collection actors need to organize their resources and operations in a more flexible and predictive manner. This means the following:

- Predictions of feedstock availability based on experience and historical data: In this case, the LSP can better organize the collection points. These can be, for example, deposit bins or mobile processing units where possible, and/or daily positioning of the network of trucks in specific locations to respond more quickly to a particular need.
- Optimal planning (or re-planning) to satisfy the different requests: (Re-)planning can take the following forms: a) routing (finding the optimal route) or b) finding the truck that (based on its location and existing route) can fulfil the particular order. Further to this, optimization will deal with the problem "Where do I find the nearest actor to deposit my waste, or someone to collect it?".
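The second planning form, finding the truck that can best absorb a new order, can be sketched as a minimal detour-cost comparison (straight-line distances and the truck attributes are illustrative simplifications of a real vehicle-routing engine):

```python
from math import dist

def assign_truck(pickup, trucks):
    """Pick the truck whose detour to reach `pickup` is cheapest.
    Each truck is a dict with keys 'id', 'pos' (current location) and
    'next_stop' (next point on its planned route)."""
    def detour(t):
        # Extra distance of pos -> pickup -> next_stop
        # compared to the planned pos -> next_stop leg.
        return (dist(t["pos"], pickup) + dist(pickup, t["next_stop"])
                - dist(t["pos"], t["next_stop"]))
    return min(trucks, key=detour)
```

In the operational model this comparison would run across the truck CDTs, which can also negotiate the order among themselves.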

Scenario #3: Waste incoming handling from the factory
The factory is aware in real time of requests from feedstock providers about quantities to be received, and can therefore plan the incoming materials handling process better and create an optimal schedule for processing.
Improving the scheduling of incoming materials will enable cost-effective storage and better quality of the by-products to be processed (depending on their characteristics: e.g. some by-products need to be processed immediately while others can wait a few days).
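The perishability-driven prioritization mentioned above could be sketched as a simple earliest-due-date ordering (the `shelf_life_days` attribute is an illustrative assumption standing in for the by-product characteristics):

```python
def schedule_incoming(batches):
    """Earliest-due-date ordering of incoming by-product batches:
    perishable batches (short shelf life) are processed first.
    Each batch is a dict with keys 'id' and 'shelf_life_days'."""
    return sorted(batches, key=lambda b: b["shelf_life_days"])
```

A production scheduler would of course add capacity and storage constraints on top of this ordering.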

Stakeholders, roles and applicable CDTs
Following the above context and scenarios, we have the following stakeholders and their roles in the supply chain:

CDT Liability, Collaboration and Governance
For each of the CDTs there are different possible configurations with regard to governance, cognition and the remaining enablers. This depends on the level of control of the CDT by its owner. The table below illustrates the main CDT interactions in terms of information to be exchanged, liability, cognition and autonomy levels. Every actor in the supply chain is modelled along with its assets as a CDT (see Table 3), and the whole value chain is modelled as a network of all stakeholders' CDTs. Each stakeholder has different levels of interaction with the others, along with the necessary information to be exchanged (e.g. a farmer can send information about waste availability, and the receivers are both the LSP and the factory).
A CDT is also always aware of its status ("where am I", "how much waste quantity do I have", "waste quality characteristics", etc.). Such information is the result of data streams coming from sensors or existing systems (real-time information, or manually inserted using a mobile application) communicated from the "physical waste" to its CDT representation.
The CDT broadcasts its public information to the rest of the CDTs in the network in a secure way. For example, a motorhome publishes specific information about its status (location, waste type, etc.) and searches for the nearest and most relevant (for the waste type) bin for deposit.
The CDT constantly communicates with its fellow CDTs and, in a collaborative way, tries to collect the waste as effectively as possible.
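The broadcast of the governed "public" subset of a CDT's state could be sketched as a minimal publish/subscribe hub (class and field names are illustrative assumptions; a real deployment would add the security and transport layers):

```python
class CDTNetwork:
    """Minimal publish/subscribe hub: each CDT broadcasts only the
    fields it has marked public, as agreed in the governance model."""

    def __init__(self):
        self.subscribers = []

    def subscribe(self, callback):
        """Register a fellow CDT (or service) to receive broadcasts."""
        self.subscribers.append(callback)

    def broadcast(self, cdt_state, public_fields):
        """Share only the governed subset of the CDT's state."""
        message = {k: cdt_state[k] for k in public_fields}
        for cb in self.subscribers:
            cb(message)
        return message
```

Which fields count as public is exactly what the actors agree on in the configuration step of the operational model.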

Understand Context (Step #3 in the operational model)
Data broadcast by the CDT (see 4.4.1) reflects the current state of its physical counterpart in all modelled aspects. Virtual mapping procedures can be used to map the data to a knowledge graph, considering the semantic meaning of the data and known relationships. Reasoning can then be used to create new deductive knowledge based on the knowledge encoded in the graph. Anomaly and concept drift detection models can be applied on incoming streams to detect irregular context and decide whether forecasts can be trusted or existing models should be updated.
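The deductive-reasoning step over a knowledge graph can be illustrated with a tiny forward-chaining sketch over location triples (the `located_in` relation and its transitivity rule are illustrative assumptions; production systems would use an RDF/OWL reasoner):

```python
def infer_located_in(triples):
    """Forward chaining over (subject, 'located_in', object) triples:
    located_in is treated as transitive, so new facts are deduced
    until a fixed point is reached."""
    facts = set(triples)
    changed = True
    while changed:
        changed = False
        for (a, r1, b) in list(facts):
            for (c, r2, d) in list(facts):
                if r1 == r2 == "located_in" and b == c:
                    new = (a, "located_in", d)
                    if new not in facts:
                        facts.add(new)   # new deductive knowledge
                        changed = True
    return facts
```

In a probabilistic setting, each deduced fact would additionally carry a confidence score, as discussed below for trustworthiness.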
To ensure trustworthiness, it is important to report how new deductive knowledge was acquired. If probabilistic soft logic is used for link prediction, probabilities should be reported along with the relationships. For anomaly and concept drift detection models, it is desirable to report the reasons why certain data is considered anomalous, or how strong the concept drift is, which may provide additional insights when applying heuristics to automate decisions or to guide users' decision-making.
Visualizations play an important role. By conveying the acquired information in a simple and understandable way, they allow users to monitor the status of the CDTs and their evolution, and provide grounds for decision-making regarding simulation and prediction models based on the mirrored data.

Simulation and Prediction (step #4 in the operational model)
Simulation and prediction models mostly provide insights into normal operating conditions. Data-based models can be used to learn from past anomalous situations and recognize new ones, while also explaining the reasons for doing so. Once an anomalous situation is identified, simulations can be run to predict the expected outcomes. Such insights can provide high value to end users, assisting them in decision-making.
Visualizations are of utmost importance, since they can convey much information regarding future scenarios simply and understandably, providing different insights based on the end user's role. They should provide information regarding typical operational scenarios, how anomalous situations differ, and eventually highlight the top reasons driving those forecasts. Through these visualizations, the user can understand the magnitude of the changes or anomalies observed and the expected outcomes, and weigh the reasons considered by the models to decide whether they should be trusted or additional considerations are required.

Decisions (optimization) (step #5 of the operational model)
In each of the scenarios we have different decisions, which correspond to suggestions by the CDTs:

- Farmers get a proposal about the optimal way to deposit the waste: either finding the nearest available bin/mobile processing unit, or sending a request to an LSP because a truck is nearby and available to pick up the waste.
- The nearest truck CDT gets a request about waste to be collected. Given its existing route/collection plan, the CDT processes the request, and the decisions can be: the CDT updates its plan with a new optimized delivery plan, or the CDT communicates with other truck CDTs and "negotiates" the collection with them.
- The RV gets a recommendation of the optimal way to deposit the waste: a nearby bin CDT or a truck CDT.
- The factory CDT gets optimal recommendations for production scheduling and material handling, based on the real-time information and forecasts about incoming materials.
- In a more strategic view, the LSP can better organize the collection points. These can be, for example, deposit bins or mobile processing units where possible, and/or positioning the network of trucks in specific locations to respond more quickly to a particular need. Based on the supply chain behaviour of how waste is collected, how often and from which location, the LSP can assess different scenarios and find an optimal network that can satisfy ad-hoc requests with minimum response time and cost.

Actuation (step #6 in the operational model)
Actuation depends on the level of autonomy defined in the governance model (in the configuration step). We can have the following:

- Manual actuation by the user (driver, farmer, etc.), which means no CDT actuation.
- Semi-manual actuation: the CDT gets approval from the end user about what to do, and then performs the action.
- Automatic actuation: this applies to decisions that the CDT is able to take and act upon on its own.
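The three autonomy levels described for actuation could be dispatched as in the following minimal sketch (function, level and status names are illustrative assumptions):

```python
def actuate(decision, autonomy_level, approve):
    """Dispatch a CDT decision according to the governance-defined
    autonomy level; `approve` is a callback asking the end user."""
    if autonomy_level == "manual":
        return "suggested_only"          # user acts, CDT does nothing
    if autonomy_level == "semi":
        # CDT executes only after explicit end-user approval.
        return "executed" if approve(decision) else "rejected"
    if autonomy_level == "auto":
        return "executed"                # CDT acts on its own
    raise ValueError(f"unknown autonomy level: {autonomy_level}")
```

The level itself is fixed per CDT in the configuration step, so the same decision logic can serve stakeholders with very different appetites for automation.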

Conclusions
The proposed operational framework addresses the various enablers and is flexible to different configurations depending on the supply chain context, particularities, and needs for improvement. Some examples are: aligned predictive maintenance/production scheduling at the supply chain level; demand forecasts and synchronized planning [37]; merging deliveries/on-the-fly collaborations in response to ad-hoc events/requests; and collaborative risk management.
Furthermore, it addresses the need for resilience: how to respond to different cases and situations, given the COVID outbreak and other disruptive factors in the supply chain. By modelling the supply chain as a network of CDTs and knowing its behaviour, we can assess different set-ups, alternative approaches to a sudden event, new emerging technologies (such as drones and automated vehicles in logistics) and new models (for example, localized manufacturing, an extended network of distributors, etc.).
In all cases, CDTs offer the ability to simulate different scenarios, understand failures and trends, predict impact, and assess different optimization scenarios. To do this, supply chain actors need to collaborate and agree on a very detailed configuration scheme addressing not only the cognition process, but also all the enablers that support the information exchange, processing and actuation. This is an exercise that takes time and requires trust and transparency among stakeholders. IT alignment is a challenge, but the most important one is change management: the ability to re-engineer both intra- and inter-organization processes involving people and systems, reinforcing a collaborative culture, supporting decentralized decision-making and a mindset towards adaptation and improvement.
Future work on this topic will focus on measuring the effectiveness of the model deployment in a supply chain context with quantitative KPIs (e.g. configuration times, performance issues and levels of improvement). In parallel, using qualitative analysis, we will try to assess "softer" aspects such as people's satisfaction and improvement, as well as increased trust and collaboration among the different stakeholders.