1. Introduction
Can manufacturing companies accurately determine or even forecast the percentage of logistics in the production of a product? What proportion of the production process is taken up by logistics activities, in terms of both production lead times and manufacturing costs? Can a clear boundary be drawn between logistics and production, and to what extent is logistics separated within manufacturing? Value creation cannot take place without the logistics processes that support production [1]. Material handling and storage tasks are necessary non-value-added (NNVA) activities (according to the lean approach), so the goal is to optimise them. However, the logistics processes that support production can be developed only by investigating and exploring the details of the system and its processes. Companies, therefore, use a variety of methods (e.g., value stream mapping, VSM [2]) to examine these phenomena.
Today, the role of digitalisation has become increasingly important for manufacturing companies [3,4]. With the help of digital technologies, companies can map their entire processes. In this way, the aforementioned precise time-to-cost ratios can also be determined in principle.
The issue of digital transformation strategy is one of the most researched topics in the manufacturing industry [3]. Nowadays, this process faces several obstacles [4].
In manufacturing, it is crucial to promote the effective integration of current technological innovations to ensure that companies have access to high-quality solutions for implementing digitalisation [3]. The fundamental task of production logistics is to support and serve the production system [1]. Planning and management of the production system have developed more than production logistics over the last decade [1]. For this reason, this research focuses on defining the role and place of production logistics in digital transformation. To this end, it is essential to analyse the literature on corporate digitalisation and digital twin technology, and a key objective is to examine the current state. In addition, special attention is paid to logistics-oriented supporting processes within production systems.
Based on this, the following research questions were formulated:
How can production logistics processes be successfully integrated into digital twins? What factors hinder digitalisation?
Is there an existing method that is essential for the digital representation of a process? How well does it fit the various processes involved in production logistics? Can its suitability be improved?
Section 2 of the study presents the literature review and, as a result, introduces a novel classification of production logistics digitalisation problems. The causes of the difficulties were explored along these problem categories. Section 3 presents a problem tree to visualise the cause-and-effect chains and synthesise the connections among the main causes. Furthermore, Section 4 presents a more sophisticated process mapping approach that can help identify the main logistics activities in the production process. In this way, identifying production logistics tasks becomes easier during the digital transformation process, and the transparency of the value-creation process increases, providing a stable foundation for developing digital models and, later, twins.
2. Literature Analysis
Knowledge of the main Industry 4.0 components and their potential is widespread [4]. Nevertheless, many manufacturing companies have been unable to implement these technologies successfully in their operations [4]. Identifying the reasons for this is an important research task that can help us better understand how to support manufacturing companies with digitalisation. In the literature search, selected articles were drawn from ResearchGate, Google Scholar, and ScienceDirect. This research focuses on production-supporting logistics processes; therefore, it is necessary to examine the emergence of production logistics processes in relevant publications. Thus, the keyword search focused on “digitalisation”, “production logistics”, and their synonyms. The research began by mapping the appearance of production logistics (PL) in digital twin technologies. First, production logistics and digital twins are briefly introduced below.
PL lies at the intersection of manufacturing and logistics. It is a complex network system [5] responsible for resource management and the planning and control of material and information flows [6]. PL is located between procurement (supply) and sales (distribution) in the operation of companies [7]. It includes all activities related to supplying production and moving products [1]. This material flow includes not only transportation, storage, and handling, but also the processing operations that occur during production, making production and logistics activities inseparable in some cases. PL’s activity-oriented processes are material handling, inventory management, packaging, and order processing [7].
2.1. Selection of Research Papers and Other References
During the research, the appearance of PL in digital twins was examined. What is a digital twin? There are several definitions of digital twins. Some authors focus on simulation, while others divide digital twins into three or three-plus-two main parts, in which digital and physical objects are already connected [8]. Fuller et al. [9] addressed the differences in interpretation, resulting in the triad of digital model (DM), digital shadow (DS), and digital twin (DT). “A digital model is described as a digital version of a pre-existing or planned physical object, to correctly define a digital model there is to be no automatic data exchange between the physical model and digital model… A digital shadow is a digital representation of an object that has a one-way flow between the physical and digital object. A change in the state of the physical object leads to a change in the digital object and not vice versus… If the data flows between an existing physical object and a digital object, and they are fully integrated in both directions, this constitutes the reference ‘Digital Twin’. A change made to the physical object automatically leads to a change in the digital object and vice versa” [9].
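The triad can be illustrated with a minimal sketch (the class names and explicit sync methods are assumptions made for illustration, not taken from the cited work), in which the only difference between model, shadow, and twin is the direction of automatic data exchange:

```python
from dataclasses import dataclass, field

@dataclass
class PhysicalObject:
    state: dict = field(default_factory=dict)

@dataclass
class DigitalObject:
    state: dict = field(default_factory=dict)

class DigitalModel:
    """No automatic data exchange: the digital state is updated manually."""
    def __init__(self, physical: PhysicalObject, digital: DigitalObject):
        self.physical, self.digital = physical, digital

class DigitalShadow(DigitalModel):
    """One-way flow: a change in the physical object propagates to the digital one."""
    def sync_from_physical(self) -> None:
        self.digital.state = dict(self.physical.state)

class DigitalTwin(DigitalShadow):
    """Two-way flow: digital changes also propagate back to the physical object."""
    def sync_to_physical(self) -> None:
        self.physical.state = dict(self.digital.state)
```

In a real system, the synchronisation would be event-driven (sensors and actuators) rather than explicit method calls; the sketch only captures the directional distinction between the three concepts.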
Google Scholar’s specialised search engine returned approximately 290,000 results for the term “digital twin”. This is the number of sources that mention the digital twin, even if only in passing. A search for “digital twin” in the title returned 29,500 results, but many of these studies appear on multiple websites. With additional filtering, by requiring the term “digital twin” in the title together with the words “manufacturing”, “production”, or other synonyms, there are more than 1900 hits (in 2024, this number was approximately 1300). The first study meeting these criteria was published in 2015. The number of articles on this topic continues to grow. If the word “logistics” is added to the title, there are a total of 37 hits, the earliest of which is from 2020.
Figure 1 shows the literature selection process (PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) flow diagram).
A large amount of information on digitalisation in the manufacturing industry is available in the literature. However, in practice, there are only a few examples of companies that perform exceptionally well digitally, let alone that use digital twin technology (it is challenging to find cases that demonstrate the full interaction between virtual and physical twins [10]).
Figure 2 shows the process of literature analysis.
The relevance of production logistics in these solutions is very low. The studied literature can be classified into three categories based on its content.
General summaries covering the advantages and application possibilities of digital twins [8,11,12].
Presentation of a framework with technologies [13]. In one example, one layer of the four-layer architecture of the digital twin is the real-time mapping layer, which provides a real-time reflection of the physical system. The authors discuss production, warehousing, and material handling entities that are continuously monitored using IoT (Internet of Things) devices and sensors to determine their status [14].
Presentation of a specific use case, programme, or application using digital twin technology [15,16]. An example of this is a resource-optimisation problem for allocating tasks among three types of material handling machines [17].
In total, 20 of the 26 selected articles present an application related to digital twins in production logistics. These articles do not mention how to achieve a digital twin of the entire manufacturing system, including manufacturing logistics processes. This suggests that the resulting digital models, shadows, and twins are operated as separate “island” digital systems, or that only one or two specific processes have been digitally represented (e.g., AGV (automated guided vehicle) resource allocation).
In most articles, digital twinning does not appear as an independent goal to be achieved, but rather as a tool that can be used to improve processes, decision-making, forecasting, and resource allocation. These articles do not mention digital twin creation and its steps, or do so only in general terms (technologies, IoT, architecture, or framework). The articles also differ in their digital twin approaches and frameworks (e.g., layer segmentation, number of layers, computing). The DT architecture used by Thürer et al. consists of three layers: the physical entity, the communication layer (Arduino Mega 2560), and the virtual space created in Arduino [18]. Hauge et al. divide the digital part of the PL-DT system architecture into two main parts: a data components group and a visualisation components group. The data components collect information from physical twins and include an application layer (Node-RED) and the Kafka data streaming bus, which ensures information flow and message exchange [19]. Guerreiro et al. use a five-level hierarchical architecture (containing six layers) that includes all technologies related to data processing: data collection, data storage, data processing engines, data querying and analytics at one level, and finally the data visualisation layer at the top [20]. The synchronised information sharing reference model used by Guo et al. is divided into three main parts: the physical part, the digital part, and the mobile gateway operation system, and distinguishes between the object level (resources), the product level (processes), and the system level (operations) [21]. Meanwhile, Zhang et al. use the system level, subsystem level, unit level, and resource level [22].
It is common to see a three-way segmentation of PL systems (transportation, production, and storage systems) [14]. Of the 26 articles processed, 10 clearly address production logistics as tasks that support production. These articles generally use the term “smart production logistics” and clearly use data from digital twins for production logistics optimisation tasks [23,24]. Seven articles (mostly the literature on digital twin-based synchronisation of systems and social production logistics systems) focus on the delimitation of logistics and production, as well as on a subsystem approach [5,14,21,24,25,26,27].
Reverse logistics and waste management are not mentioned in the articles, nor is the need to explore processes and understand how they work. For this reason, it is important to search for, generalise, and synthesise the literature on PL digital twin creation. Ref. [28] focuses on digital model creation, ref. [20] summarises technologies for digital twin creation, ref. [18] defines the intermediate steps between digital shadow and digital twin creation, ref. [29] presents an intralogistics digital twin framework, and ref. [30] summarises data models. The goal is to produce a document that outlines the knowledge necessary for digital twin creation, primarily in general terms for production logistics, which can then be supplemented with specific aspects such as scheduling, production logistics synchronisation, etc.
The following have been identified as possible research gaps according to the articles:
Examining the complexity of production logistics processes and systems;
Examination of standards, creation steps and technologies in DM, DS, and DT creation;
The blurring of simulations, digital shadows and digital twins.
The question that needs to be answered is why production logistics is so underrepresented in digitalisation solutions. The answer was sought in the previously mentioned selected Google Scholar search results, summarised in Appendix A (Table A1).
2.2. The Difficulties of Digitising Production Logistics
The classification of difficulties began with consolidating them across multiple articles and eliminating redundancies. This reduced the number of difficulties collected from 70 to 30 separable difficulties. Subsequently, the difficulties were first sorted into six groups: technological integration difficulties; synchronisation, data management, and real-time management; characteristics of production logistics systems and processes; the relationship between manufacturing and production logistics; handling disturbances and uncertainties; and methodological (“lack”) problems. Later, the first and second groups were merged, as were the third and fourth. To systematise the difficulties mentioned in the research articles examined, the following main categories were defined:
G1: technological integration challenges; synchronisation and data management, hidden real-time operation;
G2: characteristics of the production logistics system and its processes; the complex relationship between manufacturing and production logistics;
G3: handling disturbances and uncertainties;
G4: methodological problems.
Figure 3 shows the overlaps between the difficulties identified in the sources, by category: group G1 is orange, group G2 is green, group G3 is blue, and group G4 is purple. The size of each set reflects the relative frequency of its group. In the sets and their intersections, the number of references belonging to that set is indicated (where non-zero).
Figure 3 indicates that most of the examined articles (21) report multiple difficulties across different groups. Of the literature reviewed, 19 articles mention the challenge of integrating digitisation technology (G1), and 19 mention difficulties arising from the characteristics of production logistics (G2). The group of difficulties arising from uncertainties (G3) occurs least frequently (mentioned in 15 articles). Figure 3 also shows that group G3 appears 14 times alongside G2.
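The tallies behind such a diagram can be reproduced mechanically. The sketch below (with invented article IDs and tags, purely for illustration) counts how often each difficulty group appears and how often two groups co-occur in the same article:

```python
from itertools import combinations

# Each article is tagged with the difficulty groups (G1-G4) it mentions.
# This tagging is a hypothetical miniature, not the study's real data.
article_groups = {
    "A1": {"G1", "G2"},
    "A2": {"G2", "G3"},
    "A3": {"G1", "G2", "G3", "G4"},
    "A4": {"G3"},
}

def group_counts(tagging: dict) -> dict:
    """Number of articles mentioning each group (the set sizes in the Venn diagram)."""
    counts: dict = {}
    for groups in tagging.values():
        for g in groups:
            counts[g] = counts.get(g, 0) + 1
    return counts

def pairwise_overlaps(tagging: dict) -> dict:
    """Number of articles mentioning both groups of each pair (the intersections)."""
    all_groups = sorted({g for gs in tagging.values() for g in gs})
    return {
        (a, b): sum(1 for gs in tagging.values() if {a, b} <= gs)
        for a, b in combinations(all_groups, 2)
    }
```

Applied to the study's full tagging of the 26 articles, `group_counts` would reproduce the set sizes reported above (e.g., 19 for G1) and `pairwise_overlaps` the intersection counts (e.g., 14 for G2 and G3).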
2.3. Technologies of Digitalisation
It is interesting to note that the literature reviewed did not mention standards (with the sole exception of [18], which references ISO 23247 [40]), and half of the literature did not detail the specific steps, software, and hardware for creating a digital twin.
Table 1 lists the mentioned technologies and standards.
In the literature reviewed, the focus was primarily on the emergence and mention of difficulties. Difficulties can be categorised not only by their content, but also by their specificity and depth. It is important to distinguish between challenges that were discovered during the examination of a specific production logistics task (e.g., kitting) and those that are mentioned in general by authors, such as complexity and dynamics.
3. Cause-and-Effect Analysis
The aim was to establish a transparent causal relationship between the difficulties. To this end, the general difficulties were sorted separately, and graphs/causal diagrams were drawn by identifying connections and causal links as they emerged, even when they could not be clearly linked to the original graph.
Figure 4 shows the steps of the cause-and-effect analysis.
Figure 5 shows the direct and indirect connections identified in the literature. The colour of the graph vertices corresponds to the difficulty category: orange for group G1, green for group G2, blue for group G3, and purple for group G4.
The initial graph (primary difficulties) contained the four difficulties identified by Pan et al. [31] and the relationships between them: the complexity of production logistics and the increasing demand for customised products cause dynamics within and outside the system, which are responsible for subsequent uncertainties (such as jams, stops, delays, and costs). These four difficulties have been mentioned many times in the literature, but ref. [31] described their connections too. Zhao et al. [32] identified the following reasons for the complexity and size of production logistics: large work areas, difficulties in monitoring resources and statuses, long operation periods, and the need for intensive human intervention. Several reasons were given for the difficulty of resource monitoring: the complexity and challenges of all-weather, real-time mapping; frequent resource interactions; and the high mobility and randomness of resource characteristics [32]. Zhang et al. [25] attributed the complexity of production logistics to the multiple subsystems it contains.
The dynamics of production logistics cannot be eliminated due to the constantly changing production environment [22]. However, with the help of appropriate dynamics discrimination and coping and control mechanisms, the number of uncertainties they cause can be reduced and managed [31]. Zhao et al. [32] explained in detail that unstructured data makes data analysis difficult. Combined with inadequate decision support methods, this can make it difficult to perceive dynamics and can lead to inappropriate responses to them.
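Such cause-and-effect chains can be recorded as a directed graph, from which root causes and reachable effects are extracted automatically. The sketch below paraphrases the primary-difficulty relationships attributed to Pan et al. [31] as an illustrative, non-exhaustive edge list (the vertex wording is ours, not verbatim from the source):

```python
# "cause -> effect" edges of the primary-difficulty graph (illustrative).
edges = [
    ("complexity of production logistics", "internal and external dynamics"),
    ("demand for customised products", "internal and external dynamics"),
    ("internal and external dynamics", "uncertainties (jams, stops, delays, costs)"),
]

def root_causes(edges: list) -> list:
    """Vertices that appear only as causes, never as effects."""
    causes = {c for c, _ in edges}
    effects = {e for _, e in edges}
    return sorted(causes - effects)

def effects_of(edges: list, vertex: str) -> set:
    """All direct and indirect effects reachable from a vertex (depth-first search)."""
    adjacency: dict = {}
    for c, e in edges:
        adjacency.setdefault(c, set()).add(e)
    seen, stack = set(), [vertex]
    while stack:
        for nxt in adjacency.get(stack.pop(), set()):
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return seen
```

The same two helpers apply unchanged to the larger graphs built from the full literature set; only the edge list grows.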
All four difficulty groups (G1–G4) are shown in the initial graph. However, the difficulties related to the synchronisation of production and logistics and the difficulties of technological integration are also illustrated in separate figures (Figure 6 and Figure 7).
A graph dealing with production logistics synchronisation was included because some of the literature did not deal explicitly with the digital twinning of production logistics, but treated production and logistics separately. This resulted in a graph with fewer vertices (Figure 6).
A graph focusing on logistics as a manufacturing subsystem revealed difficulties in integrating production and logistics. According to Guo et al. [21], this is due to inadequate or incomplete information sharing between logistics and manufacturing, caused by treating the two systems as completely separate and by a lack of synchronisation. Technology alone cannot solve this problem, so new management innovations, principles, and methodologies need to be developed [21].
Specifically, the difficulties encountered in the digital twin creation of production logistics are mentioned only in [18,19,20,22,28,32,33]. These difficulties include real-time data collection, management, and analysis; real-time monitoring; software compatibility; real-time interactions; and technological implementation into existing systems.
It was not possible to create a graph of the technological integration difficulties (G1), because no connections between the individual difficulties were described in the processed literature.
Figure 7 shows a summary of the difficulties.
Several possible factors have been presented that may complicate the integration of production logistics activities into digital models. For successful PL process digitalisation, it is necessary to identify and resolve the hidden causes, or even root causes, of the barriers. In the literature listed in Table A1, the authors did not assign possible causes to all difficulties. For this reason, several articles were included in the causal analysis, and experiences gained during industrial work were also noted. The identified difficulties can be divided into two main cause chains. On the one hand, production logistics processes and their modelling pose challenges due to their complexity and dynamic changes; on the other hand, companies and developers are hindered in their digitalisation efforts by the lack of guidelines, standards, and knowledge of technology integration.
3.1. The Causes of the Difficulties from the Point of View of the Production Logistics: Transparency
In this subsection, the two main groups of challenges identified above are discussed. Two groups of causes clearly relate to the characteristics of logistics processes and the production system (G2, G3). Table A1 shows that the difficulties affecting production logistics processes primarily involve forecasting and system complexity.
When analysing manufacturing processes, three categories are usually distinguished, similar to general business processes: processes related to core activities, processes that supervise and control the former, and other supporting processes [41]. Similar divisions can be found in manufacturing. According to one possible classification, processes are divided into three groups: core processes (ordering and typical manufacturing processes, extralogistics); support processes, comprising manufacturing support processes (intralogistics, tool management, production planning and control) and other support processes (HR, IT, data management); and management processes (strategy development) [42]. A common feature of these classifications is that material handling and warehousing processes are classified as support or auxiliary processes. Separating logistics and manufacturing is a difficult challenge. The complexity of production also makes it difficult to model processes. PL processes mostly belong to the support processes; without them, core manufacturing processes cannot be implemented. However, logistics activities may also occur within the core manufacturing processes.
Production logistics must adapt to the dynamically changing market and industrial environment [43]. Today, inflexible, hierarchical management structures are no longer adequate in production and logistics management due to internal and external influences, disturbances, and the number of entities, their dependencies, and interconnections [44]. Speed and responsiveness are no longer enough. Fact-based and data-based situation forecasting is becoming more important, and processes need to be flexibly adapted even before events occur. This requires visibility, i.e., that the correct information is available at the right time [6]. Visibility is nothing more than “the ability to see important information regardless of its location… and the right information can be derived from accurate, current, complete, and usefully formalised data” [6]. The role of visibility in the supply chain has already become increasingly important [6]. Technologies and standards for the secure collection, storage, and transmission of data have risen to centre stage [6]. The definition of visibility is extended to include supply chain transparency and traceability [6]. Ref. [45] discusses the quantifiable relationship between supply chain transparency and efficiency. It uses a five-level framework, with traceability appearing at level 3. The article emphasises that higher levels of transparency do not necessarily lead to greater efficiency. For this reason, strategic planning that also accounts for efficiency goals is necessary before pursuing digitalisation and transparency [45]. The definition of visibility in production logistics is: “a degree of visual accessibility to accurate and timely supply and demand information that is critical to the performance of the tasks of specific participants in the production system” [6].
Within manufacturing, problems such as process complexity and transparency, data quality and management, and information management and communication belong to the problem field of transparency [42]. Problems arising from processes and difficulties with digitisation are intertwined with process transparency. In intralogistics digitalisation, transparency means that every entity relevant to the process is identified and, as a result, appears in the digital model [46]. Before technological transformation, process transformation may be necessary: processes must be made transparent before technologies are integrated.
3.2. The Main Challenges of the Digitalisation of Production Logistics Processes: Maturity
Industry 4.0 technologies are no longer considered novel, and some authors are already discussing the fifth industrial revolution [47]. However, the success rate of manufacturing companies’ efforts to transform digitally is less than 20% [4]. Ref. [48] identified five main problems related to the practical digitalisation of production logistics processes: poor use of existing systems, lack of change management, incorrectly implemented systems, lack of a process approach, and inadequate system integration (too much or too little). Even today, it is difficult to initiate this process, as the necessary support tools [4], literature, and standards are lacking. It is therefore important to demonstrate the exploitable benefits of ensuring transparency and to quantify them [4], as well as to develop solutions that facilitate the digitalisation process.
Many models have been developed to measure digital or Industry 4.0 readiness and maturity, which provide a maturity level for the overall company, usually after completing a questionnaire [49]. Digital maturity indicates a company’s current ability to effectively integrate new technologies [4,50]. These models do not provide quantitative information on digital maturity, and most do not offer recommendations for advancing beyond the current maturity level [20]. Unfortunately, companies are typically unfamiliar with their own digital maturity [50].
In addition, other maturity measurement tools can also be used, such as Digital Factory Mapping (DFM) [4]. It is based on physical value stream mapping (VSM), which provides information on material flow and bottlenecks. During value stream analysis, a diagram is created that maps all process steps involved in creating a product, from order to delivery. Both material flow and information flow are included in the diagram. For each step, it is necessary to determine which activities are value-adding (VA) and which are non-value-adding (NVA). Afterwards, the ratio of VA and NVA time in the production lead time can be determined [51].
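As a minimal illustration (the step names and durations below are invented), the VA ratio can be computed directly from the mapped steps of a value stream:

```python
# Each recorded VSM step: (name, duration in minutes, value-adding?).
# The figures are hypothetical and only illustrate the VA/NVA ratio computation.
steps = [
    ("machining", 120, True),
    ("transport to buffer", 15, False),   # logistics step -> NVA
    ("storage wait", 240, False),         # logistics step -> NVA
    ("assembly", 60, True),
]

def lead_time(steps: list) -> float:
    """Total production lead time across all mapped steps."""
    return sum(duration for _, duration, _ in steps)

def va_ratio(steps: list) -> float:
    """Share of value-adding time in the total production lead time."""
    va_time = sum(d for _, d, is_va in steps if is_va)
    return va_time / lead_time(steps)
```

In this toy stream, the lead time is 435 min, of which 180 min is value-adding, giving a VA ratio of roughly 41%; the NVA remainder is dominated by the two logistics steps, which is exactly the share the introduction asks companies to quantify.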
During DFM, after recording the VSM, digital gaps are identified in the bottleneck environment. The flow of information is analysed by assessing the digital maturity of the information source, the user, and the resources [4]. This tool no longer evaluates the entire company, but only individual cross-sections. However, similar to the previous models, it does not provide a numerical, exact indicator of digital maturity; DFM shows the manageable digital gaps.
3.3. Consolidated Relationship Between Causes and Effects
The difficulties identified in the articles in Table A1 formed complex, composite graphs. The most significant reasons for this were the complexity and dynamics of production logistics processes. To gain a deeper understanding of the problems and examine their causes, additional articles and industry experiences focused on digitalisation, digital maturity, and transparency were consulted. These factors have been summarised in Figure 8 using a cause-and-effect diagram, also known as a “problem tree”. This widely used tool can identify the possible causes and effects of core problems in system improvement [52].
Problem trees can be created for various purposes, but they are more of a tool and an aid for thinking through a problem than a final product [52]. The main goal is to create a comprehensive and understandable diagram, so it is often recommended to create simpler, coherent trees with fewer causes and effects [52]. When creating the problem tree, the aim was to illustrate the four identified cause groups at almost the same depth.
Based on the root causes [53], it would be advisable to develop a qualification model that provides exact indicators and is process-oriented. The indicators could be used to quantify the current level of digital maturity and to facilitate digital progress with appropriate technological and strategic recommendations. A process-oriented approach would ensure that complex, difficult-to-predict production logistics processes receive appropriate emphasis during digitalisation. The evaluation model would not view the company as a whole from the “outside”, nor examine only individual cross-sections, but would assess every single value-creating process from the perspective of digital maturity. The results of the evaluation model could provide accurate feedback on the entire cross-section of the value-creating process in the three most important areas (i.e., process transparency, traceability, and controllability) and could serve as an accurate basis for formulating steps to promote development.
4. Methodology
Based on the reasons outlined in the previous section, the digital transformation of companies could be encouraged by making the benefits of applying technologies measurable and by formulating clear recommendations for the necessary technological developments for different types of value-creation processes. This section shows how a process can be modelled according to transparency criteria and provides a test case for applying the modelling technique.
The basis for creating a digital twin is the creation of a digital model [28]. The digital model must represent the systems, processes, and entities that are essential to the twin’s functioning. This requires understanding production logistics processes, analysing their operation, and mapping the relationships between entities. It is also advisable to break down the creation of a digital model into two main steps.
First, it is advisable to build a static model of the system under investigation that is capable of representing the intended processes. The presented, improved process mapping technique for examining transparency plays a significant role in the creation of the system’s static model. One way to achieve this would have been to develop a completely new and problem-specific process mapping methodology. However, this approach was rejected in the initial phase of the research, as the existing standard process mapping methodologies under investigation proved effective in practice for representing any process, which cannot be guaranteed in the case of a completely new method. The static model, meanwhile, serves as a further simplifying factor, as it does not need to account for dynamic changes (G3). First, the problems classified in group G2 must be solved and handled. It can be assumed that the representation of processes and their recording in sufficient depth provides a suitable starting point for this.
This is followed by the creation of the dynamic model, which involves the selection and application of the necessary technologies (G1). This helps transform the digital static model into a digital shadow. In this phase, it is necessary to take into account the challenges and dynamic disturbances belonging to the G3 difficulty group. The time factor also comes into play at this step, due to the system’s dynamics, making it possible to measure the time associated with planned activities and, perhaps even more importantly, with “unplanned” activities (disturbances). With the dynamic model, the system’s VSM and the DFM of the bottlenecks can be generated, enabling analysis of the process and its digital maturity.
Once a dynamic model capable of operating in real time is implemented, the simulation can be referred to as a digital shadow. The recommended technologies are not discussed in detail in this article. If the system can be monitored (digital shadow) using the appropriate technologies, uncertainties and disturbances can be observed in real time (G3). At the digital twin level, it is already possible to intervene digitally in physical processes (which requires new technologies, standards, and communication channels (G1)). Thus, disturbances can even be predicted using appropriate methods. The methodological problems (G4) mentioned in the articles mainly stem from the application of digital twins in production logistics, for example, in digital twin-supported decision-making [54] and real-time resource allocation. However, solving the problems identified in the G4 group may require all three levels of digitalisation (digital model, shadow, and twin creation).
4.1. The Role of Transparency in the Digitalisation of Production Logistics
Based on the definition of production logistics visibility, it is necessary to apply a process description methodology that provides us with the right information at the right time [
6]. The first step is to identify what type of information may be needed in the system to manage the value streams in the value creation process optimally.
The starting point for this is to create digital transparency. In other words, the entities must be digitally mapped as the first step in digitalisation [
46]. This provides information about the entities in question, but tracking is not yet possible at this stage. Therefore, communication with and control of the entities is not feasible. By ensuring transparency, the first step in digital twinning, the digital model can be created [
9].
To map entities within the system, it is essential to have a detailed understanding of the processes, so technological transformation must be preceded by process discovery (process mining) and, in many cases, process transformation. In terms of transparency, four entity types have been defined, each of which can cover all entities in the production logistics system affected by digitisation. These are materials, resources, tasks, and locations [
46]. The tasks to be performed clearly define the value-creating processes carried out by various resources at specific locations, using and transforming various materials, thereby creating “value.” These four sets, together with the data generated in connection with them, can cover the entire information need necessary for value creation management. With their help, the foundations for comprehensive, in-depth digital transparency can be laid.
Transparency is also linked to an indicator structure that examines the ratio of identified entities to all entities interpreted within the value creation system (
Figure 9). This helps to quantify transparency [
46].
Figure 9 shows the four entity types: Location (L), Material (M), Task (T), and Resource (R), each as a set [
46]. Entities within the system are represented by vertices that can be connected to each other (edges) [
46]. For the digital mapping of a production logistics system, the relevant entities and their characteristic attributes must appear in the digital model. The digital transparency framework defines levels that refer to the depth of identification within each entity type (L1, L2, …, Rd). The more deeply an entity type (e.g., M4) is identified, the more detailed its mapping in the digital model.
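The four entity types and their connections can be sketched as a small typed graph, in the spirit of Figure 9. This is only an illustrative Python sketch; the class names and example entities are assumptions, not part of the cited framework.

```python
# Minimal sketch of the transparency framework: entities are typed
# vertices (L, M, T, R) connected by edges. Names are illustrative.
from dataclasses import dataclass, field

ENTITY_TYPES = ("L", "M", "T", "R")  # Location, Material, Task, Resource

@dataclass(frozen=True)
class Entity:
    etype: str  # one of ENTITY_TYPES
    name: str

@dataclass
class TransparencyModel:
    entities: set = field(default_factory=set)
    edges: set = field(default_factory=set)  # unordered entity pairs

    def add(self, etype, name):
        e = Entity(etype, name)
        self.entities.add(e)
        return e

    def connect(self, a, b):
        self.edges.add(frozenset((a, b)))

    def count_by_type(self, etype):
        return sum(1 for e in self.entities if e.etype == etype)

model = TransparencyModel()
table = model.add("L", "assembly table")
operator = model.add("R", "operator")
task = model.add("T", "assemble product")
block = model.add("M", "red building block")
for e in (table, operator, block):
    model.connect(task, e)  # the task links the other entity types

print(model.count_by_type("L"))  # → 1
```

Counting identified entities per type in such a graph is exactly what the transparency indicators described above require.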
4.2. An Improved Process Description Technique and a Simple Tool to Quantify Digital Transparency
Why is detailed process modelling important in the value creation process?
It helps to map and understand the value creation process using a standard process description language.
It quantifies the ratio of value-creating to necessary but non-value-creating activities in the process under review and specifies the number of activities related to the identification of entities.
It shows which entities need to be identified for the given value-creation process to achieve the desired level of digital transparency (what resources, materials, and locations appear in the process, which tasks are performed during it, and which of these should also be digitally mapped).
Later, the flow of events within the processes can be monitored and analysed at the digital shadow level. The events will generate the necessary interventions at the digital twin level. To do this, it is necessary to determine which interventions will be triggered by which events (positive and negative deviations from the plan [
28]).
To understand and explore a process, it is first necessary to create a structured, visual representation of it. Various process description languages and options are available for modelling (EPC, BPMN, UML, IDEF0) [
55,
56]. EPC (Event-driven Process Chain) was chosen because of its simple, expandable, modifiable toolset and its event-centricity.
It should be emphasised that any process modelling language may be suitable; there is no single correct choice. However, in BPMN 2.0, UML, and IDEF0, a process can be structured purely as a series of activities, which is a disadvantage here because the use of events is not mandatory in those languages. Events mark the start and end of an activity, which is how its duration can be measured. Later on (at the DS and DT levels), the necessary response mechanisms can be determined using the events found in the flowcharts. In addition, both BPMN and UML operate with numerous objects, which can result in the same process being described in many ways using different objects. The disadvantage of EPC, however, is that it cannot be easily converted into machine-readable form (e.g., XML).
Table 2 compares different process modelling languages.
In the EPC standard, the process consists of a chain of events (red hexagons), activities (green rounded rectangles) and logical operators, supplemented by information (blue rectangles) and responsibilities (yellow ellipses).
The further improvement of the standard EPC toolkit for transparency testing was carried out in several steps. Firstly, activities were categorised by value creation. Orange rounded rectangles indicate necessary but non-value-adding (NNVA) activities, while green rounded rectangles indicate value-adding (VA) activities. This allows the process to be visualised under the lean approach. The next step is to highlight the transparency entity types. This is where the identification steps (new NNVA activities) appear. Here, data can be generated during the process to increase transparency into value creation. These are located within the blue rectangles and provide information about the different entity types (the four types defined above) as the process progresses.
Figure 10 shows the two-step development.
The left side of
Figure 10 shows the original process, as described using the EPC standard. In the middle, the classification of the activities in relation to value creation has been highlighted. On the right side, the process on the left has been supplemented with the activities necessary to increase the digital transparency of the value creation process. It can be seen that these identification steps are also necessary but non-value-added activities (SNNVA).
In process modelling, it is important to consider the depth to which the given process should be examined. A process can be broken down into various smaller units (from sub-processes to movements) [
46]. Based on this, a process can be examined in several process diagrams of varying depth, depending on whether the activities in the process represent movements, operations, activities, or even sub-processes. This fundamentally determines the possible and/or necessary number of digital identification steps. For example, when breaking down a process into sub-processes, it is not possible to perform a deeper level of digital identification (e.g., if the digital identification of any movements is necessary in some phases in the value creation process to increase the digital transparency, then the process must be broken down into those phases and then into movements).
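The classification of activities in the improved EPC flowchart can be illustrated with a minimal sketch in which each activity carries a VA, NNVA, or ID tag. The activity names below are invented for the example; they come neither from the EPC standard nor from the case study.

```python
# Illustrative sketch of the improved EPC activity classification:
# each activity is tagged VA, NNVA, or ID (identification steps are
# themselves NNVA activities that produce entity data).
from collections import Counter

process = [
    ("pick component from KLT box", "NNVA"),
    ("scan component QR code",      "ID"),    # identifies a Material entity
    ("place component on fixture",  "NNVA"),
    ("join components",             "VA"),
    ("scan station code",           "ID"),    # identifies a Location entity
    ("place product in blue box",   "NNVA"),
]

counts = Counter(category for _, category in process)
print(counts["VA"], counts["NNVA"], counts["ID"])  # → 1 3 2
```

Tagging activities this way makes the counts behind the transparency ratios directly computable from the flowchart.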
The process modelling procedure described above can be used to make processes transparent; moreover, the technique also plays an important role in quantifying digital transparency. It is important to emphasise the role of this technique in digitalisation. The digital twin creation process should be divided into several milestones; along this long path, the application of the above-presented process modelling technique is the step preceding the creation of the digital model. In this way, a static model of a process can be produced. This technique can assist in the creation of digital models, as there is no automatic data exchange [
9] (at present) between the physical process and its digital representation. The following simple ratios can be generated with its help:
R1 = N_VA/N_A can be defined as the ratio of the number of value-added activities (N_VA) and the total number of activities (N_A) during the value creation process;
R2 = N_NNVA/N_A can be defined as the ratio of the number of necessary but non-value-added activities (N_NNVA, including the identification activities) and the total number of activities (N_A) during the value creation process;
R3 = N_ID/N_A can be defined as the ratio of the number of identification activities (N_ID) and the total number of activities (N_A) during the value creation process;
R4 = N_VA/N_NNVA can be defined as the ratio of the number of value-added activities (N_VA) and the number of necessary but non-value-added activities (N_NNVA) during the value creation process;
R5 = N_ID/N_NNVA can be defined as the ratio of the number of identification activities (N_ID) and the number of necessary but non-value-added activities (N_NNVA) during the value creation process.
These indicators use the number of activities in the process diagram. If no identification occurs during the process, the R3 and R5 indicators will return a value of 0, and the N_NNVA value will indicate only the number of necessary but non-value-adding activities in the process. Thus, the R1, R2, and R4 indicators can assist engineers in their process improvement tasks. Of the above formulas, only R4 can take on a value greater than 1. However, practice shows that the duration of VA activities is generally shorter than that of NNVA activities (as shown in VSM diagrams), although this is not necessarily reflected in the numbers of NNVA and VA activities.
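Under the definitions above, the five ratios can be sketched as a small function. The input counts are illustrative; the assumption that identification activities are counted among the necessary but non-value-added activities follows from the text.

```python
# Sketch of the R1–R5 transparency ratios computed from activity counts.
# n_nnva counts all necessary but non-value-added activities, including
# the n_id identification activities.
def transparency_ratios(n_va, n_nnva, n_id):
    assert n_id <= n_nnva, "ID activities are a subset of NNVA activities"
    n_a = n_va + n_nnva          # total number of activities
    return {
        "R1": n_va / n_a,
        "R2": n_nnva / n_a,      # R1 + R2 == 1 by construction
        "R3": n_id / n_a,
        "R4": n_va / n_nnva,     # the only ratio that can exceed 1
        "R5": n_id / n_nnva,
    }

r = transparency_ratios(n_va=4, n_nnva=16, n_id=10)
print(round(r["R1"] + r["R2"], 6))  # → 1.0
```

With no identification (n_id = 0), R3 and R5 collapse to 0, matching the behaviour described above.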
The indicators can be used to assess the current situation. The identification steps can be used to quantify the extent of change at different levels of transparency. In future research on digitalisation, it will be important to develop a set of criteria that shows the ideal value of production logistics processes in terms of identification for different industries (chemical, metal, wood, etc.). In the case of transparency indicators, it cannot be said that the goal is clearly to achieve a higher identification value (due to the company’s own goals). For this reason, it is not possible to clearly define a “good” value (higher → better) for the indicators associated with the flowchart. Achieving a higher number of identification activities can cause the problem of over-identification. In the case of over-identification, a higher-than-expected transparency indicator is created, which exceeds the company’s goals and can have negative consequences (e.g., increased throughput time due to multiple identification steps, unnecessary data generation and unused data, etc.) [
45].
The goal of the current technique is to make changes in transparency measurable at the digital model level, where there is no automatic data flow. After the digital model is created, it is necessary to select the appropriate technologies for the identification points that will feed data into the digital shadow. During identification tasks, scanning the code (usually) also generates a timestamp, allowing the elapsed time between scans to be measured. The result is a dynamic model. It is also worth examining transparency at the level of the dynamic model: based on the indicators, it is no longer the number of activities but their duration that is of significant importance. Dynamic disturbances are also digitally represented in the digital shadow, so these phenomena can be examined as well. The resulting time periods can indicate development directions for reducing the duration of NNVA and NVA activities.
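The timestamp-based duration measurement described above can be sketched as follows; the scan events and their timestamps are invented for illustration.

```python
# Minimal sketch: at the dynamic-model (digital shadow) level, each
# identification scan carries a timestamp, so activity durations can be
# derived as the elapsed time between consecutive scans.
from datetime import datetime

scans = [
    ("scan station code",     datetime(2024, 5, 6, 8, 0, 0)),
    ("scan component code",   datetime(2024, 5, 6, 8, 0, 12)),
    ("scan finished product", datetime(2024, 5, 6, 8, 1, 2)),
]

# Pair each scan with its predecessor and take the time difference.
durations = [
    (later[0], (later[1] - earlier[1]).total_seconds())
    for earlier, later in zip(scans, scans[1:])
]
print(durations)  # → [('scan component code', 12.0), ('scan finished product', 50.0)]
```

Unusually long gaps between consecutive scans are one simple way such a model can surface "unplanned" activities (disturbances).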
The last row of
Figure 11 contains all identifiable entities.
Figure 12 shows the entities identified in the given case and the relationships between them in the transparency framework. The flowchart can be used to specify the number of entities identified by entity type, thus enabling the calculation of the digital transparency indicators defined in ref. [
46] (e.g., the number of identified entities of a given entity type).
4.3. A Test-Case in Laboratory Environment and Its Results
A bottom-up approach can be used in production logistics to model the flow of materials and information. The basic/elementary unit of material and information flow is the workplace (or, in the case of storage, the storage location) [
49]. Various production logistics service activities can take place at or between these locations (material handling). In the creation of a digital model, step 0 is the mapping of processes, which helps understand how the processes work (essential for simulation) and identify the entities that appear in the process. Once the entities are known, the selection and installation of identification technologies can begin.
Figure 12.
Relationships and entities in an examined test-case.
A simple assembly process was performed in the departmental laboratory (
Figure 13). The assembly task took place on a CREFORM assembly station, where the task was to build finished products of different complexity from different components, represented as simple building blocks and stored in different coloured KLT (Kleinladungsträger) boxes (red, yellow, and green) in dedicated positions on the storage equipment of the assembly station. The assembly tasks were performed on the assembly table by one operator, and the finished product was placed in a blue box in a dedicated place next to the assembly station.
Figure 13.
The laboratory test environment, one of the finished products that needs to be produced and its BOM (Bill of Materials) list.
During the test, the finished products to be produced were first determined. This was followed by the recording of process flowcharts and their conversion in accordance with the improved EPC process modelling toolkit described above. Finally, an analysis was performed to determine how the number and proportion of different activities changed in different situations and what impact digitisation had, represented by the identification of predefined entities (locations, materials, tasks, resources) during value creation in the assembly process. The limitations of the test were as follows:
Three different finished products were built, with the different variations represented by the colours of the building blocks used in the components.
A maximum of three types of raw materials were used in a subassembly.
Assumption: the raw materials needed for assembly were available in the KLT boxes.
Only the activities of the assembly operator were included in the flowchart; the activities of other operators (e.g., activities of the logistics operators who feed the station with raw materials) were not considered.
No defective raw materials (components) occurred during the assembly process.
No waste was generated during the assembly process.
No main part category was defined in the BOM list of the finished product.
When taking components out of the KLT boxes, the assembly operator had to check whether a box was empty and, if so, move it to the flow rack (channel) under the table.
The tests involved examining assembly flowcharts for finished products of varying complexity but with the same process depth (so-called operation elements). The finished products labelled “X” have a column structure; these columns are made up of components of varying numbers and colours (X, X2, X3…). Finished products labelled “Y” represent the structure of a “house”, which is made up of three different subassemblies. These finished products consist of the same number of components (24 pcs), but the subassemblies can be built from components of the same (Y, Y1) or different colours (Yc). Finished products labelled “Z” are created by combining multiple “house” (Y) structures.
Table 3 shows some of the results for seven final products. The header row contains the different types of examined finished products. The first two rows show the number of colours and the number of building blocks used in the assembly of each product. The next rows in the table show the number of activities appearing in the flowcharts. Finally, the last five rows show the calculated (R1…R5) ratios.
The “Number of tasks” row shows the sum of the fourth and sixth rows, because all activities in the flowchart can be classified as value-added (VA) or necessary but non-value-added (SNNVA) activities in the value creation process (note that non-value-added (NVA) activities were not modelled in the investigated use-case). As mentioned earlier, the identification (ID) activities qualify as necessary but non-value-added activities; therefore, the number of SNNVA activities includes the ID activities. The “Number of NNVA” row shows the necessary but non-value-added activities that are not defined as identification activities but are necessary in the assembly process. The “Number of non-identification tasks” row shows the sum of the “Number of NNVA” and “Number of VA” rows. The “Number of identification activities (ID)” row shows the sum of the four rows below it, which represent the total number of identified entities for the task (ID_T), material (ID_M), resource (ID_R), and location (ID_L) entity types.
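The count relationships between the rows of Table 3 can be expressed as a short consistency check; the numbers used below are illustrative, not values taken from the table.

```python
# Consistency checks for the activity counts described for Table 3.
# SNNVA includes the identification activities; NNVA excludes them.
def check_counts(n_va, n_nnva, n_id_t, n_id_m, n_id_r, n_id_l):
    n_id = n_id_t + n_id_m + n_id_r + n_id_l   # all identification activities
    n_snnva = n_nnva + n_id                     # SNNVA includes ID activities
    n_tasks = n_va + n_snnva                    # every activity is VA or SNNVA
    n_non_id = n_va + n_nnva                    # non-identification tasks
    assert n_tasks == n_non_id + n_id           # the rows must add up
    return {"tasks": n_tasks, "snnva": n_snnva, "non_id": n_non_id, "id": n_id}

result = check_counts(n_va=4, n_nnva=12, n_id_t=3, n_id_m=4, n_id_r=1, n_id_l=2)
print(result)  # → {'tasks': 26, 'snnva': 22, 'non_id': 16, 'id': 10}
```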
Appendix B contains the data used to examine one of the transparency cases for product “Z”.
5. Results and Discussion
The research revealed that, unfortunately, most digitalisation developments in the manufacturing industry are currently unsuccessful, although digitalisation itself is a well-researched field. The digitalisation of production logistics activities faces a particularly large number of problems. On the one hand, there are the general obstacles to digitalisation mentioned above (lack of knowledge, measurability barriers); on the other hand, there is the difficulty of separating production logistics processes from manufacturing and modelling them. The lack of digital technologies makes it very difficult to identify some important problems, and it is also difficult to measure the main properties of the various activities. At the same time, without the digitalisation of processes, it is not possible to apply complex, advanced digital modelling solutions (e.g., digital twin technologies) to the control of value creation processes.
The presented process modelling technique can be used to quantify value-creating and necessary but non-value-creating activities in a process, and their ratio can also be specified. In addition, the necessary identification steps (activities) can be specified, making the process transparent and enabling the digital model to be mapped.
The quantifiable outputs were presented using the example of a simple assembly process realised in a laboratory environment. By assigning the identification activities to transparency entity groups, various correlation indicators can be specified.
Table 4 details these for all examined products.
In the recorded assembly process, there are positive correlations between the quantitative values listed in
Table 4. For example, there is a complete correlation between the number of colours and the number of identifiable locations. A weak positive correlation is found between the number of identifiable locations and the number of identifiable materials. In the example, regardless of the investigated final product, the number of value-added activities (VA) is the lowest, followed by the number of necessary but non-value-added activities without the identification tasks (NNVA). The number of activities responsible for identification (ID) is greater than the sum of the previous two values (defined as the number of non-identification tasks).
Identification activities play a prominent role in transparency flowcharts. In practice, each of these activities requires some concrete solution of technological identification. Due to the large number of identification activities, it is critical to examine the time required for each identification step.
In the case of all identification steps in the value creation process, a predefined identification technology needs to be used to gather the necessary data. On the one hand, the goal is to select such identification tools that minimise the operational time required and, therefore, the product throughput time. On the other hand, the time needs can also be influenced by the complexity of the gathered dataset, which can have a strong correlation with the capability of the applied identification technology. Moreover, the technologies which can be applied can also be determined by the concrete physical environment, the handled materials, the applied material handling equipment, the properties of the value creation process, etc. Therefore, there is a strong need to give the right answers to these questions, and it can be worth building complex models to investigate these trade-offs.
For these reasons, it is also important to emphasise that identification for companies involves not only selecting appropriate technologies but also clearly defining transparency goals. In most cases, a high level of transparency leads to an increase in operation time and, consequently, increased throughput time (depending on the applied identification technology). For example, in the case of using optical identification (such as simple QR codes with scanners), this additional time can be much longer than in the case of using RFID technology (such as simple passive tags with RFID gates). When setting goals, it is also necessary to decide what level of increase in throughput time is acceptable in order to achieve transparency. There may be cases where the increase in throughput time required to achieve the desired level of transparency is unacceptable to the company.
In the previously presented laboratory test environment, several experiments were designed to discover and investigate this effect. The results of the experiments can be seen in
Figure 14, which shows how the throughput time with QR code identification for assembling product Z (consisting of the most components) changes as the need for digital transparency increases. Digital transparency was represented by the variable GDTI (global digital transparency indicator), where the baseline assumption was that the entity types (L, M, T, R) are of equal relative importance, leading to equal weights (w = 1/4). The equality of weights for the above-defined entities is not necessarily the case in a realistic value creation process, because companies’ preferences can differ in these terms. For this reason, it will be important in the future to develop a multicriteria model that reflects the priorities of value-creating companies and aligns the entity types and their weightings accordingly. An AHP (Analytic Hierarchy Process)-based approach may be suitable, where the relative importance can be clearly defined and, based on this, the exact weightings can be calculated [
45].
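One plausible way to compose a GDTI with equal weights (w = 1/4) is a weighted sum of per-entity-type identification ratios. Both the per-type ratio and the counts below are assumptions for illustration, not the exact formula used in the experiment.

```python
# Hedged sketch of a global digital transparency indicator (GDTI):
# a weighted sum over the four entity types, with equal weights w = 1/4.
# Each per-type ratio is identified entities / identifiable entities.
weights = {"L": 0.25, "M": 0.25, "T": 0.25, "R": 0.25}

identified   = {"L": 2, "M": 3, "T": 4, "R": 1}  # illustrative counts
identifiable = {"L": 4, "M": 6, "T": 8, "R": 2}

gdti = sum(weights[t] * identified[t] / identifiable[t] for t in weights)
print(gdti)  # → 0.5
```

An AHP-derived weight vector would simply replace the equal weights in this sum.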
In the case of product Z, the results of indicators R1…R5 were also recorded for different GDTI values (Figure 15). According to the formulas, the sum of the R1 and R2 indicators is 100%. The R1 and R4 indicators decrease as the GDTI increases, due to the low number of VA activities. The R3 and R5 indicators are directly proportional to the increase in GDTI in the current example. The drastic (7×) increase in throughput time is due to QR code identification.
The phenomenon mentioned above is clearly evident. At this point, a question can be defined that can drive further research activities: Can an optimal level of GDTI be defined at which the increase in throughput time is minimal, but the level of digital transparency is sufficient to achieve the company’s goals, while considering the main constraints of the process, the technology, and the environment? One possible answer may lie in a complex cost model that quantifies the costs of increased digital transparency, the effectiveness gains from it, and the losses from reduced productivity. The costs of increased digital transparency include the cost of the technologies involved. Identification can be carried out using a variety of auto-ID technologies. However, both the cost of these technologies and the value of the information they provide vary across the different identification steps. For this reason, it is crucial to weight the entity types in accordance with corporate objectives.
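The trade-off raised by this question can be illustrated with a toy cost model; every function form and coefficient below is hypothetical, meant only to show how a net-cost minimum over GDTI levels could be searched for.

```python
# A toy cost model for choosing a GDTI level: it balances an assumed
# transparency benefit against identification-technology cost and a
# throughput-time penalty that grows with the number of ID steps.
def total_cost(gdti, benefit_rate=100.0, tech_cost_rate=40.0, delay_penalty=90.0):
    benefit = benefit_rate * gdti            # effectiveness gained
    tech_cost = tech_cost_rate * gdti        # cost of auto-ID technology
    delay_cost = delay_penalty * gdti ** 2   # throughput time loss
    return tech_cost + delay_cost - benefit  # net cost (lower is better)

# Scan candidate GDTI levels for the lowest net cost.
levels = [i / 10 for i in range(11)]
best = min(levels, key=total_cost)
print(best)  # → 0.3
```

With these invented coefficients the optimum sits between "no transparency" and "full transparency", which is exactly the kind of interior optimum the research question asks about.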
Figure 14.
Proportion of activities over time and progress of global digital transparency indicators (w = 0.25).
Figure 15.
R1–R5 indicators and progress of global digital transparency indicators (w = 0.25).
In the investigated example, all activities were recorded at the same depth level (operation-level) in the flowcharts. It is also an interesting research task to examine the processes at different depth levels (motion…activity…subprocess) [
46]. This may reduce the number of identification activities, but the change will also affect the process’s transparency indicators; therefore, they should be examined together. The higher the level at which a process is examined, the fewer activities will be included in the flowchart. On the other hand, if a process is described at a more detailed level (e.g., every motion is described), the result will be a much more detailed flowchart containing more activities. However, the latter does not mean that every single movement needs to be identified, although it must be considered that the number of entities belonging to the “Task” (T) entity type in the digital model (i.e., the identified activities in the flowchart) cannot be greater than the number of activities in the process diagram.
The test in a laboratory environment showed how the transparency process mapping technique can be applied in a simple workplace. In order to produce a transparent digital model at the entire shop floor level, it is necessary to record the processes taking place at and between all relevant workplaces. This is time-consuming because the greater the need for transparency in the processes, the more detailed the process recording (disaggregation) needs to be. This results in the identification of more activities and an increase in the length of the process flowcharts. However, this is probably an essential step, even the basis for creating a digital twin. For this reason, it is worth investing the appropriate time and energy in mapping the processes.
In addition, by mapping the processes, the company can also experience further benefits. With a transparent process, it is easier to find bottlenecks and waste. Processes that have already been properly mapped are easy to improve and even standardise, which helps the company’s operations.
Managerial Insights and Implications
The study identified and collected several difficulties found in the international literature. Four groups of problems were defined (G1…G4), and the causes were visualised with the cause-and-effect diagram. Based on the identified difficulties, the design of the digital twin must be approached in much greater depth.
Figure 16 shows a possible digitalisation roadmap. The guiding questions are placed in the centre of the figure, the applied digital solutions and the level of digital intelligence on the right, and the standards and methods (
Table 5) that provide assistance for each level on the left. Without a digital model, it is not possible to create a truly effective digital twin (functioning as a mirror), and this requires digitally mapping the processes that occur within the company. The entities involved in the processes will determine the entities to be represented in the digital model.
Without this, it is impossible to speak of a “true digital twin,” only a simulation, a digital shadow, or a “partial” digital twin that is unable to reproduce every event and even every process at the same level. These transparency issues can later lead to more complex decision-making and data extraction challenges during operations (e.g., in the case of product recalls).
In order to determine the most appropriate approaches to decision-making and the development of digitalisation projects, it is worth addressing several sets of questions before embarking on production logistics digitalisation efforts. The literature research has shown that a digital twin reflecting the entire system is not always necessary. In some cases, a digital model or digital shadow may be sufficient. (When designing production logistics systems and layouts, a digital model is sufficient. A digital shadow is needed when the goal is real-time monitoring and observation. According to ref. [
19], it is not always possible to implement fully digital twins.)
The first question group to be addressed is the goal of digitalisation. Which employees and which production logistics control level (operational, strategic, etc.) will work with the DT, how will they apply it, and what KPIs (key performance indicators) will need to be produced? Thus, the task will not only be to create the DT, but also to analyse and make appropriate use of the information derived from it, and to develop a support mechanism based on the DT. With a precisely defined and limited goal, it is possible to avoid a group of problems identified by ref. [
48] (“poor use of implemented systems”). To do this, it is necessary to define the user group, the method of use, and the KPIs to be produced, and to ensure that users are trained in when and how to use the DT-based application. After this, the expectations (and their importance) must be clarified. Is the goal to produce a simple digital model, a simulation, a real-data-driven DS, a partial or a full DT? Is the main task the visualisation, the monitoring, or the control of processes, and what kind of timing is expected (real-time, latency)?
The second question group concerns the physical background, because it is first necessary to know the entire physical value creation system in detail. During the examination, the production logistics system architecture specific to the given company must be explored: What is its material flow topology, what is its relationship to production, what processes does it consist of, and which objects must be defined as the main components of the entire logistics system? It is also very important to understand the maturity of logistics technology, including the level of automation in material handling and storage, the applied material handling and storage equipment, and the information technologies used in operational control.
In the next phase, it is necessary to assess the company’s current level of digital maturity. Determining the current state of digitalisation helps identify the direction of the necessary further measures: process transformation, technology integration, organisational changes, etc. At this point, it is useful to create a system model focused on the digitalisation goals. In the model, the relevant entities (L, M, T, R) involved in the processes and the relationships between them have to be defined. Moreover, the indicators to be monitored must be defined for the entities and relationships to be digitally mapped. A thorough examination will reveal the weaknesses and show where further improvements are needed.
The last question group considers the technologies essential to digitalisation. Perhaps one of the most important questions is which technologies have been integrated in the operation so far, what they are used for, and whether their full potential is being exploited. If new technologies need to be introduced (e.g., DT-specific software and hardware requirements), it is worth first looking at industrial use-cases and available standards in the field. Finally, it is important to remember that employees who use and come into contact with these technologies must be prepared (both professionally and psychologically) and receive appropriate training.
The literature research revealed different approaches to DT creation in relation to production logistics. DT creation must be preceded by the digital shadow (DS), which enables a one-way automatic information flow. Within the production logistics system, a digital shadow can be used to track the movement of materials and resources, monitor changes in materials (consumption, transformation), and track tasks by monitoring events as they occur. If the aim is for the DT to reflect the entire production logistics system, multiple combinations of DTs will likely need to be used [
14] and synchronised due to the complexity and size of the system. After identifying the production logistics processes, it is necessary to assign one DT to each process using the appropriate technologies.
However, there are processes within production logistics systems that are also influenced by the topology of the system. Production logistics encompasses the logistics activities that occur before, during, and after manufacturing. The material flows take place within or between the subsystems of the production logistics topology, and the subsystems can be distinguished by the materials flowing within them: raw materials move before production; semi-finished products, auxiliary materials, and tools move during production; finished products move after production; and waste moves throughout the system. It is therefore recommended to examine and twin four topological subsystems (
Figure 17) in addition to the digital twinning of processes. With the help of topological subsystem twinning, the entire manufacturing process becomes transparent, traceable, and controllable.
Each box defined within the value creation system model in
Figure 17 can be interpreted as a separate DT for a given subsystem (within which separate process-oriented DTs are also located, e.g., storage-DT, material handling-DT, etc.). A more complex DT encompassing the entire PL system can therefore be thought of as multiple nested DTs that cooperate and influence each other.
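The nesting described above can be illustrated with a short sketch. The class and subsystem names below are assumptions chosen for the example (the four subsystems follow the material-flow distinction made earlier); the sketch only shows the composition idea, not a real DT implementation, and the synchronisation is one-way, i.e., digital-shadow style.

```python
from dataclasses import dataclass, field

@dataclass
class ProcessDT:
    """Process-oriented twin, e.g. a storage-DT or material handling-DT."""
    name: str
    state: dict = field(default_factory=dict)

    def update(self, observation: dict):
        # One-way information flow: physical events update the digital state.
        self.state.update(observation)

@dataclass
class SubsystemDT:
    """Twin of one topological subsystem, containing nested process twins."""
    name: str
    processes: dict = field(default_factory=dict)

    def add_process(self, p: ProcessDT):
        self.processes[p.name] = p

class ProductionLogisticsDT:
    """System-level DT composed of multiple nested, cooperating twins."""
    SUBSYSTEMS = ("raw-material", "in-production", "finished-product", "waste")

    def __init__(self):
        self.subsystems = {s: SubsystemDT(s) for s in self.SUBSYSTEMS}

    def synchronise(self, subsystem: str, process: str, observation: dict):
        self.subsystems[subsystem].processes[process].update(observation)

# Hypothetical usage: a storage twin nested in the raw-material subsystem twin.
pl_dt = ProductionLogisticsDT()
pl_dt.subsystems["raw-material"].add_process(ProcessDT("storage-DT"))
pl_dt.synchronise("raw-material", "storage-DT", {"pallets_stored": 18})
print(pl_dt.subsystems["raw-material"].processes["storage-DT"].state)
```

The design choice mirrors the text: each box of the model is a twin in its own right, and the system-level twin merely coordinates and synchronises the nested ones.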
These findings highlight that developing a digital maturity model would be crucial for evaluating companies’ processes using precise indicators. As a possible first step in this direction, an improved process modelling technique was presented, based on the standard EPC process description language, which helps to understand and explore the complexity of the value-creation process and to quantify digital transparency via additional indicators. However, this needs to be supplemented with further steps, such as the creation of indicators that take corporate goals into account and guidelines detailed enough to help, or even guide, companies on the path to digitalisation.
6. Conclusions and Future Research Directions
In summary, the paper identifies several factors that hinder digitalisation and digital twin creation, which can be categorised into different groups. These challenges must be addressed jointly, yet each in its own right, during the digital transformation process. It is recommended to start digitalisation by mapping the current processes (G2) with a standard process mapping technique (the transparency flowchart). The article presents a transparency flowcharting technique that uses an enhanced EPC language, in which identification activities and defined entity types ensure transparency. This makes it possible to know where a given object (resource) was, what task it performed, and with which materials and resources it worked during the intralogistics processes. Exploring the processes is essential when creating a digital model of a company. This is followed by the selection, preparation, and integration of the appropriate identification technologies into the existing system (G1). New methodologies (G4) need to be applied when preparing the digital model. Moreover, the use of digital shadow and twin data also requires the development of new optimisation and decision-making methodologies, as well as the adaptation of existing ones. Automatic data flow first appears in the digital shadow, so the need for digital technologies increases dramatically at this level of mapping. This requires technologies (G1) and data processing methods that are capable of identifying and displaying not only the events and states (G2) of the planned operation, but also uncertainties and disturbances (G3) in the digital shadow.
The process mapping technique presented in this paper can be defined as the first step in creating a digital model, but it cannot solve the many difficulties identified on its own. Thus, several directions were defined for the next steps of the research:
The first research direction fits best with the current paper. It involves further examination of digital transparency indicators. Using the improved process mapping methodology, it is possible to calculate the global digital transparency indicator for each flowchart even before the introduction of identification technologies. This means that the adequacy of the indicator for the expected transparency can be specified and verified as early as the design phase.
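One plausible formalisation of such a design-time calculation is sketched below. The definition used here is an assumption for illustration (the paper's exact indicator formula is not reproduced): an activity counts as fully transparent when every entity type (L, M, T, R) it involves is covered by a planned identification technology, and the global indicator is the fraction of fully transparent activities in the flowchart.

```python
def global_digital_transparency(activities):
    """Assumed definition: the fraction of flowchart activities whose
    involved entity types are all covered by a planned identification
    technology. Each activity is a dict with two sets of entity types:
    'entities' (involved) and 'identified' (covered)."""
    if not activities:
        return 0.0
    fully_tracked = sum(
        1 for a in activities if a["entities"] <= a["identified"]
    )
    return fully_tracked / len(activities)

# Three activities from a hypothetical EPC flowchart, evaluated at design
# time, i.e. before any identification technology is physically installed.
flowchart = [
    {"entities": {"M", "R"}, "identified": {"M", "R"}},  # fully tracked
    {"entities": {"M", "T"}, "identified": {"M"}},       # partially tracked
    {"entities": {"L", "M", "R"}, "identified": set()},  # not tracked
]
print(global_digital_transparency(flowchart))
```

Because the calculation needs only the planned flowchart, alternative identification setups can be compared against the expected transparency before any hardware is purchased.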
The second research step is to develop a digital maturity model (which, unlike the current ones, provides companies with accurate indicators of their current maturity level). The goal is to create a company-oriented, bottom-up model that starts the digital maturity assessment at the shop floor level. This would also use indicators related to digital transparency. Such a model could complement the currently known digital maturity models for the entire company, in which HR, IT, and existing technologies play a major role in questionnaire-based assessments.
The third research direction focuses on technological examination. The implementation of digital shadows and digital twins requires technological integration involving IoT devices and other sensors. The literature review has shown that companies may encounter difficulties not only with integration but also with proper use. For this reason, it is very important to examine the existing studies on these technologies in detail and supplement them with industrial experience in order to produce guidelines that can make technology integration more effective. In addition to recommending technologies, these guidelines should also cover the preparatory and integration steps, as well as possible difficulties and sources of risk.
The fourth direction is no longer directly related to the creation of DS and DT, but to their use. After creation, exploiting DS and DT data is crucial, which requires the development of various decision support and optimisation methods. In addition to the methods, it is also worth preparing new KPIs and a monitoring dashboard to facilitate managerial use.
As a continuation of the research, it is also necessary to examine more complex and realistic production logistics processes, as well as further process types. This will enable the development of a standard framework to guide companies in their digitalisation.
Standards are essential in a process as complex as DT creation. In addition to standards, descriptions of industrial applications (use-cases) and implementation steps would also be very useful. The primary further task is to gain a detailed understanding of the standards, collect examples of industrial applications, and analyse them to determine whether a step-by-step instruction sequence can be established that ensures successful DT creation within a company. Promoting the current standards and clarifying the relationships and connections between them are particularly important tasks. A future research question is why these standards have not appeared in the processed literature: why is there a gap between industrial use (where standards are presumably applied) and scientific research articles? Another question to be examined is in which areas standards help companies, and which other standards need to be created.