Article

Scalable Compositional Digital Twin-Based Monitoring System for Production Management: Design and Development in an Experimental Open-Pit Mine

by Nabil El Bazi 1,2,*, Oussama Laayati 2, Nouhaila Darkaoui 2,3, Adila El Maghraoui 2, Nasr Guennouni 2, Ahmed Chebak 2 and Mustapha Mabrouki 1

1 Laboratory of Industrial Engineering (LGIIS), Faculty of Science and Techniques (FST), University Sultan Moulay Slimane (USMS), Beni Mellal 23000, Morocco
2 Green Tech Institute (GTI), Mohammed VI Polytechnic University (UM6P), Benguerir 43150, Morocco
3 School of Information Sciences (ESI), Mohammed V University (UM5), Rabat 10100, Morocco
* Author to whom correspondence should be addressed.
Designs 2024, 8(3), 40; https://doi.org/10.3390/designs8030040
Submission received: 4 March 2024 / Revised: 3 April 2024 / Accepted: 8 April 2024 / Published: 7 May 2024
(This article belongs to the Special Issue Mixture of Human and Machine Intelligence in Digital Manufacturing)

Abstract

While digital twins (DTs) have recently gained prominence as a viable option for creating reliable asset representations, many existing frameworks and architectures in the literature involve the integration of different technologies and paradigms, including the Internet of Things (IoTs), data modeling, and machine learning (ML). This complexity requires the orchestration of these different technologies, often resulting in subsystems and composition frameworks that are difficult to seamlessly align. In this paper, we present a scalable compositional framework designed for the development of a DT-based production management system (PMS) with advanced production monitoring capabilities. The approach used to design the compositional framework follows the Factory Design and Improvement (FDI) methodology. Furthermore, the validation of our proposed framework is illustrated through a case study conducted at a phosphate screening station in the mining industry.

1. Introduction

In the last few years, the idea of the DT has evolved into a transformative agent across different industries. Physical objects, processes, and systems can now be represented virtually with the help of digital twins (DTs). Such a virtual model enables live tracking, simulation, and improvement of system operations [1]. A DT makes use of historical usage analysis, which is an important way to attain complete knowledge of the behavior of an IoTs device over its lifetime [2]. This technology provides an immense boost to the operation and maintenance of such systems, especially in demanding environments such as mining [3]. The mining industry is renowned for being extremely tough; it needs all equipment to be highly available and reliable to ensure safety as well as to maximize productivity. Traditional monitoring and preventive maintenance practices can be applied, but they provide delayed responses that result in unexpected downtime and reduced efficiency. DT technology, by contrast, builds virtual models for real-world monitoring and simulation. It offers a new prospect for productive and efficient maintenance practices, allowing mining companies to sustain equipment production and decrease downtime.
In the competitive global market, deploying DT technology is pivotal for enhancing industrial efficiency. As a key component in the Industry 4.0 roadmap, DTs bridge physical and virtual realms, offering dynamic models that accurately reflect real-world entities. Within the context of the Fourth Industrial Revolution (Industry 4.0) [4], a scalable compositional DT framework emerges as a systematic solution for connecting the physical and virtual worlds [5].
This paper is dedicated to the elaboration of a scalable compositional framework, with a specific emphasis on its adoption in a real mining plant as a tool for production management improvement. As a systemic approach, the framework incorporates a series of solutions intended to handle the complexity of modern industrial operations and to streamline them toward smooth productivity and flexibility. Among other capabilities, this framework enables data entities to communicate with each other, thus spreading knowledge and linking information, creating a more intelligent, contextually aware DT. The framework specifies a modular and integrated environment in which users employ DTs according to their needs and extend them over time.
The main contributions of this paper are as follows:
  • The design of an architecture for a production management system tailored to the mining operations of the experimental pilot of our research, the experimental open-pit mine.
  • The development and implementation of a scalable compositional framework for a DT, facilitating an efficient PMS.
This paper is organized as follows: Section 2 presents the relevant background and related works on digital twin technology, monitoring systems, and production management systems. Section 3 contains the research methods and materials, including the data modeling process, the FDI-based methods used to conduct the site survey and extract the value chain of the industrial mining site, and the design of the database architecture. Section 4 describes the modular and scalable compositional monitoring framework built on the digital twin infrastructure. Section 5 presents the experimental data, the results obtained, and their analysis and interpretation. Section 6 concludes our study by highlighting the potential of the presented project and outlines possible avenues for further research.

2. Background and Related Works

Within this section, we provide a comprehensive analysis of the key components of Industry 4.0 and of automated machine systems built with the most recent technologies and techniques, such as DTs, monitoring systems, and PMSs. These elements keep today's industrial machines operating and are the most important constituents in developing more efficient, precise, and adaptable production technologies. Furthermore, we address the foundational principles of these components, examine their practicality, and critique recent research as well as practices in the field. By disentangling and connecting these points, the aim is a clear and well-researched understanding of the multidimensional and changing scenario of these issues, their relationships, and the science that guides operational decisions in modern industry.

2.1. Digital Twin

2.1.1. Definitions and Associated Attributes

Digital twins (DTs), which originated as an aerospace concept, provide virtual versions of objects or systems. While originally used for aircraft analysis, their usage has spread to multiple areas. A widely accepted NASA definition treats the DT as a simulation based on advanced models and real-time data that can accurately mirror the original system. In simpler terms, a DT is akin to one of a pair of twins, one tangible and one digital [2,5]. Crucially, DTs encompass three key elements: a physical entity, a virtual counterpart, and a network of paths for real-time synchronization. A DT can be presented in several ways, but the simplest explanation is that it provides scalable and secure connections of relevant data from the physical to the virtual world. Figure 1 illustrates the differences between a digital model, a digital shadow, and a digital twin, which represent three different integration levels. Generally, they are all associated with data exchange between the physical part and its digital counterpart (model, shadow, or twin) [6].
The characteristics of DTs include real-time reflection: the physical and digital dimensions coexist, making synchronous real-time data on the current states available in one place. A cycle of fusion and divergence between the real and virtual worlds is enabled by the continuous integration of current and historical data. A DT also has the potential to be self-updating, i.e., it dynamically updates its data so that the virtual part evolves by comparison with the real part [7].
To depict the essential properties required for a DT's incorporation, we have created a MoSCoW diagram, represented in Figure 2, showing their significance and impact. In the "Must-have" category, we place essential components such as "Data Flow/Updates", the lifeline that keeps the DT operating in real time and thereby gives it its relevance and accuracy. "Real-time Synchronous" operation, in turn, updates the DT with the immediate data and changing conditions of the physical counterpart [8], meeting quick-response requirements. "Connectivity" is fundamental, providing seamless operation and data exchange with the various surrounding systems, and "Simulatability" is required to simulate the behavior of the physical asset with the desired level of accuracy. "Active Interaction Applicability" is a crucial contributor in the "Should Have" category, enhancing adaptability, while "Data Fusion" increases the level of insight and "Interpretability" helps generate actionable data. In the "Could Have" realm, aspects such as "Self-improvement", "autonomous" decision-making, and envisioned roles for "Artificial Intelligence" provide further capabilities, though they are not mandatory. Notably, nothing is listed in the "Do not have" category, meaning that the DT should ideally address all of these aspects to obtain the best result in different industrial situations.

2.1.2. Current State of Digital Twin Research

In the current state of DT research, worldwide efforts thrive, with numerous frameworks and architectures emerging. In the manufacturing context, the DT serves as a virtual representation mirroring the real-time operation of the physical manufacturing asset, encompassing machines, production lines, the shop floor, products, and workers. This digital counterpart empowers real-time monitoring and the prediction of future behavior, performance, and maintenance needs [1,5].
Notably, three primary application scenarios for DT exist: supervisory, involving real-time status provision for decision-making; interactive, where the DT autonomously adjusts parameters upon disruptions; and predictive, where the DT forecasts the asset's future state for corrective action [9].
Within manufacturing, the DT supports multiple tasks:
  • Equipment health management: DT enhances system and worker reliability, availability, and safety through seamless monitoring and informed maintenance decisions [10]. For example, it estimates the remaining useful life (RUL) of equipment components, enabling intelligent design and timely monitoring for predictive maintenance [11].
  • Production control and optimization: Dynamic manufacturing environments require continuous monitoring and optimization [12,13]. DTs use real-time data to optimize throughput by adjusting controllable parameters [14]. They also react in real time to disturbances on the shop floor [15].
  • Production scheduling: Traditional static production scheduling methods struggle with process uncertainty. DTs dynamically elaborate or verify schedules during disruptions. They even communicate with robots for optimal task scheduling.
The DT for manufacturing usually consists of a physical element, a virtual element, and a real-time information exchange between them, assisted by the IoTs, data collection and storage tools, big data analytics, and ML [15]. Making the privacy and security of data a top priority remains necessary. DT research has experienced rapid progress, especially in manufacturing, but delineating the research areas remains difficult because there is no unified definition [16]. To promote a shared knowledge base and address implementation problems, further study is still required, even though progress has been made in the realms of equipment health management, production control, optimization [17], and scheduling [18,19,20]. Nevertheless, there is still a noticeable gap, mostly defined by the lack of hands-on examples of real-life implementation. The technology remains largely theoretical: it has not been deployed in a clearly documented manner, and its practical cases are limited [5,16].
These areas of research acknowledge the application of advanced analytics to disrupt conventional data analysis [21], such as equipment maintenance, simulations for the more in-depth examination of processes and factories, and the utilization of emerging technologies like virtual and augmented reality to improve data accessibility [22].
Although progress has been achieved, there are still gaps in research, e.g., the lack of practical implementation cases and of suitable solutions for small- and medium-sized businesses. The adaptability and effectiveness of DT frameworks across different industries and scenarios also require further study [23,24]. In the following sections, we examine the establishment of a DT-oriented integrated monitoring system that can enhance the ability of a smaller enterprise to take charge of its operations.

2.2. Monitoring System

In the realm of industry and manufacturing, monitoring systems have emerged as pivotal drivers of efficiency, safety, and productivity [25]. These systems, founded on the real-time tracking and analysis of processes, equipment, and operations, provide invaluable insights for decision-makers. Their multifaceted benefits encompass heightened operational control, predictive maintenance, quality assurance, and resource optimization, finding applications across diverse industrial domains [3]. However, challenges persist, including the integration of disparate data sources, ensuring data security, and scalability concerns.
It is the synergy between monitoring systems and data technologies that makes this line of research notable, creating a paradigm shift for industry. The fusion of Electrical and Computer Engineering (ECE) and AI brings not only better control and predictive capabilities but also the capacity to simulate and optimize processes in a risk-free digital environment, which is needed to conduct tests on "what if" scenarios [26]. It amplifies the benefits of monitoring systems and has a great impact on industrial applications, from observing mechanical processes on factory floors to monitoring large industrial activities [24].
Nevertheless, translating the already-developed integrated systems from theory to practice remains a problem. Despite significant development, there is a lack of practical real-time prototypes that combine the specific systems and prove their applicability in different industries [24,25]. This poses a challenging demand, to be addressed in future research, for practical, flexible, and cost-effective solutions that can work effectively in different industrial situations [27]. In addition, the problems of data security, integration complexity, and scalability [28] must be taken into account to ensure the smooth implementation of integrated monitoring systems empowered by DT technology [22,29].

2.3. Production Management System

PMSs have become recognized as a highly significant element in today's industry and manufacturing sector. They carry out an essential function in performing the rather complicated tasks of production [4]. They are the main elements relied upon for planning, scheduling, and dispatching production, ensuring that resources are used effectively and that productivity is maximized [27]. Their operating formula is a data-driven approach, automation, and the analysis of results. They support production improvements that are more effective, rapid, and successful in today's turbulent markets. The main advantages of these systems are resource efficiency, cost savings, product excellence, and timely deliveries. Therefore, they are critical for modern manufacturing [7].
Nevertheless, the inclusion of such systems in the current networked world also poses serious challenges with regard to system integration, cybersecurity, and adaptation to different production zones [29]. A vital research viewpoint concerns the persistent gap between theoretical development and actual realization at industrial scale [30]. Significant advances have been made, yet complete real-life cases demonstrating the seamless integration of these tools at the industrial level remain scarce.
The research gap reveals a clear need: the development of solutions that are practical, flexible, and affordable and that can be adjusted to comply with the specific requirements of any industry. In the quest to bridge this gap and fully harness the capabilities of PMSs in manufacturing, a promising way forward is the application of the DT concept.
Implementing a DT framework within PMSs is a turning point at which the two can meet. The integration of PMS dynamic capabilities with the real-time tracking, data integration, and analytics of DTs opens a new horizon. DTs, known for features such as real-time reflection, interaction, convergence, and self-evolution, can equip PMSs with exceptional levels of operational accuracy and adaptability [31].
Imagine a manufacturing environment where the DT of a production asset, be it a machine, production line, or even an entire factory, mirrors its real-world counterpart. This digital replica operates in tandem, offering real-time insights and predictions regarding the physical asset’s behavior, state, performance, and maintenance needs [31,32]. It enables not only enhanced operational control but also predictive maintenance strategies, quality assurance, and optimization in a risk-free digital environment ideal for testing “what if” scenarios.
DTs could be the key to revolutionizing PMS-driven production control and optimization. The rapid dynamic changes and uncertainties in manufacturing environments require continuous monitoring. With DTs available, the PMS gains an inclusive real-time perspective of the manufacturing asset from the data it utilizes [33]. This facilitates rapid and accurate corrections for overall throughput maximization [22].
Moreover, production scheduling, which has always struggled to cope with the dynamic nature of production as a whole, might finally receive a boost from DTs. When disturbances occur on the shop floor, DTs responsible for the dynamic modeling and verification of schedules can react efficiently, ensuring sound resource allocation and preventing cascading effects due to unexpected events.
Meanwhile, it is undeniable that several challenges remain to be overcome. Data integration complications and the security and reliability of the integrated systems are the major challenges faced when infusing such integrated systems into industry [34]. To make full use of the benefits of PMSs augmented by DTs, more study and innovation are required [7]. This includes creating scalable, affordable, and flexible technical solutions that address these challenges while meeting the individual needs of industries [35]. This pillar of innovation challenges calls to us as we move toward a future where the interfacing of PMSs and DTs transforms manufacturing processes to unprecedented levels of efficiency, adaptability, and improvement [36].

3. Materials and Methods

The following section forms the cornerstone of our efforts to enhance mining operations. Here, we describe the research methodology and discuss the methods involved in developing the proposed digital twin-based production management system designed for the real-time monitoring and optimization of key performance indicators (KPIs). Data modeling is our starting point, and we use factory modeling and improvement principles. Beginning with a comprehensive survey of the mine value chain, data points are collected, and the system's database architecture is explored. To complete the picture, we introduce the ARIMA (Autoregressive Integrated Moving Average)-based production forecasting model, a key component of our approach. The combination of these methods and materials forms the basis for an improved system adapted to increase the efficiency and accuracy of mining decision-making.

3.1. Research Methodology

The research methodology used in this study aims to develop a monitoring system based on a digital twin architecture for production management. This methodology (Figure 3) comprises eight sequential steps, each of which contributes to a comprehensive understanding and refinement of the proposed system. First, an exploratory literature review of the mapping type is carried out to gather evidence and identify existing knowledge gaps. This mapping review aims to provide a comprehensive overview, or "map", of the existing literature on a particular topic. The review summarizes key characteristics, trends, and themes across a broad range of the literature; it helps in understanding the scope of research on the topic and serves as a basis for further investigation or synthesis. Focus groups are then formed with heterogeneous stakeholders from various levels of the open-pit mine workforce, including managers, engineers, and technicians representing the diverse disciplines responsible for handling the mining operations. These focus groups are convened to discuss and evaluate the findings from the literature review. The adoption of a systems design approach based on the Factory Design and Improvement (FDI) technique is then validated. A survey of the mine value chain is subsequently initiated to gather essential data for further analysis. Based on this baseline data, a system data model is constructed to provide a structured representation of the production environment. The subsequent development of a digital twin-based system framework integrates advanced technologies to enhance production monitoring capabilities. Following this framework's development, the case study of the proposed system is defined, with the screening station identified as an experimental pilot for evaluation purposes. Finally, the experiments and system integration are carried out by validating the proposed system with a real-time data flow and integrating it with the SCADA system data.

3.2. Data Model

3.2.1. Factory Design and Improvement-Based Data Model

To achieve the aim of flexible integration and interoperability of our designed system, it is relevant to develop a data model that supports the connection of the DT's heterogeneous components and the bidirectional data flows, ensuring a link with the physical part of the whole cyber–physical system (CPS). In this paper, our data model relies on the FDI Activity Model. Developed by the National Institute of Standards and Technology (NIST), the FDI serves as a foundational framework for our study, as shown in the diagram in Figure 4. The FDI formalizes essential activities [30], enterprise software functions, and crucial information for operational design and management tasks in the context of smart manufacturing systems [37]. It aligns with the standard work processes commonly observed in global manufacturing enterprises, encompassing aspects such as factory operations, manufacturing lines, processes, and equipment operations [30].
At its heart, the ICOM (input, control, output, mechanism) parameters, foundational to the smart manufacturing system (SMS), dictate its operational dynamics. Each component of the ICOM paradigm plays a distinct yet interlinked role in making decisions and performing actions regarding information [34].

3.2.2. Mine Value Chain Parameters Survey

By leveraging the FDI model, we gain a comprehensive overview of the operational processes within the open-pit mine’s value chain [26], encompassing factory operations, manufacturing lines, processes, and equipment. This model empowers us to analyze performance measures, organizational structures, tools, systems, and associated data comprehensively. The FDI model divides the design process into four distinct phases, including development and design requirements, basic design tasks, detailed design tasks, and testing [30,34]. This structured breakdown of design activities has been demonstrated to expedite factory development projects significantly. By following this approach, we conduct a systematic inventory and mapping process to comprehensively document and analyze the facets within the mine value chain according to the ICOM (input, control, output, mechanism) parameters [30] listed in Table 1.
Though the production value chain survey focuses on three main stations, namely destoning, screening, and train loading, the input, control, output, and mechanism parameters of the FDI methodology offer a systematic approach to organizing and managing the operating processes of each. In the input stage, information is the paramount element: Product Information gives the specifications of the products being manufactured; Market Information presents insights into the target market in terms of demand, competition, and trends; Resource Information captures the demand for tangible and intangible resources; the Production Schedule details the sequencing and timing of tasks; Labor Information provides workforce details; and Equipment Information indicates the machinery and tools utilized in manufacturing. The output factor, key performance indicators (KPIs), serves as a guide to system performance metrics such as Cycle Time, Lead Time, Production Output, Work in Process (WIP), and Return on Capital Employed (ROCE). Control covers factors such as the Work Process, which focuses on Product Lifecycle Management (PLM) for systematic product supervision; methodology, which highlights Operational Excellence (OpEx) and PDCA for continuous improvement; People, involving key personnel roles; and Technology, which emphasizes statistical methods, simulations, and co-simulations for precision. The Mechanism factor involves equipment and system functionalities such as PLM, CIM, SCADA systems, and ERP, which drive process effectiveness and efficiency [38]. In general, the ICOM structure gives companies a roadmap for comprehending, evaluating, and refining complex operational processes, emphasizing the importance of accurate data inputs, performance metrics, control mechanisms, and the latest technological tools in ensuring operational efficiency and output optimization.
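To make the ICOM grouping above more concrete, the following Python sketch shows one possible way the surveyed categories could be organized in code. It is purely illustrative: the class and field names are our own placeholders, not the schema used in the actual data model.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class StationICOM:
    """Illustrative grouping of the ICOM parameters for one station; the
    field names are hypothetical placeholders, not the study's schema."""
    station: str
    inputs: List[str] = field(default_factory=list)      # product, market, resource, schedule, labor, equipment information
    controls: List[str] = field(default_factory=list)    # work process (PLM), OpEx/PDCA, people, technology
    outputs: List[str] = field(default_factory=list)     # KPIs: cycle time, lead time, output, WIP, ROCE
    mechanisms: List[str] = field(default_factory=list)  # PLM, CIM, SCADA, ERP

screening = StationICOM(
    station="Screening",
    inputs=["Product Information", "Production Schedule", "Equipment Information"],
    controls=["PLM work process", "OpEx / PDCA", "Key personnel roles"],
    outputs=["Cycle Time", "Production Output", "ROCE"],
    mechanisms=["SCADA", "ERP"],
)
print(screening.outputs)
```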

3.2.3. FDI Data Model Enabling Production Management System

The data model shown in Figure 5 is vividly evident in the intricate detailing of the overall mine's value chain. Central to this model is the 'Factory.Description' element, anchoring the entire design by synchronizing the attributes and functionalities of the factory with the 3PR approach, a holistic paradigm focusing on Product, Process, Plant, and Resources. The 'Product' domain, with attributes such as weight, destination, quality, and profile, underscores the end goals and the quality benchmarks set by the production system. 'Resources', a pivotal segment, elucidates the tangible and intangible assets, accentuating operational efficiency, equipment reliability, and the strategic deployment of labor. The 'Process' section, intertwined with operations such as destoning, screening, and train loading, exemplifies the step-by-step manufacturing mechanics, ensuring that no detail is overlooked. The 'Plant' category, on the other hand, offers a panoramic view of the facility's infrastructure, capturing aspects from layout designs to safety standards. With the inclusion of the 3PR approach, the model transcends traditional factory designs, harmonizing product quality, efficient resource allocation, streamlined processes, and optimal plant utilization. In essence, this data model, inspired by NIST's FDI methodology and enriched by the 3PR approach, presents a visionary overview for a future-ready, holistic, customized operational database for our PMS.

3.3. Autoregressive Integrated Moving Average Model

3.3.1. Theoretical Background

The Autoregressive Integrated Moving Average (ARIMA) model stands as a robust statistical tool for the analysis and forecasting of time series data [39]. This model adeptly addresses various structures inherent in time series data, offering a straightforward yet powerful approach for making accurate forecasts [40].
The acronym ARIMA breaks down as follows:
AR (Autoregression): Emphasizing the dependent relationship between an observation and its lagged counterparts.
The Autoregressive (AR) model, which is among the earliest models employed in time series analysis, is a linear model. This model utilizes a combination of past values within a time window, to which an error is added, as illustrated by Equation (1):
$X_t = \sum_{i=1}^{p} a_i X_{t-i} + e(t)$ (1)
where $X_t$ is the value of the series at time $t$, $p$ is the order of the model, $a_i$ denotes the autoregressive parameters, and $e(t)$ represents white noise; the autoregressive model of order $p$ is denoted AR($p$).
I (Integrated): Introducing differencing to achieve a stationary time series, mitigating trends and seasonality.
MA (Moving Average): Focusing on the relationship between an observation and the residual error from a moving average model based on lagged observations.
The moving average model, also known as ‘Moving Average (MA)’, is another linear model used for time series forecasting. Unlike the autoregressive model, it is based on the white noise of the series. This model is defined by Equation (2) and is referred to as a moving average of order q, denoted as MA(q):
$X_t = \sum_{i=1}^{q} a_i e_{t-i} + e_t$ (2)
where $a_i$ represents the moving average parameters, and $e_t$ denotes the white noise of the series.
Each component is explicitly defined in the ARIMA model, denoted as ARIMA (p, d, q), where the parameters take integer values to specify the model type. The parameters are elucidated as follows:
p (Lag Order): Signifying the number of lag observations incorporated in the model.
d (Degree of Differencing): Representing the number of times raw observations undergo differencing to achieve stationarity.
q (Order of Moving Average): Indicating the size of the moving average window used in the model.
The ARIMA model is a combination of the two preceding linear models: AR and MA. It also includes an integration term (I) to account for the non-stationarity of time series. The equation for an ARIMA model is given by Equation (3):
$\nabla^{d} X_t = \sum_{i=1}^{p} a_i X_{t-i} + \sum_{i=1}^{q} \beta_i \varepsilon_{t-i} + \varepsilon_t$ (3)
In constructing the linear regression model, the specified terms are integrated, and the data undergo differencing to attain stationarity, eliminating trends and seasonal structures that may negatively impact the model. It is noteworthy that any of these parameters can be set to 0, allowing the ARIMA model to emulate simpler models like ARMA, AR, I, or MA.
The adoption of an ARIMA model for time series analysis assumes that the underlying process generating the observations follows an ARIMA process. This assumption underscores the importance of validating the model’s assumptions against both the raw observations and the residual errors of the forecasts [40,41].
To implement the ARIMA model in Python, the next step involves loading a simple univariate time series, initiating the practical application of the theoretical foundations discussed above.
The next diagram of Figure 6 illustrates the implementation workflow [42].
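As a minimal illustration of that workflow, the sketch below loads a univariate series and fits an ARIMA model with the statsmodels library. The file name, column name, and the (1,2,1) order are placeholders chosen for illustration, not a reproduction of the study's own code; parameter identification is discussed in Section 5.2.2.

```python
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

# Load a univariate daily time series (file and column names are placeholders).
series = pd.read_csv("screened_tonnage.csv", index_col="date", parse_dates=True)["tonnage"]

# Fit an ARIMA(p, d, q) model; the order shown here is only an example.
fitted = ARIMA(series, order=(1, 2, 1)).fit()
print(fitted.summary())

# One-week-ahead forecast of the screened tonnage.
print(fitted.forecast(steps=7))
```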

3.3.2. Model Performance Metrics

We use different performance metrics to check how well our ML model performs on new, unseen data by comparing their predictions to the actual outcomes in a test dataset. We evaluate the suggested models using three key metrics [43]: mean absolute error (MAE), root mean squared error (RMSE), and mean absolute percentage error (MAPE). The specific formulas for these metrics are given by Equations (4)–(6).
  • Mean Absolute Error
The mean absolute error (MAE) is a measure utilized to evaluate the discrepancies between paired observations that pertain to the same event. The MAE is computed with the formula depicted in Equation (4):
$MAE = \frac{1}{n}\sum_{i=1}^{n} \left| y_i - \hat{y}_i \right|$ (4)
where $y_i$ denotes the observed value and $\hat{y}_i$ the predicted value.
The mean absolute error is calculated on the same scale as the data. However, since this accuracy metric is dependent on the scale, it is not suitable for comparing series with different scales. In the realm of time series analysis, the mean absolute error is commonly employed to gauge the forecast error.
  • Root Mean Squared Error
The root mean square error (RMSE) is a common way to measure how much the predicted values from a model differ from the actual observed values. It is important to note that RMSE depends on the scale of the data, making it more suitable for evaluating forecasting errors within a specific dataset rather than comparing different datasets.
The equation for the RMSE is given by Equation (5):
$RMSE = \sqrt{\frac{1}{n}\sum_{i=1}^{n} \left( y_i - \hat{y}_i \right)^2}$ (5)
  • Mean Absolute Percentage Error
The mean absolute percentage error (MAPE) is a metric used to assess the accuracy of a forecasting model by measuring the percentage difference between predicted and observed values, providing a relative measure of prediction accuracy.
The equation for the MAPE is given by Equation (6):
$MAPE = \frac{1}{n}\sum_{i=1}^{n} \left| \frac{y_i - \hat{y}_i}{y_i} \right| \times 100$ (6)
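For reference, Equations (4)–(6) translate directly into a few lines of NumPy; the arrays below are illustrative values only, not data from the study.

```python
import numpy as np

def mae(y_true, y_pred):
    # Mean absolute error, Equation (4).
    return np.mean(np.abs(y_true - y_pred))

def rmse(y_true, y_pred):
    # Root mean squared error, Equation (5).
    return np.sqrt(np.mean((y_true - y_pred) ** 2))

def mape(y_true, y_pred):
    # Mean absolute percentage error, Equation (6), expressed in percent.
    return np.mean(np.abs((y_true - y_pred) / y_true)) * 100

# Illustrative values only.
y_true = np.array([10200.0, 9800.0, 11050.0])
y_pred = np.array([9950.0, 10100.0, 10800.0])
print(mae(y_true, y_pred), rmse(y_true, y_pred), mape(y_true, y_pred))
```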

4. Scalable Compositional Digital Twin-Based Production Management System for Real-Time Monitoring and KPI Optimization in Mining Operations

The proposed Scalable DT-based PMS for real-time monitoring and KPI optimization in mining operations (Figure 7) introduces a groundbreaking approach to enhancing the efficiency and effectiveness of mining operations [44]. By seamlessly integrating edge computing, cloud computing, and artificial intelligence, the framework effectively gathers, analyzes, and visualizes data from various sources to create a comprehensive digital representation of the mining process. This virtual mirror, known as the DT, serves as a powerful tool for real-time monitoring, predictive production, and optimization of key performance indicators (KPIs).
The system efficiently manages the high volume of data generated by mining equipment through edge computing. Programmable logic controllers (Schneider Modicon 340) and local control panels, also known as human-machine interfaces (Schneider Magelis), are positioned at the mining site to collect raw data from sensors, data loggers, and smart meters. These data are then transmitted from the dispersed edge control devices (data loggers, smart meters, relays, etc.) to the multi-controller OPC Factory Server (OFS) via the Modbus TCP protocol.
Once collected, the data are logged onto the multi-controller OPC Factory Server (OFS). The SCADA system then utilizes these data for supervisory functionalities, e.g., real-time visualization and monitoring [44]. Subsequently, the data are stored in the backbone of our system, the PostgreSQL (pgAdmin) database, which serves as a centralized repository for both historical and current data. This centralized repository facilitates efficient data retrieval and analysis for various purposes.
To prepare the data for further analysis, the framework relies on a block named Data Preprocessing, which employs multiple Python libraries such as NumPy, Pandas, Seaborn, and Matplotlib. This involves cleaning and formatting the data, removing outliers, missing values, and inconsistencies, and ensuring data compatibility with the DT and the predictive models. The preprocessed data then move to the prediction stage, a milestone component that leverages advanced ML functionality, particularly the Autoregressive Integrated Moving Average (ARIMA) algorithm; this block applies the ARIMA-based ML model to historical data collected from diverse mining fields. This integration embodies a forward-looking approach that aligns with the industry's drive toward efficient production prediction. The advantages of the ARIMA model are presented in Section 3.3.
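The following condensed Pandas sketch illustrates the kind of preprocessing described above, namely aggregating shift-level records into daily tonnage, filling missing values, and removing gross outliers. The column names, file names, and the z-score threshold are assumptions made for illustration, not the exact rules used in the deployed pipeline.

```python
import pandas as pd

# Hypothetical shift-level records exported from the PostgreSQL repository.
raw = pd.read_csv("shift_records.csv", parse_dates=["timestamp"])

# Aggregate the three daily work shifts into a single daily tonnage figure.
daily = raw.set_index("timestamp")["tonnage"].resample("D").sum()

# Fill missing days, then mask and re-interpolate gross outliers (|z| > 3).
daily = daily.interpolate(limit_direction="both")
z = (daily - daily.mean()) / daily.std()
daily = daily.mask(z.abs() > 3).interpolate()

daily.to_frame("tonnage").to_csv("screened_tonnage_daily.csv")
```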

5. Experiment and Results

5.1. Data Description

The dataset under consideration originates from real-time measurements collected at the phosphate screening station over the course of 20 months, starting from 1 January 2021 to 31 August 2022. Specifically, the data pertain to the tonnage of phosphate that has been screened and is in a wet state upon exiting the screening station. The information is sourced directly from the mining site, with measurements taken daily and in real-time.
The flow of the operational process of the screening station is inextricably connected to the data collection process. A built-in weighing scale, integrated into the conveyor belt system, registers the tonnage of phosphate that passes through the screen. These measurements are taken at intervals corresponding to the end of each working shift, and the overall daily metric is produced by summing the tonnage from all three shifts.
Figure 8 shows the dataset, the daily tonnage of wet phosphate as it is screened and monitored over 20 months. The temporal format of the dataset corresponds to the operational changes and cycles, providing a useful basis for analyzing and forecasting phosphate production trends.

5.2. ARIMA Model Training

5.2.1. Time Series Stationarity

During this stage, a thorough analysis of the time series behavior is conducted to extract essential information for the model’s creation. According to the model training workflow depicted in Figure 6, the process initiates with visualizing the series, followed by an examination of its stationarity and the identification of the parameters [41].
We then visualized the time series plot (Figure 8) and applied the augmented Dickey–Fuller (ADF) test. The result in Table 2 reveals an ADF statistic that indicates the stationarity of the series, with a very low p-value close to 0; the remaining ADF metrics are also listed in Table 2.
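For illustration, the ADF test can be run with statsmodels as in the sketch below (the file and column names are placeholders); the statistics reported in Table 2 come from the authors' own run, not from this sketch.

```python
import pandas as pd
from statsmodels.tsa.stattools import adfuller

# Daily screened tonnage series (placeholder file/column names).
series = pd.read_csv("screened_tonnage_daily.csv", index_col=0, parse_dates=True)["tonnage"]

adf_stat, p_value, used_lags, n_obs, critical_values, icbest = adfuller(series.dropna())
print(f"ADF statistic: {adf_stat:.3f}, p-value: {p_value:.4f}")
print("Critical values:", critical_values)
# A p-value near 0 (below 0.05) rejects the unit-root hypothesis,
# i.e., the series can be treated as stationary.
```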

5.2.2. ARIMA Parameters’ Identification

In the subsequent phase, our attention turns to the crucial task of identifying the parameters p, d, and q for the ARIMA model. To achieve this, our initial step involves plotting the Autocorrelation Function (ACF) and Partial Autocorrelation Function (PACF). As depicted in Figure 9, a visual analysis of these plots guides us in estimating the values for p and q. Specifically, the ACF aids in estimating the Moving Average (q) part, while the PACF assists in estimating the Autoregressive (p) part.
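A minimal sketch of how these plots can be produced with statsmodels and Matplotlib is given below; the data file is a placeholder, and the lag horizon of 40 is an arbitrary illustrative choice.

```python
import pandas as pd
import matplotlib.pyplot as plt
from statsmodels.graphics.tsaplots import plot_acf, plot_pacf

series = pd.read_csv("screened_tonnage_daily.csv", index_col=0, parse_dates=True)["tonnage"]

fig, axes = plt.subplots(2, 1, figsize=(8, 6))
plot_acf(series, lags=40, ax=axes[0])   # guides the choice of the MA order q
plot_pacf(series, lags=40, ax=axes[1])  # guides the choice of the AR order p
plt.tight_layout()
plt.show()
```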
Upon careful examination of Figure 9, a potential ARIMA candidate emerges as (1,0,1). This determination is rooted in the PACF plot, where the upper confidence interval is crossed at the first lag (p = 1), and the ACF plot similarly exhibits a crossing at the first lag (q = 1). With our time series confirmed as stationary (d = 0), this configuration presents itself as promising. In light of the stationarity, we initially set d = 0. Subsequent exploration involves testing alternative integer values in proximity to zero, encompassing 1 and 2, to identify the optimal differencing parameter.
Following this, by comparing the combinations (1,0,1), (1,1,1), and (1,2,1), we find that the ARIMA model (1,2,1) provides the most suitable results, with the lowest errors. This procedure once more emphasizes the role of parametric studies and the importance of choosing the right configuration to improve the model's performance.
While the visual analysis carries some subjectivity and more than one ARIMA parametric combination may be plausible, further diagnostic tools are used to improve our choice of the forecast model. That is when the Akaike Information Criterion (AIC) and Bayesian Information Criterion (BIC) become important instruments to use. These criteria help in the quantitative assessment of various parameter settings, and this aids in optimization.
  • Akaike Information Criterion (AIC)
The AIC assesses both the appropriateness of a model to the data and its overall complexity. Typically, the selected model is the one with the lowest AIC value, indicating a balance between model fit and complexity.
$AIC = -2\log(L) + 2k$ (7)
where L is the likelihood function, and k is the number of model parameters.
  • Bayesian Information Criterion (BIC)
$BIC = -2\log(L) + k\ln(T)$ (8)
where L is the likelihood function, k is the number of parameters, and T is the number of observations. The AIC and BIC scores, as defined above, are minimized. The model and parameters p, d, and q chosen are those that minimize these criteria.
In order to calculate the AIC and BIC metrics, we follow the algorithm presented below (Algorithm 1). We first set the range of possible integer values for p, d, and q as the starting point. The algorithm then applies an iterative search over all possible combinations within the predefined range. For each iteration, the AIC and BIC criterion values are calculated, and the combination with the lowest score is retained. This strategy ensures a thorough optimization and fitting of the parameter combinations and, finally, the identification of the combination that best satisfies both the AIC and BIC criteria.
Algorithm 1: AIC and BIC Calculation Algorithm
1 → Load the dataset
2 → Define a range for p, d and q values
      p = range (0, 3)
      d = range (0, 3)
      q = range (0, 3)
3 → Initialize minimum AIC and minimum BIC as infinity
4 → Initialize best parameters as (0, 0, 0)
5 → for every combination of p, d, and q values Do:
      Fit the ARIMA model with the current combination
      Calculate AIC for the model
      Calculate BIC for the model
      if current AIC value is lower then
      Update the minimum AIC for best parameters.
     end if
      if current BIC value is lower then
      Update the minimum BIC for best parameters.
     end if
    end for
6 → Print the best parameters identified based on both AIC and BIC
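One possible Python realization of Algorithm 1, using statsmodels, is sketched below. It is an illustrative implementation under the stated 0–2 parameter ranges, not necessarily the exact code used in the study.

```python
import itertools
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

# Daily screened tonnage series (placeholder file/column names).
series = pd.read_csv("screened_tonnage_daily.csv", index_col=0, parse_dates=True)["tonnage"]

p = d = q = range(0, 3)
best_aic, best_aic_order = float("inf"), (0, 0, 0)
best_bic, best_bic_order = float("inf"), (0, 0, 0)

# Iterate over every (p, d, q) combination and keep the lowest AIC/BIC scores.
for order in itertools.product(p, d, q):
    try:
        fitted = ARIMA(series, order=order).fit()
    except Exception:
        continue  # skip combinations that fail to fit
    if fitted.aic < best_aic:
        best_aic, best_aic_order = fitted.aic, order
    if fitted.bic < best_bic:
        best_bic, best_bic_order = fitted.bic, order

print("Best order by AIC:", best_aic_order, round(best_aic, 3))
print("Best order by BIC:", best_bic_order, round(best_bic, 3))
```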
The computation performed with the Python code implementing Algorithm 1 returns optimal parameters of (1,2,2) based on the Akaike Information Criterion (AIC), with a criterion value of 11,436.483. For the Bayesian Information Criterion (BIC), the optimal parameters are (1,1,1), with an associated criterion value of 11,451.199.
When choosing our parameters, we subject the combination derived from visual analysis to rigorous checking. This entails evaluating both the theoretically proposed combinations and the one found through visual observation. This integrated approach combines theoretical and empirical aspects to yield a robust and comprehensive model development protocol.
Table 3 presents an overall comparison of the candidate configurations of our ARIMA model. The framework of our comparative analysis covers both empirical and theoretical considerations: the table encompasses three distinct approaches, the first based on visual analysis, the second on the AIC, and the third on the BIC. The comparison is organized around four criteria: mean absolute error (MAE), mean absolute percentage error (MAPE), root mean squared error (RMSE), and the information criterion value. The table indicates that the smallest MAPE, 0.35, is obtained for the first combination, (1,2,1). It must be pointed out that the values of MAE and RMSE are virtually the same for the first and third combinations, suggesting that the lowest achievable values are reached. Thus, we configure our model with the parameter combination (1,2,1), which appears to be the best choice and results in the most effective ML-based model.

5.2.3. Dataset Splitting

To build our predictive model, we split our dataset into two main parts: the training set, 80% of the data, used for teaching the model, and the test set, the remaining 20%, held back to evaluate it. To create a high-performance model, we specify six different splits of our dataset. This ensures thorough training and robust testing of the model, giving a good insight into how it behaves in different scenarios. These six splits, shown in Table 4, constitute a time series cross-validator, which improves the evaluation by taking different training and test datasets into consideration, so that the analysis of the model is comprehensive and insightful.
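Such a chronological cross-validation can be expressed, for example, with scikit-learn's TimeSeriesSplit, as in the sketch below. The split boundaries in Table 4 are those defined by the authors; this sketch only illustrates the general mechanism with placeholder data.

```python
import numpy as np
import pandas as pd
from sklearn.model_selection import TimeSeriesSplit
from statsmodels.tsa.arima.model import ARIMA

series = pd.read_csv("screened_tonnage_daily.csv", index_col=0, parse_dates=True)["tonnage"]

# Six chronological train/test splits (expanding training window).
tscv = TimeSeriesSplit(n_splits=6)
for fold, (train_idx, test_idx) in enumerate(tscv.split(series), start=1):
    train, test = series.iloc[train_idx], series.iloc[test_idx]
    fitted = ARIMA(train, order=(1, 2, 1)).fit()
    pred = np.asarray(fitted.forecast(steps=len(test)))
    mape = np.mean(np.abs((test.values - pred) / test.values)) * 100
    print(f"Fold {fold}: train={len(train)} test={len(test)} MAPE={mape:.2f}%")
```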

5.3. Model Evaluation

5.3.1. Performance Metrics for ARIMA Model Predictions

To examine the efficiency of the ARIMA model, we utilize a set of performance metrics to assess how good and relevant our predictions are. With this evaluation method, the model's predictions are compared to the actual results from the test dataset, providing a comparative analysis that is used to determine real-world performance. The three basic metrics that matter most are MAE, RMSE, and MAPE.
As depicted in Table 3, our ARIMA model’s performance is summarized with key metrics: MAE (3553.32), MAPE (0.35), and RMSE (4386.69). These results remain promising and intriguing when compared to the statistical metrics of our dataset, specifically the maximum, minimum, mean, and standard deviation values (Table 5).
For a clearer understanding of the model’s performance, we provide a graphical representation (Figure 10) that compares the predicted values from the ARIMA model with the actual values in the test dataset. This visual depiction not only evaluates accuracy but also offers insights into the model’s ability to capture trends and patterns in the time series.

5.3.2. Inference and Model Validation with External Time Series

As we move to the stage of monitoring how our ARIMA model performs and whether it can be improved, we introduce a time series from an external source that is not limited to the time span of our original dataset. This critical procedure uses a dataset covering a three-month period, from 1 January to 31 March 2023. These additional data come from the same screening station as the initial wet phosphate measurements. We therefore test the model by carrying out inference, a rigorous evaluation of its accuracy over a separate time frame. This step seeks an overall evaluation of the model's ability to adjust to changing conditions and to predict under new temporal circumstances.
Based on Figure 11, it can be seen that our ARIMA model captures the temporal patterns and trends within the verification time series very well. This alignment points to the efficiency and trustworthiness of our ARIMA-based model. Additionally, the model achieves satisfactory performance on the evaluation metrics, with MAE, RMSE, and MAPE values of 4185.95, 5092.9, and 0.25, respectively. Overall, these figures provide evidence of the model's precision and of the closeness of its forecasts to the observed values.

5.4. Proposed Production Management System-Based Digital Twin Implementation

5.4.1. Implementation Setup

We begin our deployment procedure by gathering time series data from our primary database. Through a real-time link, the SCADA monitoring system updates these data every minute. We systematically preprocess these data by filling in missing values and combining the daily volume of screened phosphates from the three work shifts. We then use the ML-based model to perform the prediction. An important cornerstone of our system is Flask [45], a flexible web framework that supports the hosting of our PMS application. Acting as a bridge, Flask provides the functionality for processing HTTP (hypertext transfer protocol) requests, routes, and views, enabling users to access our application through a web browser with ease [46]. Moreover, Flask interfaces effortlessly with our graphical interfaces, which are based on Streamlit. This deployment ensures visualization and interactivity within our DT application. Thereafter, we link the developed interfaces of our PMS into the main SCADA supervision views displayed in the control room associated with the screening station, as illustrated in Figure 12. This combined configuration yields a next-generation monitoring system that provides SCADA functionalities, such as ensuring availability and the continuous supervision of performance and quality, known together as Overall Equipment Effectiveness (OEE). Additionally, it provides an outlook of the expected production, including the tonnage of screened phosphate output from our screening station, and it empowers operators to reconcile estimated/expected production with realized performance.
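As an illustration of how Flask can expose the forecast to the dashboard, a minimal sketch is given below. The route name, file paths, and payload format are assumptions made for this example; they are not the actual API of the deployed PMS.

```python
from flask import Flask, jsonify
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

app = Flask(__name__)

def load_daily_tonnage():
    # Placeholder loader; the deployed system reads the PostgreSQL repository
    # that the SCADA link updates in real time.
    return pd.read_csv("screened_tonnage_daily.csv", index_col=0, parse_dates=True)["tonnage"]

@app.route("/forecast/<int:horizon>")
def forecast(horizon: int):
    series = load_daily_tonnage()
    fitted = ARIMA(series, order=(1, 2, 1)).fit()
    pred = fitted.forecast(steps=horizon)
    return jsonify([float(v) for v in pred])  # list of forecasted daily tonnages

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
```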

5.4.2. Production Management System Digital Dashboard

Our PMS includes an integrated digital dashboard, the most crucial element connecting users to the DT-based system framework. Its main emphasis lies in controlling the monitoring activities of the screening station. The digital dashboard contains a suite of functionality such as real-time monitoring [47,48], Overall Equipment Effectiveness (OEE) [49], and graphical analytics depicting the historical trends and patterns of the screening station. Together, these features provide a complete picture of the station's functioning and support more effective decision-making. Moreover, a forecasting function for screened wet phosphate production is incorporated to extend its capabilities. This function is based on an ML model that uses the station's historical data [50]. The forecasting function gives a view of the estimated output and equipment operating effectiveness, and hence supports our production management [47,48,51].
Figure 13 shows various operational aspects of the proposed system. Figure 13a presents the homepage, revealing the useful aspects of production in real time. The visual elements include graphics representing the tonnage of product and the phosphates produced from each hopper of the screening station. A pie chart section exhibits the uptime and downtime hours as a dedicated segment, presenting a comprehensive categorization of the anomalies (mechanical issues, electrical issues). This figure gives a complete illustration of the screening station's performance level. As seen in Figure 13b, the milestone of our prototype PMS is the forecasting function, which offers real-time monitoring capability and reconciles production values by comparing estimated and realized figures. In essence, it supports operators in the decision-making process by measuring gaps and differences. This function is vital for operators to monitor and react accordingly to the various production scenarios, allowing for more informed and strategic solutions.
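A condensed Streamlit sketch of such a dashboard view is shown below. The layout, the OEE and downtime figures, and the call to the forecast endpoint are illustrative placeholders rather than the production dashboard itself.

```python
import pandas as pd
import requests
import streamlit as st

st.title("Screening Station - Production Monitoring")

# Historical daily tonnage (placeholder file; the live dashboard reads the database).
daily = pd.read_csv("screened_tonnage_daily.csv", index_col=0, parse_dates=True)["tonnage"]
st.line_chart(daily, height=250)

# Headline indicators; the OEE and downtime values are illustrative placeholders.
col1, col2, col3 = st.columns(3)
col1.metric("Today's tonnage (t)", f"{daily.iloc[-1]:,.0f}")
col2.metric("OEE", "78 %")
col3.metric("Downtime (h)", "2.5")

# Forecast fetched from the hypothetical Flask endpoint sketched in Section 5.4.1.
horizon = st.slider("Forecast horizon (days)", 1, 14, 7)
values = requests.get(f"http://localhost:5000/forecast/{horizon}", timeout=10).json()
st.bar_chart(pd.Series(values, name="Forecasted tonnage"))
```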

6. Conclusions and Outlook

DT technology is emerging as an invaluable tool for gaining a comprehensive understanding of the potential scenarios that physical assets may encounter and for anticipating them proactively. Despite the numerous platforms outlined in the literature for DT development, the emphasis has primarily been on specific vertical contexts, such as multi-layered structures and architectures [52,53]. A significant evolution is imperative to achieve efficient and compositional DTs. Addressing the current needs in this domain, there is a demand for preferably open, scalable platforms for the development and composition of DTs. This paper contributes to the existing literature by introducing a compositional DT framework designed to replicate the actual tonnage produced by the phosphate screening station. This framework enhances real-time monitoring capabilities through a systematic PMS, facilitating the forecasting of production. The proposed system provides valuable information and suggestions to the operators, so that they can operate the system smoothly and manage production at the screening station well. The DT provides a digital dashboard designed to give operators a complete status of machines, production time, Overall Equipment Effectiveness (OEE), and order scheduling. We propose an integrable framework for composing DTs suited to different situations. The DT is self-contained and works with AI-based models, in particular the ARIMA ML model. Streamlit makes this possible for the DT through analytics and trend tracking: it is interactive and visualizes both simple and complex IoTs monitoring data in real time. Besides this, the forecasting tool offers information that supports decision-making and helps in reconciling actual and forecasted values for the stations.
To show the platform's usefulness, we designed a case study for a production line process in the mining industry. This stage is aimed at screening phosphate and monitoring/predicting its tonnage at the station's output. The DT shows the status of the station to the end user directly through the SCADA component of the framework. Further, end users are offered manufacturing KPIs that are easy to view on the related digital dashboard interface of our DT.
In our future growth plans for our proposed composition framework, we aim to expand its scope to include the entire mine value chain, including destoning and train loading stations, within the production management system (PMS). This expansion will enable a comprehensive approach to mine operations’ management. We are also committed to enhancing the capabilities of the ML-based framework by increasing the role of ML algorithms in the integration processes. This evolution will go beyond predictive maintenance and shift scheduling optimization, enriching the functionality of the system. Our overall goal is to establish a robust and fully functional PMS that comprehensively addresses all essential parameters related to production efficiency. By achieving this goal, we aim to create a robust structure that optimizes operations throughout the mine value chain. To ensure a clear and comprehensive understanding of our objectives, strategies, and expected benefits, we recognize the need to improve the description of the content. We are committed to refining our documentation to provide stakeholders with a more transparent insight into the intended outcomes and benefits of implementing these new capabilities.

Author Contributions

Conceptualization, N.E.B.; methodology, N.E.B., N.G. and M.M.; software, N.E.B.; validation, M.M. and A.C.; formal analysis, O.L. and A.E.M.; investigation, N.D.; resources, N.E.B. and N.D.; data curation, N.E.B., N.D. and O.L.; writing—original draft preparation, N.E.B.; writing—review and editing, N.G. and A.E.M.; visualization, N.E.B.; supervision, O.L.; project administration, M.M.; funding acquisition, A.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Green Tech Institute of UM6P and the Experimental Mine program (MineEx), a research and innovation program led by Mohammed VI Polytechnic University (UM6P) and OCP Benguerir.

Data Availability Statement

Data are contained within the article.

Acknowledgments

During the preparation of this manuscript, the authors used ChatGPT to improve the grammar, coherence, and readability of its sections.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Wang, K.-J.; Lee, Y.-H.; Angelica, S. Digital Twin Design for Real-Time Monitoring—A Case Study of Die Cutting Machine. Int. J. Prod. Res. 2021, 59, 6471–6485. [Google Scholar] [CrossRef]
  2. Shangguan, D.; Chen, L.; Ding, J. A Digital Twin-Based Approach for the Fault Diagnosis and Health Monitoring of a Complex Satellite System. Symmetry 2020, 12, 1307. [Google Scholar] [CrossRef]
  3. Elbazi, N.; Tigami, A.; Laayati, O.; Maghraoui, A.E.; Chebak, A.; Mabrouki, M. Digital Twin-Enabled Monitoring of Mining Haul Trucks with Expert System Integration: A Case Study in an Experimental Open-Pit Mine. In Proceedings of the 2023 5th Global Power, Energy and Communication Conference (GPECOM), Cappadocia, Turkiye, 14 June 2023; IEEE: Nevsehir, Turkiye; pp. 168–174. [Google Scholar]
  4. Slob, N.; Hurst, W. Digital Twins and Industry 4.0 Technologies for Agricultural Greenhouses. Smart Cities 2022, 5, 1179–1192. [Google Scholar] [CrossRef]
  5. El Bazi, N.; Mabrouki, M.; Laayati, O.; Ouhabi, N.; El Hadraoui, H.; Hammouch, F.-E.; Chebak, A. Generic Multi-Layered Digital-Twin-Framework-Enabled Asset Lifecycle Management for the Sustainable Mining Industry. Sustainability 2023, 15, 3470. [Google Scholar] [CrossRef]
  6. Elbazi, N.; Mabrouki, M.; Chebak, A.; Hammouch, F. Digital Twin Architecture for Mining Industry: Case Study of a Stacker Machine in an Experimental Open-Pit Mine. In Proceedings of the 2022 4th Global Power, Energy and Communication Conference (GPECOM), Cappadocia, Turkiye, 14 June 2022; IEEE: Nevsehir, Turkiye; pp. 232–237. [Google Scholar]
  7. Popescu, D.; Dragomir, M.; Popescu, S.; Dragomir, D. Building Better Digital Twins for Production Systems by Incorporating Environmental Related Functions—Literature Analysis and Determining Alternatives. Appl. Sci. 2022, 12, 8657. [Google Scholar] [CrossRef]
  8. Vodyaho, A.I.; Zhukova, N.A.; Shichkina, Y.A.; Anaam, F.; Abbas, S. About One Approach to Using Dynamic Models to Build Digital Twins. Designs 2022, 6, 25. [Google Scholar] [CrossRef]
  9. De Benedictis, A.; Flammini, F.; Mazzocca, N.; Somma, A.; Vitale, F. Digital Twins for Anomaly Detection in the Industrial Internet of Things: Conceptual Architecture and Proof-of-Concept. IEEE Trans. Ind. Inform. 2023, 19, 11553–11563. [Google Scholar] [CrossRef]
  10. Torzoni, M.; Tezzele, M.; Mariani, S.; Manzoni, A.; Willcox, K.E. A Digital Twin Framework for Civil Engineering Structures. Comput. Methods Appl. Mech. Eng. 2024, 418, 116584. [Google Scholar] [CrossRef]
  11. Kibira, D.; Shao, G.; Venketesh, R. Building A Digital Twin of AN Automated Robot Workcell. In Proceedings of the 2023 Annual Modeling and Simulation Conference (ANNSIM), Hamilton, ON, Canada, 23–26 May 2023; IEEE: New York, NY, USA, 2023; pp. 196–207. [Google Scholar]
  12. Vilarinho, S.; Lopes, I.; Sousa, S. Design Procedure to Develop Dashboards Aimed at Improving the Performance of Productive Equipment and Processes. Procedia Manuf. 2017, 11, 1634–1641. [Google Scholar] [CrossRef]
  13. Fathy, Y.; Jaber, M.; Nadeem, Z. Digital Twin-Driven Decision Making and Planning for Energy Consumption. J. Sens. Actuator Netw. 2021, 10, 37. [Google Scholar] [CrossRef]
  14. Piras, G.; Agostinelli, S.; Muzi, F. Digital Twin Framework for Built Environment: A Review of Key Enablers. Energies 2024, 17, 436. [Google Scholar] [CrossRef]
  15. Kandavalli, S.R.; Khan, A.M.; Iqbal, A.; Jamil, M.; Abbas, S.; Laghari, R.A.; Cheok, Q. Application of Sophisticated Sensors to Advance the Monitoring of Machining Processes: Analysis and Holistic Review. Int. J. Adv. Manuf. Technol. 2023, 125, 989–1014. [Google Scholar] [CrossRef]
  16. Balogh, M.; Földvári, A.; Varga, P. Digital Twins in Industry 5.0: Challenges in Modeling and Communication. In Proceedings of the NOMS 2023-2023 IEEE/IFIP Network Operations and Management Symposium, Miami, FL, USA, 8 May 2023; IEEE: Miami, FL, USA; pp. 1–6. [Google Scholar]
  17. Zhao, Y.; Yan, L.; Wu, J.; Song, X. Design and Implementation of a Digital Twin System for Log Rotary Cutting Optimization. Future Internet 2024, 16, 7. [Google Scholar] [CrossRef]
  18. Peng, A.; Ma, Y.; Huang, K.; Wang, L. Digital Twin-Driven Framework for Fatigue Life Prediction of Welded Structures Considering Residual Stress. Int. J. Fatigue 2024, 181, 108144. [Google Scholar] [CrossRef]
  19. Sifat, M.M.H.; Choudhury, S.M.; Das, S.K.; Ahamed, M.H.; Muyeen, S.M.; Hasan, M.M.; Ali, M.F.; Tasneem, Z.; Islam, M.M.; Islam, M.R.; et al. Towards Electric Digital Twin Grid: Technology and Framework Review. Energy AI 2023, 11, 100213. [Google Scholar] [CrossRef]
  20. Singh, S.K.; Kumar, M.; Tanwar, S.; Park, J.H. GRU-Based Digital Twin Framework for Data Allocation and Storage in IoT-Enabled Smart Home Networks. Future Gener. Comput. Syst. 2024, 153, 391–402. [Google Scholar] [CrossRef]
  21. El Hadraoui, H.; Laayati, O.; Guennouni, N.; Chebak, A.; Zegrari, M. A Data-Driven Model for Fault Diagnosis of Induction Motor for Electric Powertrain. In Proceedings of the 2022 IEEE 21st Mediterranean Electrotechnical Conference (MELECON), Palermo, Italy, 14–16 June 2022; pp. 336–341. [Google Scholar]
  22. Onaji, I.; Tiwari, D.; Soulatiantork, P.; Song, B.; Tiwari, A. Digital Twin in Manufacturing: Conceptual Framework and Case Studies. Int. J. Comput. Integr. Manuf. 2022, 35, 831–858. [Google Scholar] [CrossRef]
  23. Allam, Z.; Sharifi, A.; Bibri, S.E.; Jones, D.S.; Krogstie, J. The Metaverse as a Virtual Form of Smart Cities: Opportunities and Challenges for Environmental, Economic, and Social Sustainability in Urban Futures. Smart Cities 2022, 5, 771–801. [Google Scholar] [CrossRef]
  24. Asad, U.; Khan, M.; Khalid, A.; Lughmani, W.A. Human-Centric Digital Twins in Industry: A Comprehensive Review of Enabling Technologies and Implementation Strategies. Sensors 2023, 23, 3938. [Google Scholar] [CrossRef]
  25. Zhu, Z.; Liu, C.; Xu, X. Visualisation of the Digital Twin Data in Manufacturing by Using Augmented Reality. Procedia CIRP 2019, 81, 898–903. [Google Scholar] [CrossRef]
  26. Elbazi, N.; Hadraoui, H.E.; Laayati, O.; Maghraoui, A.E.; Chebak, A.; Mabrouki, M. Digital Twin in Mining Industry: A Study on Automation Commissioning Efficiency and Safety Implementation of a Stacker Machine in an Open-Pit Mine. In Proceedings of the 2023 5th Global Power, Energy and Communication Conference (GPECOM), Cappadocia, Turkiye, 14 June 2023; pp. 548–553. [Google Scholar]
  27. Mohammed, K.; Abdelhafid, M.; Kamal, K.; Ismail, N.; Ilias, A. Intelligent Driver Monitoring System: An Internet of Things-Based System for Tracking and Identifying the Driving Behavior. Comput. Stand. Interfaces 2023, 84, 103704. [Google Scholar] [CrossRef]
  28. Choi, S.; Woo, J.; Kim, J.; Lee, J.Y. Digital Twin-Based Integrated Monitoring System: Korean Application Cases. Sensors 2022, 22, 5450. [Google Scholar] [CrossRef] [PubMed]
  29. Bendaouia, A.; Abdelwahed, E.H.; Qassimi, S.; Boussetta, A.; Benzakour, I.; Amar, O.; Hasidi, O. Artificial Intelligence for Enhanced Flotation Monitoring in the Mining Industry: A ConvLSTM-Based Approach. Comput. Chem. Eng. 2024, 180, 108476. [Google Scholar] [CrossRef]
  30. Choi, S.; Kang, G.; Jung, K.; Kulvatunyou, B.; Morris, K. Applications of the Factory Design and Improvement Reference Activity Model. In Advances in Production Management Systems. Initiatives for a Sustainable World; Nääs, I., Vendrametto, O., Mendes Reis, J., Gonçalves, R.F., Silva, M.T., Von Cieminski, G., Kiritsis, D., Eds.; IFIP Advances in Information and Communication Technology; Springer International Publishing: Cham, Switzerland, 2016; Volume 488, pp. 697–704. ISBN 978-3-319-51132-0. [Google Scholar]
  31. Saihi, A.; Awad, M.; Ben-Daya, M. Quality 4.0: Leveraging Industry 4.0 Technologies to Improve Quality Management Practices—A Systematic Review. Int. J. Qual. Reliab. Manag. 2021, 40, 628–650. [Google Scholar] [CrossRef]
  32. Alexopoulos, K.; Tsoukaladelis, T.; Dimitrakopoulou, C.; Nikolakis, N.; Eytan, A. An Approach towards Zero Defect Manufacturing by Combining IIoT Data with Industrial Social Networking. Procedia Comput. Sci. 2023, 217, 403–412. [Google Scholar] [CrossRef]
  33. Foit, K. Agent-Based Modelling of Manufacturing Systems in the Context of “Industry 4.0.”. J. Phys. Conf. Ser. 2022, 2198, 012064. [Google Scholar] [CrossRef]
  34. Jung, K.; Choi, S.; Kulvatunyou, B.; Cho, H.; Morris, K.C. A Reference Activity Model for Smart Factory Design and Improvement. Prod. Plan. Control 2017, 28, 108–122. [Google Scholar] [CrossRef]
  35. Zayed, S.M.; Attiya, G.M.; El-Sayed, A.; Hemdan, E.E.-D. A Review Study on Digital Twins with Artificial Intelligence and Internet of Things: Concepts, Opportunities, Challenges, Tools and Future Scope. Multimed. Tools Appl. 2023, 82, 47081–47107. [Google Scholar] [CrossRef]
  36. Choi, S.; Jun, C.; Zhao, W.B.; Do Noh, S. Digital Manufacturing in Smart Manufacturing Systems: Contribution, Barriers, and Future Directions. In Advances in Production Management Systems: Innovative Production Management Towards Sustainable Growth; Umeda, S., Nakano, M., Mizuyama, H., Hibino, H., Kiritsis, D., Von Cieminski, G., Eds.; IFIP Advances in Information and Communication Technology; Springer International Publishing: Cham, Switzerland, 2015; Volume 460, pp. 21–29. ISBN 978-3-319-22758-0. [Google Scholar]
  37. Choi, S.; Kim, B.H.; Do Noh, S. A Diagnosis and Evaluation Method for Strategic Planning and Systematic Design of a Virtual Factory in Smart Manufacturing Systems. Int. J. Precis. Eng. Manuf. 2015, 16, 1107–1115. [Google Scholar] [CrossRef]
  38. Liu, J.; Ji, Q.; Zhang, X.; Chen, Y.; Zhang, Y.; Liu, X.; Tang, M. Digital Twin Model-Driven Capacity Evaluation and Scheduling Optimization for Ship Welding Production Line. J. Intell. Manuf. 2023, 34. [Google Scholar] [CrossRef]
  39. Yadav, R.S.; Mehta, V.; Tiwari, A. An Application of Time Series ARIMA Forecasting Model for Predicting Nutri Cereals Area in India. 2022. Available online: https://www.thepharmajournal.com/archives/2022/vol11issue3S/PartQ/S-11-3-85-221.pdf (accessed on 27 January 2024).
  40. Ning, Y.; Kazemi, H.; Tahmasebi, P. A Comparative Machine Learning Study for Time Series Oil Production Forecasting: ARIMA, LSTM, and Prophet. Comput. Geosci. 2022, 164, 105126. [Google Scholar] [CrossRef]
  41. Fan, D.; Sun, H.; Yao, J.; Zhang, K.; Yan, X.; Sun, Z. Well Production Forecasting Based on ARIMA-LSTM Model Considering Manual Operations. Energy 2021, 220, 119708. [Google Scholar] [CrossRef]
  42. Implementation of Time Series Forecasting with Box Jenkins ARIMA Method on Wood Production of Indonesian Forests. AIP Conference Proceedings. Available online: https://pubs.aip.org/aip/acp/article-abstract/2738/1/060004/2894351/Implementation-of-time-series-forecasting-with-Box?redirectedFrom=fulltext (accessed on 29 January 2024).
  43. El Maghraoui, A.; Ledmaoui, Y.; Laayati, O.; El Hadraoui, H.; Chebak, A. Smart Energy Management: A Comparative Study of Energy Consumption Forecasting Algorithms for an Experimental Open-Pit Mine. Energies 2022, 15, 4569. [Google Scholar] [CrossRef]
  44. Pajpach, M.; Pribiš, R.; Drahoš, P.; Kučera, E.; Haffner, O. Design of an Educational-Development Platform for Digital Twins Using the Interoperability of the OPC UA Standard and Industry 4.0 Components. In Proceedings of the 2023 3rd International Conference on Electrical, Computer, Communications and Mechatronics Engineering (ICECCME), Tenerife, Spain, 19 July 2023; IEEE: Tenerife, Spain; pp. 1–6. [Google Scholar]
  45. Mufid, M.R.; Basofi, A.; Al Rasyid, M.U.H.; Rochimansyah, I.F.; Rokhim, A. Design an MVC Model Using Python for Flask Framework Development. In Proceedings of the 2019 International Electronics Symposium (IES), Surabaya, Indonesia, 27–28 September 2019; pp. 214–219. [Google Scholar]
  46. Srikanth, G.; Reddy, M.S.K.; Sharma, S.; Sindhu, S.; Reddy, R. Designing a Flask Web Application for Academic Forum and Faculty Rating Using Sentiment Analysis. AIP Conf. Proc. 2023, 2477, 030035. [Google Scholar] [CrossRef]
  47. Schulze, A.; Brand, F.; Geppert, J.; Böl, G.-F. Digital Dashboards Visualizing Public Health Data: A Systematic Review. Front. Public Health 2023, 11, 999958. [Google Scholar] [CrossRef] [PubMed]
  48. Gonçalves, C.T.; Gonçalves, M.J.A.; Campante, M.I. Developing Integrated Performance Dashboards Visualisations Using Power BI as a Platform. Information 2023, 14, 614. [Google Scholar] [CrossRef]
  49. Laayati, O.; El Hadraoui, H.; El Magharaoui, A.; El-Bazi, N.; Bouzi, M.; Chebak, A.; Guerrero, J.M. An AI-Layered with Multi-Agent Systems Architecture for Prognostics Health Management of Smart Transformers: A Novel Approach for Smart Grid-Ready Energy Management Systems. Energies 2022, 15, 7217. [Google Scholar] [CrossRef]
  50. Islam, M.A.; Sufian, M.A. Employing AI and ML for Data Analytics on Key Indicators: Enhancing Smart City Urban Services and Dashboard-Driven Leadership and Decision-Making. In Technology and Talent Strategies for Sustainable Smart Cities; Singh Dadwal, S., Jahankhani, H., Bowen, G., Yasir Nawaz, I., Eds.; Emerald Publishing Limited: Leeds, UK, 2023; pp. 275–325. ISBN 978-1-83753-023-6. [Google Scholar]
  51. Jwo, J.-S.; Lin, C.-S.; Lee, C.-H. An Interactive Dashboard Using a Virtual Assistant for Visualizing Smart Manufacturing. Mob. Inf. Syst. 2021, 2021, e5578239. [Google Scholar] [CrossRef]
  52. Honghong, S.; Gang, Y.; Haijiang, L.; Tian, Z.; Annan, J. Digital Twin Enhanced BIM to Shape Full Life Cycle Digital Transformation for Bridge Engineering. Autom. Constr. 2023, 147, 104736. [Google Scholar] [CrossRef]
  53. Li, W.; Li, Y.; Garg, A.; Gao, L. Enhancing Real-Time Degradation Prediction of Lithium-Ion Battery: A Digital Twin Framework with CNN-LSTM-Attention Model. Energy 2024, 286, 129681. [Google Scholar] [CrossRef]
Figure 1. Integration level of digital twin.
Figure 2. MoSCoW diagram showing the attributes and features of the digital twin.
Figure 3. Followed research methodology.
Figure 4. Factory Design and Improvement (FDI) reference activity model.
Figure 5. FDI-based data model enabling the production management system.
Figure 6. ARIMA model training flow.
Figure 7. Scalable compositional digital twin-based production management system framework.
Figure 8. Screened tonnage of phosphates time series.
Figure 9. Plots of Autocorrelation Function and Partial Autocorrelation Function: (a) Autocorrelation Function; (b) Partial Autocorrelation Function.
Figure 10. Original and predicted time series of screened wet phosphate.
Figure 11. ARIMA model-based validation with external time series data.
Figure 12. Integration of the production management system (PMS) in the control room combined with SCADA monitoring of the screening station.
Figure 13. Production management system application digital dashboard: (a) graphical user interface—dashboard homepage; (b) graphical user interface—production forecasting page.
Table 1. FDI-based mine value chain parameters survey.

ICOM | Factors | Description
Input | Information | Product Information, Market Information, Resource Information, Production Schedule, Labor Information, Equipment Information
Output | Key performance indicators (KPIs) | Cycle Time, Lead Time, Production Output, Work-In-Process (WIP), Return-On-Capital-Employed (ROCE)
Control | Work process | Product Lifecycle Management (PLM)
        | Methodology | Operational Excellence (OpEx), PDCA
        | People | Process Operators, Process Designer, Process Engineers, Process Managers
        | Technology | Statistical method, stochastic method, simulation, co-simulation
Mechanism | Tools/system functions | PLM, Computer-Integrated Manufacturing (CIM) pyramid, SCADA, OEE
Table 2. Augmented Dickey–Fuller (ADF) test metrics.

ADF statistic: −19.273189906
p-value: 0.0
Critical values: 1%: −3.4415777; 5%: −2.8664932; 10%: −2.569407
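The stationarity figures reported in Table 2 are of the kind produced by the Augmented Dickey–Fuller test in statsmodels; a hedged sketch of such a check is given below, where `series` is assumed to hold the screened-tonnage time series.

```python
# Sketch of an ADF stationarity check with statsmodels (assumes `series` is defined).
from statsmodels.tsa.stattools import adfuller

adf_stat, p_value, _, _, critical_values, _ = adfuller(series, autolag="AIC")
print(f"ADF statistic: {adf_stat:.9f}")
print(f"p-value: {p_value}")
for level, value in critical_values.items():  # keys are '1%', '5%', '10%'
    print(f"Critical value ({level}): {value:.7f}")
```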
Table 3. Comparative analysis of (p, d, q) optimal combinations.

Selection criterion | Analysis Based on Visual Observations | Akaike Information Criterion (AIC) | Bayesian Information Criterion (BIC)
(p, d, q) | (1, 2, 1) | (1, 2, 2) | (1, 1, 1)
Related criterion value | - | 11,436.483 | 11,451.199
Mean absolute error (MAE) | 3553.32 | 3628.66 | 3547.81
Mean absolute percentage error (MAPE) | 0.35 | 0.59 | 0.60
Root mean squared error (RMSE) | 4386.69 | 4386.69 | 4383.29
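A comparison like the one summarized in Table 3 can be obtained by fitting candidate (p, d, q) orders and ranking them by AIC and BIC; the sketch below uses an illustrative search range and again assumes `series` holds the tonnage data, so it is not the authors' exact procedure.

```python
# Illustrative (p, d, q) grid search ranked by AIC/BIC (search range is an assumption).
import itertools
from statsmodels.tsa.arima.model import ARIMA

candidates = []
for p, d, q in itertools.product(range(3), range(3), range(3)):
    try:
        fit = ARIMA(series, order=(p, d, q)).fit()
        candidates.append(((p, d, q), fit.aic, fit.bic))
    except Exception:
        continue  # skip combinations that fail to converge

print("Best by AIC:", min(candidates, key=lambda c: c[1]))
print("Best by BIC:", min(candidates, key=lambda c: c[2]))
```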
Table 4. Dataset splits forming the training and testing sets.
(The training and testing sets together form the time series cross-validator combination used at each split.)

Percentage of Dataset in Use per Split | Iteration (Split #) | Training Set (# of Rows Used) | Testing Set (# of Rows Used)
29.18% | 1st | 88 | 83
43.34% | 2nd | 171 | 83
57.51% | 3rd | 254 | 83
71.67% | 4th | 337 | 83
85.84% | 5th | 420 | 83
100% | 6th | 503 | 83
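The expanding-window splits in Table 4 match what scikit-learn's TimeSeriesSplit produces for a series of 586 observations with a fixed test size of 83 rows; the sketch below reproduces those sizes under that assumption.

```python
# Expanding-window cross-validation splits (assumes `series` has 586 observations).
from sklearn.model_selection import TimeSeriesSplit

tscv = TimeSeriesSplit(n_splits=6, test_size=83)
for i, (train_idx, test_idx) in enumerate(tscv.split(series), start=1):
    used = 100 * (len(train_idx) + len(test_idx)) / len(series)
    print(f"Split {i}: train={len(train_idx)} rows, test={len(test_idx)} rows ({used:.2f}% of dataset)")
```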
Table 5. Dataset statistical values.

Maximum Value | Minimum Value | Mean Value | Standard Deviation Value
22,092 | 220 | 11,178.58362 | 4314.32453