Review

Integrating Machine Learning into Asset Administration Shell: A Practical Example Using Industrial Control Valves

by Julliana Gonçalves Marques 1, Felipe L. Medeiros 2, Pedro L. F. F. de Medeiros 3, Gustavo B. Paz Leitão 3, Danilo C. de Souza 3, Diego R. Cabral Silva 4,* and Luiz Affonso Guedes 2

1 Department of Informatics and Applied Mathematics, Federal University of Rio Grande do Norte, Natal 59078-970, Brazil
2 Department of Computer Engineering and Automation, Federal University of Rio Grande do Norte, Natal 59078-970, Brazil
3 Instituto Metrópole Digital, Natal 59078-970, Brazil
4 School of Science and Technology, Federal University of Rio Grande do Norte, Natal 59078-970, Brazil
* Author to whom correspondence should be addressed.
Processes 2025, 13(7), 2100; https://doi.org/10.3390/pr13072100
Submission received: 7 March 2025 / Revised: 23 May 2025 / Accepted: 17 June 2025 / Published: 2 July 2025

Abstract

Asset Management (AM) is quickly transforming due to the digital revolution induced by Industry 4.0, in which Cyber–Physical Systems (CPS) and Digital Twins (DT) are taking key positions in monitoring and optimizing physical assets. With more intelligent functionalities arising in industrial contexts, Machine Learning (ML) has transitioned from playing a supporting role to becoming a core constituent of asset operation. However, while the Asset Administration Shell (AAS) has become an industry standard format for digital asset representation, incorporating ML models into this format is a significant challenge. In this research, a control valve, a common asset in industrial equipment, is used to explore the modeling of a machine learning model as an AAS submodel, including its related elements, such as parameters, hyperparameters, and metadata, in accordance with the latest guidelines issued by the Industrial Digital Twin Association (IDTA) in early 2025. The main contribution of this work is to clarify basic machine learning principles while demonstrating their alignment with the AAS framework, hence facilitating the further development of smart and interoperable DTs in modern industrial environments.

1. Introduction

Asset Management (AM) represents a traditional, yet continuously evolving field concerned with the strategic oversight of assets throughout their life cycle. In recent years, AM has attracted increasing attention, as organizations of all sizes face growing pressure to optimize the use and performance of their assets in the face of operational, economic, and technological challenges. Because it is practiced across many industries, AM involves the structured management of different types of assets, such as financial, human, informational, and physical assets. According to the definition provided by ISO 55000 [1], AM comprises the coordinated activities of an organization aimed at realizing value from its assets. Although the range of assets addressed within AM is extensive, physical assets have been explored most extensively in both research studies and industry practice [2].
Advancements in sensors, actuators, and communication networks led to the emergence of Cyber–Physical Systems (CPS). First envisioned around 2006, the idea of CPS defines a new generation of systems that combine embedded computation with physical processes via tightly integrated feedback mechanisms [3]. Initial realizations of CPS were typified by close hardware–software coupling, allowing for dynamic behavior that reacts to context through real-time interaction with sensors and actuators [4]. Following the advent of digital technologies such as the Internet of Things (IoT), cloud computing, and big data, CPS have developed into Industrial Cyber–Physical Systems (ICPS) [5]. ICPS expand on the CPS paradigm through the incorporation of intelligence, connectivity, and software-enabled functionalities within industrial environments. ICPS run in a closed-loop architecture, facilitating round-the-clock data exchange, autonomous actions, and the incorporation of intelligent software to improve system evolution, resilience, and scalability [6].
The advent of Industry 4.0 (I4.0) has driven the digitalization of manufacturing through the adoption of technologies such as the Internet of Things (IoT), cloud computing, and artificial intelligence (AI). It offers a novel organizational paradigm centered on interoperability, decentralization, and data-driven decision-making [7]. In such a paradigm, systems are not only supposed to perform pre-programmed operations but also to learn, improve, and adapt their functionality in response to the steady flow of sensor readings and environmental information. This shift has made it all the more important to simulate operational behavior, anticipate potential failures, and carry out real-time process adjustments, all of which are critical to attaining dynamic, optimized, and robust industrial operations [8]. Assets have undergone a significant transformation to address such requirements. They are now cyber–physical assets that transcend the old world of either purely physical or purely digital assets, and Machine Learning (ML) models are at the heart of this new paradigm. These models are not just passive data analyzers; they are data-driven [9] and actively control asset behavior by interpreting incoming data and executing decisions that modify asset operation [10]. This establishes an active feedback cycle of data collection, analysis, and ongoing refinement, thus allowing for the development of more intelligent and autonomous systems [11,12].
Regarding the use of ML in asset management, the current environment reflects a fragmented scenario in which different approaches address various challenges across the ML asset management workflow. Some studies focus on experiment management, proposing tools that support the tracking, versioning, and comparison of assets such as datasets, hyperparameters, and model outputs to enhance reproducibility and traceability [13]. Others emphasize workflow orchestration and asset lineage, tackling issues related to feedback loops, iterative experimentation, and performance monitoring in production settings [14]. More recently, empirical investigations have highlighted real-world challenges reported by practitioners, including difficulties with collaboration, version control, and tool interoperability, especially in dynamic environments with evolving data and requirements [15]. These approaches demonstrate that asset management in ML is not a monolithic problem, but rather a collection of interconnected concerns, each demanding tailored strategies. It is no longer sustainable to view ML models as simple black-box enhancements. They must be viewed as core software assets that are intimately linked with the physical asset they support. As such, their architecture, training data, hyperparameters, and decisions must be exhaustively documented, validated, versioned, and updated in a way that is consistent with other vital assets [16].
In this context, Digital Twins (DT) provide virtual replicas of physical assets, systems, or processes that reflect their real-time status, behavior, and performance [17,18]. They are essential to managing the complexity of these new cyber–physical assets, enabling continuous data exchange between the physical and virtual worlds, which supports, among other tasks, the integration of machine learning models into asset management [19]. Building on this foundation, the Asset Administration Shell (AAS) [20] was introduced by Germany’s Platform Industry 4.0 and the Industrial Digital Twin Association (IDTA) to establish a standardized framework within the Reference Architecture Model for I4.0 (RAMI 4.0) [21], with the goal of providing a uniform digital representation of industrial assets that supports interoperability, structured data exchange, and consistent asset management across their entire lifecycle [22].
As the AAS emerges as the standardized approach to representing industrial assets in Industry 4.0, it becomes increasingly appropriate to extend this framework to encompass machine learning models as well, treating them as virtual assets that are intrinsically linked to their physical counterparts. Recently, more specifically in February–March 2025, the IDTA released IDTA 02058-1-0 (Artificial Intelligence Dataset) [23], IDTA 02059-1-0 (Artificial Intelligence Deployment) [24], IDTA 02060-1-0 (Artificial Intelligence Model Nameplate) [25], and IDTA 02063-1-0 (Intelligent Information for Use) [26] to represent the ML model as part of a physical asset. Despite these initiatives from the IDTA and other ongoing efforts within the I4.0 ecosystem, the current Digital Twin and AAS-based literature still lacks concrete approaches to modeling ML components within the AAS framework, particularly from a machine learning perspective; that is, with an understanding of what constitutes an ML model and how such a model can be derived from and connected to an asset. This work aims to demonstrate how a machine learning model can be semantically and structurally associated, as a virtual asset, with a physical asset, in accordance with the IDTA AAS specifications. The paper addresses the challenge, from the ML modeling point of view, of defining the right terms and arriving at the necessary asset information to enable this representation. A generic industrial control valve use case illustrates the steps. This standardized representation benefits system integrators, ML practitioners, and industrial stakeholders by enabling consistent asset documentation, improving interoperability, and supporting sustainable and auditable ML deployments throughout the asset’s life cycle.

2. Conceptual Foundations

In this section, the conceptual foundations of the AAS will be presented, along with the key theoretical concepts of Neural Networks (NN), covering their components and functioning.

2.1. Asset Administration Shell

Asset management is closely linked to the concept of the DT, a virtual replica that enables the real-time monitoring, oversight, and control of its physical counterpart [27]. To be able to utilize its full potential, the DT requires a methodical and standardized digital framework—that role is played by AAS. AAS provides a semantic description of physical and logical assets, as well as their properties, behavior, and related data, enabling consistent and interoperable asset management between systems [28].

2.1.1. Asset Modeling

In order to assist with the implementation of DTs, the Reference Architecture Model for Industry 4.0 (RAMI 4.0) [21] proposed the AAS, based on industrial standards. Announced in 2016 as a core element of RAMI 4.0, the AAS standardizes entities and data objects by means of a normalized digital representation. This includes not only production-relevant assets (i.e., materials, products, devices, and machines), but also software and services with corresponding digital counterparts [29].
Externally, the AAS receives real-time data from assets and offers services to external systems via Application Programming Interfaces (APIs). Internally, the AAS integrates a manufacturer-specific interface for the asset it represents, alongside a standardized interface for external communication. This dual-interface structure enhances standardization by enabling consistent communication while preserving the asset’s intrinsic functionality [30].
The AAS is composed of two primary components: the information model and the communication protocol. The communication protocol determines how data is transmitted and exchanged, with various technologies being applicable as long as they allow for interoperability between different AAS implementations. The information model of the asset is defined by a metamodel [31], which comprises a header and a body, as illustrated in Figure 1.

2.1.2. AAS Metamodel

The header of an asset’s model contains essential information to represent the asset, including administrative details, the identification of the asset and its corresponding AAS, and details about the body, among other critical elements required for communication with other CPSs.
The body includes all characteristics and functions of the asset, which are illustrated through submodels that structure the operational logic of the asset. Each submodel is dedicated to a specific aspect of the asset, categorizing and structuring it based on its intended use [18]. For example, one submodel may focus on documentation, while another might address quality testing [29].

2.1.3. Entities in the AAS Metamodel

In the AAS metamodel, entities serve as abstract conceptual representations of the physical asset, created through the AAS. Various aspects of the asset are modeled as abstract classes. In practical scenarios, assets often share common characteristics beyond individual aspects, enabling these representations to be inherited from shared classes. These common classes consolidate attributes that can be utilized by multiple classes within the metamodel [20].
One of the most significant common classes is Identifiable. By inheriting from this class, instances can be uniquely and globally identified. The identification can take one of the following forms: Internationalized Resource Identifier (IRI), Uniform Resource Identifier (URI), International Registration Data Identifier (IRDI), or a customized identifier.
Another crucial class for facilitating communication between entities is ReferenceElement (Ref). This class is instrumental in establishing connections and relationships between the elements that make up the AAS representation. For instance, one submodel can reference another using a ReferenceElement, which is represented by a link that provides access to information related to the modeled asset.

2.1.4. Submodel

In the AAS metamodel, submodel templates provide a standardized and consistent structure for submodels. A submodel follows the definition specified by the SubmodelTemplate class. These submodels can be tailored to meet specific requirements, allowing for the standard model to be adapted for a particular asset without the need to create a completely new model [32]. The relevant class hierarchy related to a submodel structure is illustrated in Figure 2.
A SubmodelElement serves as an abstract superclass that encompasses all entities responsible for defining the internal structure of a submodel, such as properties, files, and operations. A DataElement is a specific type of SubmodelElement that is not composed of other SubmodelElements. Broadly, a SubmodelElement has the ability to encompass other SubmodelElements, enabling the creation of an internal hierarchy. Within this framework, the concrete class SubmodelElementCollection (SMC) plays a pivotal role, as it is defined as a set or list of SubmodelElements. The SMC is especially significant because it is the only entity that facilitates the internal structuring of a submodel, functioning similarly to a folder within a directory [33].
Certain elements within a submodel are essential for its proper description. Among these elements, Properties hold particular importance, as they serve as the primary source of information about an asset. The Property class defines attributes for storing data values and specifies the data type used to represent these properties.
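To make the relationship between these classes more concrete, the sketch below mirrors the Submodel, SubmodelElementCollection, and Property concepts as plain Python dataclasses. It is a simplified illustration only, not the normative metamodel or any particular SDK; the identifier, id_short names, and values are hypothetical.

```python
from dataclasses import dataclass, field
from typing import List, Union


@dataclass
class Property:  # a DataElement that stores a typed value
    id_short: str
    value_type: str          # e.g., "xs:string", "xs:double"
    value: str


@dataclass
class SubmodelElementCollection:  # folder-like grouping of SubmodelElements
    id_short: str
    value: List[Union["SubmodelElementCollection", Property]] = field(default_factory=list)


@dataclass
class Submodel:  # Identifiable: carries a globally unique identifier
    id: str                  # e.g., an IRI
    id_short: str
    submodel_elements: List[Union[SubmodelElementCollection, Property]] = field(default_factory=list)


# Hypothetical documentation submodel, structured like a folder of properties.
documentation = Submodel(
    id="https://example.com/ids/sm/Documentation",   # illustrative identifier only
    id_short="Documentation",
    submodel_elements=[
        SubmodelElementCollection(
            id_short="OperatingManual",
            value=[
                Property("Title", "xs:string", "Operating Manual"),
                Property("Language", "xs:string", "en"),
            ],
        )
    ],
)
```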

2.2. Neural Networks

As machine learning concepts are being incorporated into Industry 4.0 technologies, it is essential to know the principles behind machine learning models that can be represented digitally. Feedforward Neural Networks (FNNs) are emphasized in this study, as they are the most common type of neural network structure that exists in industrial applications [34].
A neural network may be viewed as a computing system motivated by the information-processing methods of the human brain, which is meant to recognize patterns and structures in data via an example-driven learning process. It transforms inputs into outputs by modeling sophisticated dependencies, learning from interactions with data instead of being explicitly programmed [35].
The current IDTA specifications [36] only define the structures and semantic elements of classification and regression problems, thus limiting their representation to more traditional and less complex ML models. As a result, more complex models, such as deep architectures employing sequential or attention mechanisms, are not within the immediate scope of the specification at present. Despite this limitation, the use of FNNs provides a useful and very general example due to their structural similarities with a large variety of classical machine learning models in the context of supervised learning.
Classification is a task where the goal is to categorize an input into one or more predefined classes to which a sample belongs based on its features, for example, classifying an image as a “cat”, “dog”, or “fruit”. The result of this process is a prediction made by the model about the most likely class for the given input. Regression is a task where the goal is to predict a continuous numerical value; for example, predicting the price of a car based on features such as its size, top speed, and fuel consumption (kilometers per liter) [37].
A neural network is composed of different components. Thus, in Figure 3, a generic neural network is presented. In the following, the functioning of each part, along with its main components, will be briefly explained.
Regarding the data, four main concepts are crucial to understanding the network, as follows [38] (a short code illustration is provided after the list):
  • Sample: a sample is a single unit of data. For example, when you are trying to categorize images of fruits, each image is a sample. Each sample contains all the information of a particular example.
  • Label: a label can be understood as a tag or an annotation that is given to data, which is the output of the model. In a classification problem, it can be considered the target of interest.
  • Feature: a feature is an identifying characteristic, typically referred to as an attribute, of a specific sample. For instance, in a photo of a dog, a feature might be the dog’s size.
  • Dimension: a dimension is the quantity of features possessed by a sample. For instance, a picture of 100 pixels in width and 100 pixels in height therefore comprises 10,000 dimensions (one corresponding value per pixel). More generally, a dimension is the total number of variables or attributes that are utilized in representing the sample.
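For readers less familiar with these terms, the toy snippet below shows how samples, features, labels, and dimension typically appear in code. The numeric values are synthetic and purely illustrative.

```python
import numpy as np

# Each row of X is one sample; each column is one feature.
# The values below are synthetic and only for illustration.
X = np.array([
    [2.5, 50.0, 0.02, 4.0],   # sample 1
    [2.4, 50.0, 0.10, 6.5],   # sample 2
])
y = np.array([0, 1])          # labels: 0 = "no fault", 1 = "fault"

n_samples, n_features = X.shape
print(n_samples, n_features)  # -> 2 4  (each sample has dimension 4)
```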
A neural network is made up of layers, which are collections of neurons that collaborate to process information. A neuron takes in inputs, conducts computations, and sends its output to the subsequent layer, depending on the weights given to the inputs. A neural network is typically classified into three categories of layers: input, hidden, and output layers [39].
  • Input layer: The input layer of a neural network is the first stage in the learning process, where the network is given input data and passes the data on to further layers for learning and processing.
  • Hidden layer: The hidden layers are located between the input and output layers and work behind the scenes. Although the hidden part in Figure 3 consists of only one layer of neurons to simplify the illustration, there are usually multiple layers in practice. They help the network understand the data better through multiple complex calculations and find useful patterns. This is where most of the “learning” in the network occurs.
  • Output layer: The output layer is the final layer of a neural network, which makes predictions about the data produced as a result of the knowledge acquisition that the network achieved while training. The layer output may be in the form of an output label (classification task) or a number (regression task).
The learning process takes place during training, where the model’s internal parameters are adjusted based on exposure to labeled data; this enables the model to learn patterns and relationships, making it easier to generate accurate predictions for new inputs [40]. To fully understand this process, it is important to grasp key concepts such as learning rate, epochs, batches, and the roles of training and test datasets [41]; a minimal training sketch illustrating these concepts follows the list below.
  • Learning rate: This is the parameter that determines how fast the neural network learns while training.
  • Epoch: This refers to a single complete training cycle, where the neural network processes the training dataset once.
  • Batch: This is a small subset of the training set that is processed together to compute one update of the network’s weights.
  • Training set: This is the collection of data on which the neural network is trained, where the model learns to recognize patterns and relations.
  • Test set: This is the dataset that is used to verify how well the network makes predictions after it has been trained. The idea is to determine whether the model has learned adequately and can use its knowledge on new, unseen data.
  • Validation set: This is the subset of the dataset that is utilized in the measurement of a model’s performance while training.
  • Split ratio: This specifies the relative allocation of data to the training, validation, and test datasets.
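The following minimal PyTorch sketch ties these training concepts together: a 60:10:30 split into training, validation, and test sets, a learning rate, an epoch loop, and mini-batches. The synthetic data, network width, and hyperparameter values are illustrative assumptions, not the model used later in the paper.

```python
import torch
from torch import nn
from torch.utils.data import TensorDataset, DataLoader, random_split

# Synthetic data: 1,000 samples with 4 features and a binary label (illustrative only).
X = torch.randn(1_000, 4)
y = torch.randint(0, 2, (1_000,))
dataset = TensorDataset(X, y)

# Split ratio 60:10:30 -> training, validation, and test sets.
train_set, val_set, test_set = random_split(dataset, [600, 100, 300])
train_loader = DataLoader(train_set, batch_size=32, shuffle=True)   # mini-batches
val_loader = DataLoader(val_set, batch_size=100)

model = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 2))
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)            # learning rate
loss_fn = nn.CrossEntropyLoss()

for epoch in range(10):                      # one epoch = one full pass over the training set
    for xb, yb in train_loader:              # one mini-batch at a time
        optimizer.zero_grad()
        loss_fn(model(xb), yb).backward()
        optimizer.step()

    # Measure performance on the validation set while training.
    with torch.no_grad():
        correct = sum((model(xb).argmax(dim=1) == yb).sum().item() for xb, yb in val_loader)
    print(f"epoch {epoch}: validation accuracy = {correct / len(val_set):.2f}")

# The held-out test set would be evaluated once, after training is complete.
```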

3. Related Works

AM has increasingly employed Artificial Intelligence (AI), Machine Learning (ML), and Digital Twin (DT) technologies for tasks including predictive maintenance, fault detection, and decision support. Digital Twins and Asset Administration Shells (AAS) serve as frameworks that enable the generation of digital replicas of physical assets in a standardized manner, with the potential to enhance interoperability, lifecycle management, and real-time monitoring. The literature offers a variety of approaches to the use of ML and DT/AAS, as seen in [42], where the authors present the features of Intelligent Asset Management Platforms (IAMPs) and develop a framework for their assessment based on business goals. The analytical mechanisms supported by many machine learning (ML) approaches are used across multiple analytics levels: descriptive, diagnostic, prognostic, and prescriptive. The applications of ML include anomaly detection, predictive maintenance, and Remaining Useful Life (RUL) prediction. These models are normally developed from sensor output. The Digital Twin (DT) is identified as an important mechanism for the development of IAMPs, using IoT data, analytic approaches, and simulation to create a digital representation of physical assets in real time. DTs are useful in simulating the behavior of an asset, measuring the degradation of its condition, and estimating its RUL, and therefore can inform decision-making throughout the asset’s lifecycle.
In [43], Teoh et al. outline a Digital Twin (DT) architecture based on fog computing. By harnessing the capabilities of machine learning (ML) algorithms (i.e., Random Forests, SVM) on pre-processed data from sensors in fog nodes, fog DTs can be used to establish an asset management plan that incorporates predictive maintenance. The DT models extract learned patterns that can indicate potential future defects, enabling a maintenance plan that allows for intervention before faults arise, while also providing opportunities for reconfiguration or addressing other potential issues before they occur. The fog computing architecture performs the computations of the ML models and monitors the data at the source, close to the sensors; this allows the DT to minimize latency and has been demonstrated to improve the management of wind turbine assets. DTs serve as digital replicas of the actual wind turbine components and systems: fog nodes host digital twins that function at the unit level and system level, with an overarching (global) DT in the cloud. Digital twin data is continuously updated from IoT sensors (real-time data) and augmented with ML predictions, which enables PHM and fault detection in real time, ultimately feeding back into mechanisms in the physical asset that can automatically trigger corrective measures.
Ref. [44] proposes a research reference model and evaluation methodology for an Equipment Asset Management (EAM) system based upon the Industrial Internet (I3EAM). The I3EAM model is developed to enhance decision-making in complex systems. The model uses multiple AI/ML models, including CNN, DNN, GAN, GNN, and Transfer Learning, to assist with fault diagnosis and to predict Remaining Useful Life (RUL). The performance of different information models allows for advanced fault detection and for learning from similar devices. ML is also an important method for providing descriptive, diagnostic, and predictive analytics within a big data paradigm. Furthermore, a hybrid fuzzy DEMATEL-TOPSIS method is utilized to establish a sound decision-making process while accounting for uncertainty. The framework illustrated in this study is built on Digital Twin technology, which is defined as a physical asset combined with a virtual asset, to provide a collective asset management capability.
In [45], Sleiti et al. propose a DT architecture for power plants and large engineering systems, with the aim of improving Reliability, Availability, and Maintainability (RAM) while minimizing costs. The architecture utilizes AI, ML, and Deep Learning (DL) methods, including in the Anomaly Detection and Deep Learning (ADL) module of the DT. Noteworthy ML models developed as part of the DT architecture include Generalized Additive Models (GAMs) to capture the nonlinear behavior of systems, and a multiple-stage Vector Autoregressive (VAR) model to identify anomalies in gas turbines. In addition, computational intelligence techniques such as clustering, Principal Component Analysis (PCA), neuro-computing, genetic algorithms, Bayesian networks, and fuzzy logic are applied to sensor data. Together, these techniques can provide real-time anomaly detection and forecasting, as well as helping to understand the system’s behavior to assist with optimization and asset management.
As discussed in the literature, DT, AAS, and ML are used in numerous applications in several different data-driven forms. However, as their implementations tend to be specific to a particular domain, there is no clarity or standardization regarding their application across different implementations, since these methods were developed for specific datasets or within specific industry contexts. Of all the reviewed studies, only [46] outlines an AAS-ML-based approach that can standardize the integration of DT and ML for industrial asset management; the work demonstrates the practical application of the AI AAS through a use case from industry, while noting the significance of using AI and ML algorithms in the manufacturing sector. The use case processes images in real time through a neural network that identifies automated guided vehicles and humans, constructing a solid pipeline that allows the model to adapt to changing environmental conditions so that accuracy is kept at a high level. The presented method combines DT notions with AAS to form a standardized, machine-readable scheme. The AI AAS extends the AAS with submodels for datasets, learning algorithms, and operational conditions for the purposes of improving the lifecycle management, model traceability, and interoperability of Industry 4.0 systems.
In this regard, the role of standardization by the IDTA is crucial in providing a consistent and interoperable integration of AI and DT technologies throughout the industrial ecosystem. While the approaches we reviewed exemplified the possibility of data-enabled asset management, many of them were not constructed in a uniform manner, which may limit their replication and integration across multiple systems and domains. Standardization provides a cohesive framework in support of the semantic interoperability, traceability, and modular reuse of ML components, allows for scalable and reusable solutions, and enables the reliable deployment of DT and ML across heterogeneous industrial contexts.

4. Example of Use: Representing ML Models with AAS for an Industrial Control Valve

Control valves are essential components in industrial process systems, designed to modulate the flow of fluids such as liquids, gases, or steam within a given process. They operate by automatically adjusting their internal openings in response to control signals, allowing for the precise regulation of process variables such as flow rate, pressure, temperature, and level.

4.1. Control Valve AAS

Figure 4 presents a visual representation of the control valve in the AAS Package Explorer [47], a tool used for viewing and editing AAS models. The valve is associated with a Submodel instance named TechnicalData, which includes key properties for its operation and identification within the industrial process.
This visualization shows the hierarchical structure of the submodels, and the data elements that comprise the asset’s digital description. The valve’s AAS structure includes general information, such as the Tag (PV-710703J), display name (PV-710703J122), manufacturer (WIKA), service type (fluid mixing), plant location (UFD), and operating condition (normally closed), and specifies additional properties that represent the functional and maintainability parameters of the valve, including a flow coefficient, a nominal diameter, a position offset, and a mean time to repair (MTTR), measured in hours. The complete Control Valve AAS in JSON format can be found in [48].
The main challenge lies in keeping these process variables within their operating ranges in the presence of disturbances or variations in system conditions. Without automatic regulation, continuous processes struggle to remain stable, efficient, and high-quality. Accordingly, an AAS-represented ML model that performs fault prediction in industrial valves is used as the example.
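Before turning to the ML submodels, the snippet below hand-writes a simplified, AAS-style JSON fragment for the TechnicalData submodel shown in Figure 4, containing the properties listed above. The identifier, serialization details, and numeric values are placeholders; the complete, authoritative Control Valve AAS in JSON format is available in [48].

```python
import json

# Simplified approximation of the TechnicalData submodel in an AAS JSON style.
# Identifier and property values are placeholders; see [48] for the real file.
technical_data = {
    "modelType": "Submodel",
    "idShort": "TechnicalData",
    "id": "https://example.com/ids/sm/PV-710703J/TechnicalData",
    "submodelElements": [
        {"modelType": "Property", "idShort": "Tag", "valueType": "xs:string", "value": "PV-710703J"},
        {"modelType": "Property", "idShort": "Manufacturer", "valueType": "xs:string", "value": "WIKA"},
        {"modelType": "Property", "idShort": "FlowCoefficient", "valueType": "xs:double", "value": "0.0"},
        {"modelType": "Property", "idShort": "NominalDiameter", "valueType": "xs:double", "value": "0.0"},
        {"modelType": "Property", "idShort": "PositionOffset", "valueType": "xs:double", "value": "0.0"},
        {"modelType": "Property", "idShort": "MTTR", "valueType": "xs:double", "value": "0.0"},  # hours
    ],
}
print(json.dumps(technical_data, indent=2))
```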

4.2. IDTA Specifications

Figure 5 illustrates the four submodels required to implement a machine learning (ML) model within the AAS according to the IDTA specifications [36], presented in UML class diagram format.
In addition to the AAS header and the Nameplate submodel, which provide the main information about the ControlValve, the figure also includes the AIModelNameplate, AIDataset, AIDeployment, and IntelligentInformationUse submodels. A brief explanation of the purpose and role of each submodel is provided as follows:
  • AIModelNameplate (IDTA-02060-1.0): This submodel specifies the identity of an AI model, in the same way as a physical nameplate, to explicitly present essential metadata. This promotes transparency and improves communication about the model, explaining what it can and cannot do, how it was created, and what its boundaries are, all of which are integral to interoperability and regulatory compliance.
  • AIDataset (IDTA-02058-1.0): The submodel presents the data that was used in the development or operation of an AI system, as well as technical details regarding the dataset, its origin, and its restrictions. It enables data provenance, ensures the re-usability of data, allows for data quality checks for training, validation, and testing; these are all crucial aspects of the documentation and governance of AI models.
  • AIDeployment (IDTA-02059-1.0): This submodel determines the structure and properties of the AI model to be deployed in an industrial environment, making it more applicable for real-world use. It supports the use of these models in real applications and covers the final stage of the AI lifecycle.
  • IntelligentInformationUse (IDTA-02063-1.0): This submodel defines the framework used to deliver intelligent information for industrial assets. This comprises more than just traditional manuals, using multimedia content, operational context, and dynamic data to provide a safety-oriented experience for the user. By embedding contextual, personalized, and on-demand information, the goal is to improve the effectiveness and safety of asset operations.

4.3. AAS ML Model for Control Valve Prediction

Since the main objective of this example of use is to highlight the implementation of the ML model, only the AIModelNameplate and AIDataset submodels, along with their key SMCs, will be described, focusing on the elements most relevant to machine learning.

4.3.1. AIModelNameplate Submodel

As previously introduced, the AIModelNameplate Submodel standardizes key metadata for machine learning models within the AAS. It enables integration, identification, and lifecycle management by describing the model’s structure, training, and datasets. This Submodel is essential for representing a machine learning model as a DT asset. Figure 6, Figure 7, Figure 8 and Figure 9 show the class diagram of its main SubmodelElementCollection (SMC) elements, filled with ControlValve data.
When discussing the AIModelNameplate submodel (Figure 6), in addition to its identification properties (URIOfTheProduct, Version, and ContactInformation), it is important to highlight its Storage and KindOfLearning properties. The Storage property specifies the location of the model file, which can be either a local path, as shown in the fictional example, or a remote server path. The KindOfLearning property describes the machine learning approach applied. In this case, since the task is fault prediction using a classification algorithm, the learning type is supervised, as classification algorithms fall under supervised learning.
Since the AAS structure was designed to represent a machine learning model, it is essential to highlight both the Inputs and Outputs SMCs (Figure 7 and Figure 8). Within the Inputs, the property KindOfInput specifies the type of input data; for this use case, it is a numeric array composed of double features, since the features are measurements. The Preprocessing property refers to the preprocessing pipeline, which is provided as an external file. In this case, it is linked to a file named “model_pipeline”. If no preprocessing is applied or no file is provided, this property remains empty.
The Dimension_N property specifies the number of input dimensions (features) used by the model. In this example, four input features are defined, as follows: flow_coefficient, nominal_diameter, position_offset, and MTTR. Each of these features is further described through the Information property, indicating the specific function associated with each one. The Size property for each input dimension is fixed at 1. For the Outputs SMC, the Size property follows the same semantics as the Inputs, and is also set to 1. The Result property defines the model’s output classes: “Failure” and “No Failure”.
As briefly explained in Section 2.2, the model training produces a result that, in general terms, is described by the Outputs. In technical terms, the model’s predictive performance can be expressed through its accuracy, which indicates the proportion of correct predictions to the total number of predictions made. In the TrainingResults SMC, the property PredictionAccuracy represents the model’s performance; in this case, the model predicted faults with an accuracy of 92% (0.92).
The AITypeSpecification provides details about the classification algorithm used. The property Type specifies the algorithm name, which, in this case, was a Multilayer Perceptron (MLP). The Hyperparameter property lists key algorithm parameters: learning rate = 0.001; epochs = 50; and batch_size = 50. The TransferLearning SubmodelElementCollection (SMC) contains information related to transfer learning, if applied. For illustration purposes, its properties were filled with “n/a”. According to the specification, if no transfer learning data is available, the SMC is not required.
The Details section provides technical information about the implementation of the model, including its programming language and environment. The property FileExtension indicates the file format of the model archive (.pth), while AIFramework specifies the framework used in this case, “PyTorch 2.7”. The ProgramLanguage property shows that the model was implemented in Python version 3.9, and the Requirements property points to a file listing the software dependencies, here named “requirement.txt”.
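Putting these nameplate entries together, the sketch below shows one way the described MLP could look in PyTorch, with the four input features, the two output classes, and a model file saved with the .pth extension referenced by Storage and FileExtension. The hidden-layer width, file name, and surrounding code are illustrative assumptions, not the authors' actual implementation.

```python
import torch
from torch import nn


# Illustrative MLP consistent with the AIModelNameplate metadata: 4 input features
# (flow_coefficient, nominal_diameter, position_offset, MTTR) and 2 output classes
# ("Failure", "No Failure"). The hidden width of 32 is an assumption.
class ValveFaultMLP(nn.Module):
    def __init__(self, n_features: int = 4, n_classes: int = 2, hidden: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, hidden),
            nn.ReLU(),
            nn.Linear(hidden, n_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)


model = ValveFaultMLP()
# Hyperparameters listed in AITypeSpecification: learning rate 0.001 (epochs 50,
# batch size 50 would govern the training loop itself).
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)

# After training, the weights are stored in the file referenced by the Storage
# property; FileExtension records the ".pth" format. The file name is hypothetical.
torch.save(model.state_dict(), "control_valve_fault_mlp.pth")
```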

4.3.2. AIDataset Submodel

As mentioned, the AIDataset submodel summarizes a dataset used in an AI system, detailing its technical characteristics, origin, and limitations. Figure 10 presents the submodel and its main SMCs.
As in the AIModelNameplate Submodel, the main information in the AIDataset Submodel is structured as a set of properties, namely URIOfTheProduct, Version, and ContactInformation. The two most relevant SMCs in this submodel are Labeled and SizeInformation. The SizeInformation SMC provides details on the number of elements in the dataset and its subsets. The property CompleteSize refers to the total number of samples, which in this case is 12,000. The property TrainSize corresponds to the number of training samples (7200), ValSize indicates the number of samples in the validation set (1200), and TestSize refers to the number of test samples (3600). The SplitRatio property expresses the proportion used for dividing the dataset into training, validation, and test sets, “60:10:30”, meaning that 60% of the data was used for training, 10% for validation, and 30% for testing.
The Classification SMC (Figure 11) contains metadata and information about the labeled data used in the prediction task, which, in this case, involves the application of a classification algorithm. The property NumberLabels specifies the number of labels considered in the classification problem. According to the specification, a maximum of two labels is allowed, which characterizes the task as a binary classification problem. In such cases, the model predicts between two classes, typically “yes” or “no”, or, equivalently, “1” or “0”, indicating the presence or absence of a specific condition (e.g., a fault). The property Balance indicates the proportion of positive and negative samples in the dataset. In this case, 3000 samples were labeled as fault (positive) and 9000 as no fault (negative), reflecting a class imbalance. The AnnotationFile property describes the format of the dataset file containing the labels. In this case, the dataset was stored in a CSV (Comma-Separated Values) file.
The Labels and SingleFiles elements are themselves submodel element collections (SMCs). The ExampleSingleFile SMC describes a single dataset element, i.e., one data sample. The Name property specifies the name of the file in which this sample is stored, and the AnnotationFile property again indicates the file format (CSV). The Labels SMC provides metadata about the classification labels. The property NumberLabels was set to 2, and the Labels property defines the names of the prediction classes: “FailurePredictionYes” and “FailurePredictionNo”.
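As a quick consistency check, the short snippet below reproduces the SizeInformation and Balance figures described above from the split ratio and sample counts.

```python
# Reproducing the AIDataset figures described above.
complete_size = 12_000
split_ratio = (60, 10, 30)                      # SplitRatio "60:10:30"

train_size, val_size, test_size = (complete_size * r // 100 for r in split_ratio)
print(train_size, val_size, test_size)          # -> 7200 1200 3600

positives, negatives = 3_000, 9_000             # Balance: fault vs. no-fault samples
print(f"{positives / complete_size:.0%} positive, "
      f"{negatives / complete_size:.0%} negative")   # -> 25% positive, 75% negative
```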

5. Conclusions

The increasing digitalization and intelligent automation in Industry 4.0 create a strong requirement for solid asset management frameworks, especially for CPSs. Digital Twins function as operational representations of physical assets, and they require ongoing maintenance, validation, and governance. The AAS functions as a standard interface that enables both physical and digital assets to communicate with each other while providing traceability and lifecycle management capabilities. Applied to software models, the AAS helps maintain DT reliability through metadata integration, version control, provenance tracking, and performance metric management.
The essential role of Machine Learning (ML) in Industry 4.0 ecosystems has grown because organizations need to analyze large operational data sets while enabling predictive and adaptive behaviors in CPS and DTs. Nevertheless, the integration of ML models into the Asset Administration Shell (AAS) framework still faces substantial challenges. The IDTA achieved a major advancement through the publication of official specifications for ML submodels during February–March 2025, yet a critical gap persists. Instantiating ML submodels within the AAS remains challenging because practitioners lack clear guidance on deriving the required asset information, defining the corresponding AAS elements, and interpreting the ML-specific terminology used in the IDTA submodels. These barriers hinder interoperability as well as the reuse of, and trust in, the components of standardized digital infrastructures.
The purpose of this work is to present the key concepts related to ML in a clear and accessible manner, aiming to support the practical use of AAS submodels for ML within industrial contexts. To this end, we used a control valve, a common industrial asset, as an example and applied the official IDTA specifications to demonstrate how each submodel and its constituent elements align with the components of an ML model. Structural elements, parameters, and hyperparameters are all mapped to standardized AAS properties. By basing the discussion on a practical example, this study attempts to connect the theoretical concepts of machine learning with their functional application in DT systems, furthering the overall goal of enhancing the transparency, interoperability, and compliance of artificial intelligence integration in I4.0 standards.

Author Contributions

Conceptualization, J.G.M., F.L.M., P.L.F.F.d.M., G.B.P.L., D.C.d.S., D.R.C.S. and L.A.G.; methodology, J.G.M., F.L.M., P.L.F.F.d.M., G.B.P.L., D.C.d.S., D.R.C.S. and L.A.G.; validation, J.G.M., F.L.M., P.L.F.F.d.M., G.B.P.L., D.C.d.S., D.R.C.S. and L.A.G.; formal analysis, J.G.M., F.L.M., P.L.F.F.d.M., G.B.P.L., D.C.d.S., D.R.C.S. and L.A.G.; investigation, J.G.M., F.L.M., P.L.F.F.d.M., G.B.P.L., D.C.d.S., D.R.C.S. and L.A.G.; resources, J.G.M., F.L.M., P.L.F.F.d.M., G.B.P.L., D.C.d.S., D.R.C.S. and L.A.G.; writing—original draft preparation, J.G.M., F.L.M., P.L.F.F.d.M., G.B.P.L., D.C.d.S., D.R.C.S. and L.A.G.; writing—review and editing, J.G.M., F.L.M., P.L.F.F.d.M., G.B.P.L., D.C.d.S., D.R.C.S. and L.A.G.; supervision, J.G.M., F.L.M., P.L.F.F.d.M., G.B.P.L., D.C.d.S., D.R.C.S. and L.A.G.; project administration, J.G.M., F.L.M., P.L.F.F.d.M., G.B.P.L., D.C.d.S., D.R.C.S. and L.A.G. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The data used in this work, as well as the code developed to perform the example of use, are publicly available at: https://gist.github.com/FelipeLM1/eb0a30a63f0fa3fe6553b6c2098ea076 (accessed on 4 May 2025).

Conflicts of Interest

The authors declare no conflicts of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

Abbreviations

The following abbreviations are used in this manuscript:
AM	Asset Management
ISO	International Organization for Standardization
CPS	Cyber–Physical Systems
AAS	Asset Administration Shell
DT	Digital Twin
ML	Machine Learning
RAMI	Reference Architecture Model for Industry 4.0
I4.0	Industry 4.0
OPC-UA	Open Platform Communication–Unified Architecture
SMC	Submodel Element Collection
IDTA	Industrial Digital Twin Association
NN	Neural Networks

References

  1. ISO 55000:2014; Asset Management—Overview, Principles and Terminology. Technical report; International Organization for Standardization: Geneva, Switzerland, 2014.
  2. Somia Alfatih, M.; Leong, M.S.; Hee, L.M. Definition of engineering asset management: A review. Appl. Mech. Mater. 2015, 773, 794–798. [Google Scholar] [CrossRef]
  3. Lee, E.A. Cyber-physical systems—Are computing foundations adequate. In Proceedings of the Position Paper for NSF Workshop on Cyber-Physical Systems: Research Motivation, Techniques and Roadmap, Austin, TX, USA, 16–17 October 2006; Volume 2, pp. 1–9. [Google Scholar]
  4. Lee, E.A. Cyber physical systems: Design challenges. In Proceedings of the 2008 11th IEEE International Symposium on Object and Component-Oriented Real-Time Distributed Computing (ISORC), Orlando, FL, USA, 5–7 May 2008; pp. 363–369. [Google Scholar]
  5. Biard, G.; Nour, G.A. Industry 4.0 Contribution to Asset Management in the Electrical Industry. Sustainability 2021, 13, 369. [Google Scholar] [CrossRef]
  6. Zhang, K.; Shi, Y.; Karnouskos, S.; Sauter, T.; Fang, H.; Colombo, A.W. Advancements in industrial cyber-physical systems: An overview and perspectives. IEEE Trans. Ind. Inform. 2022, 19, 716–729. [Google Scholar] [CrossRef]
  7. Tao, F.; Zhang, H.; Liu, A.; Nee, A.Y. Digital twin in industry: State-of-the-art. IEEE Trans. Ind. Inform. 2018, 15, 2405–2415. [Google Scholar] [CrossRef]
  8. Wang, S.; Wan, J.; Li, D.; Zhang, C. Implementing smart factory of industrie 4.0: An outlook. Int. J. Distrib. Sens. Netw. 2016, 12, 3159805. [Google Scholar] [CrossRef]
  9. Bousdekis, A.; Lepenioti, K.; Apostolou, D.; Mentzas, G. A review of data-driven decision-making methods for industry 4.0 maintenance applications. Electronics 2021, 10, 828. [Google Scholar] [CrossRef]
  10. Miragliotta, G.; Sianesi, A.; Convertini, E.; Distante, R. Data driven management in Industry 4.0: A method to measure Data Productivity. IFAC-PapersOnLine 2018, 51, 19–24. [Google Scholar] [CrossRef]
  11. Zhao, R.; Yan, R.; Chen, Z.; Mao, K.; Wang, P.; Gao, R.X. Deep learning and its applications to machine health monitoring. Mech. Syst. Signal Process. 2019, 115, 213–237. [Google Scholar] [CrossRef]
  12. Lee, J.; Bagheri, B.; Jin, C. Introduction to cyber manufacturing. Manuf. Lett. 2016, 8, 11–15. [Google Scholar] [CrossRef]
  13. Idowu, S.; Strüber, D.; Berger, T. Asset management in machine learning: A survey. In Proceedings of the 2021 IEEE/ACM 43rd International Conference on Software Engineering: Software Engineering in Practice (ICSE-SEIP), Madrid, Spain, 25–28 May 2021; pp. 51–60. [Google Scholar]
  14. Idowu, S.; Strüber, D.; Berger, T. Asset management in machine learning: State-of-research and state-of-practice. ACM Comput. Surv. 2022, 55, 1–35. [Google Scholar] [CrossRef]
  15. Zhao, Z.; Chen, Y.; Bangash, A.A.; Adams, B.; Hassan, A.E. An empirical study of challenges in machine learning asset management. Empir. Softw. Eng. 2024, 29, 98. [Google Scholar] [CrossRef]
  16. Amershi, S.; Begel, A.; Bird, C.; DeLine, R.; Gall, H.; Kamar, E.; Nagappan, N.; Nushi, B.; Zimmermann, T. Software engineering for machine learning: A case study. In Proceedings of the 2019 IEEE/ACM 41st International Conference on Software Engineering: Software Engineering in Practice (ICSE-SEIP), Montreal, QC, Canada, 25–29 May 2019; pp. 291–300. [Google Scholar]
  17. Tao, F.; Xiao, B.; Qi, Q.; Cheng, J.; Ji, P. Digital twin modeling. J. Manuf. Syst. 2022, 64, 372–389. [Google Scholar] [CrossRef]
  18. Cavalieri, S.; Gambadoro, S. Digital Twin of a Water Supply System Using the Asset Administration Shell. Sensors 2024, 24, 1360. [Google Scholar] [CrossRef]
  19. Jones, D.; Snider, C.; Nassehi, A.; Yon, J.; Hicks, B. Characterising the Digital Twin: A systematic literature review. CIRP J. Manuf. Sci. Technol. 2020, 29, 36–52. [Google Scholar] [CrossRef]
  20. IDTA. Specification of the Asset Administration Shell—Part 1: Metamodel; Technical Report; Industrial Digital Twin Association: Frankfurt am Main, Germany, 2024. [Google Scholar]
  21. Resman, M.; Pipan, M.; Šimic, M.; Herakovič, N. A new architecture model for smart manufacturing: A performance analysis and comparison with the RAMI 4.0 reference model. Adv. Prod. Eng. Manag. 2019, 14, 153–165. [Google Scholar] [CrossRef]
  22. Amelete, S.; Vaillancourt, R.; Abdul-Nour, G.; Gauthier, F. Asset management, industry 4.0 and maintenance in electrical energy distribution. In Proceedings of the Advances in Production Management Systems. Artificial Intelligence for Sustainable and Resilient Production Systems: IFIP WG 5.7 International Conference, APMS 2021, Nantes, France, 5–9 September 2021; Proceedings, Part V; Springer: Berlin/Heidelberg, Germany, 2021; pp. 199–208. [Google Scholar]
  23. IDTA 02058-1-0; Artificial Intelligence Dataset. Industrial Digital Twin Association: Frankfurt am Main, Germany, 2025.
  24. IDTA 02059-1-0; Artificial Intelligence Deployment. Industrial Digital Twin Association: Frankfurt am Main, Germany, 2025.
  25. Industrial Digital Twin Association. IDTA 02060-1-0 Artificial Intelligence Model Nameplate. Submodel Template of the Asset Administration Shell. 2025. Available online: https://industrialdigitaltwin.org/wp-content/uploads/2025/02/IDTA-02060-1-0_Submodel_AIModelNameplate.pdf (accessed on 4 May 2025).
  26. IDTA 02063-1-0; Intelligent Information for Use. Industrial Digital Twin Association: Frankfurt am Main, Germany, 2025.
  27. Krishnamenon, M.; Tuladhar, R.; Azghadi, M.R.; Loughran, J.G.; Pandey, G. Digital Twins and their significance in Engineering Asset Management. In Proceedings of the 2021 International Conference on Maintenance and Intelligent Asset Management (ICMIAM), Ballarat, Australia, 12–15 December 2021; pp. 1–6. [Google Scholar]
  28. Cirillo, F.; Solmaz, G.; Berz, E.L.; Bauer, M.; Cheng, B.; Kovacs, E. A standard-based open source IoT platform: FIWARE. IEEE Internet Things Mag. 2019, 2, 12–18. [Google Scholar] [CrossRef]
  29. Bader, S.R.; Maleshkova, M. The semantic asset administration shell. In Proceedings of the Semantic Systems. The Power of AI and Knowledge Graphs: 15th International Conference, SEMANTiCS 2019, Karlsruhe, Germany, 9–12 September 2019; Proceedings 15; Springer: Berlin/Heidelberg, Germany, 2019; pp. 159–174. [Google Scholar]
  30. Tantik, E.; Anderl, R. Integrated data model and structure for the asset administration shell in industrie 4.0. Procedia Cirp 2017, 60, 86–91. [Google Scholar] [CrossRef]
  31. Sakurada, L.; Leitao, P.; De la Prieta, F. Towards the digitization using asset administration shells. In Proceedings of the IECON 2021—47th Annual Conference of the IEEE Industrial Electronics Society, Toronto, ON, Canada, 13–16 October 2021; pp. 1–6. [Google Scholar]
  32. Rongen, S.; Nikolova, N.; van der Pas, M. Modelling with AAS and RDF in Industry 4.0. Comput. Ind. 2023, 148, 103910. [Google Scholar] [CrossRef]
  33. Ochoa, W.; Larrinaga, F.; Pérez, A. Architecture for managing AAS-based business processes. Procedia Comput. Sci. 2023, 217, 217–226. [Google Scholar] [CrossRef]
  34. Angelopoulos, A.; Michailidis, E.T.; Nomikos, N.; Trakadas, P.; Hatziefremidis, A.; Voliotis, S.; Zahariadis, T. Tackling faults in the industry 4.0 era—A survey of machine-learning solutions and key aspects. Sensors 2019, 20, 109. [Google Scholar] [CrossRef]
  35. Yuan, C.; Agaian, S.S. A comprehensive review of binary neural network. Artif. Intell. Rev. 2023, 56, 12949–13013. [Google Scholar] [CrossRef]
  36. IDTA. Downloads—IDTA—industrialdigitaltwin.org. 2025. Available online: https://industrialdigitaltwin.org/en/content-hub/downloads (accessed on 4 May 2025).
  37. Brereton, R.G.; Lloyd, G.R. Support vector machines for classification and regression. Analyst 2010, 135, 230–267. [Google Scholar] [CrossRef]
  38. Goodfellow, I.; Bengio, Y.; Courville, A. Deep Learning; MIT Press: Cambridge, MA, USA, 2016; Volume 1. [Google Scholar]
  39. Bishop, C.M. Neural Networks for Pattern Recognition; Oxford University Press: Oxford, UK, 1995. [Google Scholar]
  40. Islam, M.; Chen, G.; Jin, S. An overview of neural network. Am. J. Neural Netw. Appl. 2019, 5, 7–11. [Google Scholar] [CrossRef]
  41. Bishop, C.M.; Nasrabadi, N.M. Pattern Recognition and Machine Learning; Springer: Berlin/Heidelberg, Germany, 2006; Volume 4. [Google Scholar]
  42. Martínez-Galán, P.; Crespo, A.; de la Fuente, A.; Guillén, A. A new model to compare intelligent asset management platforms (IAMP). IFAC-PapersOnLine 2020, 53, 13–18. [Google Scholar] [CrossRef]
  43. Teoh, Y.K.; Gill, S.S.; Parlikad, A.K. IoT and Fog-Computing-Based Predictive Maintenance Model for Effective Asset Management in Industry 4.0 Using Machine Learning. IEEE Internet Things J. 2023, 10, 2087–2094. [Google Scholar] [CrossRef]
  44. Bao, Y.; Zhang, X.; Zhou, T.; Chen, Z.; Ming, X. Application of industrial internet for equipment asset management in social digitalization platform based on system engineering using fuzzy DEMATEL-TOPSIS. Machines 2022, 10, 1137. [Google Scholar] [CrossRef]
  45. Sleiti, A.K.; Kapat, J.S.; Vesely, L. Digital twin in energy industry: Proposed robust digital twin for power plant and other complex capital-intensive large engineering systems. Energy Rep. 2022, 8, 3704–3726. [Google Scholar] [CrossRef]
  46. Rauh, L.; Reichardt, M.; Schotten, H.D. AI asset management: A case study with the asset administration shell (AAS). In Proceedings of the 2022 IEEE 27th International Conference on Emerging Technologies and Factory Automation (ETFA), Stuttgart, Germany, 6–9 September 2022; pp. 1–8. [Google Scholar]
  47. Open Source. AASX Package Explorer, GitHub Repository. 2025. Available online: https://github.com/eclipse-aaspe/package-explorer (accessed on 4 May 2025).
  48. Lima, F. Control Valve AAS Example, GitHub Gist. 2025. Available online: https://gist.github.com/FelipeLM1/eb0a30a63f0fa3fe6553b6c2098ea076 (accessed on 4 May 2025).
Figure 1. Structure of an AAS Metamodel. Source: Developed by the authors.
Figure 2. Class hierarchy relevant to submodel structure. The 0..* notation indicates that a class may contain zero or more components of another class. Source: Developed by the authors.
Figure 3. General architecture of a neural network, comprising: (1) the input layer, (2) the hidden layers, and (3) the output layer. Source: Developed by the authors.
Figure 4. Control valve asset administration shell as visualized in the AAS Package Explorer.
Figure 5. Machine learning AAS submodels as defined by the IDTA specifications.
Figure 6. AIModelNameplate submodel for ControlValve fault prediction.
Figure 7. Inputs SMC for ControlValve fault prediction.
Figure 8. Outputs SMC for ControlValve fault prediction.
Figure 9. TrainingResults, AITypeSpecification, and Details SMCs for ControlValve fault prediction.
Figure 10. AIDataset submodel for ControlValve fault prediction.
Figure 11. AIDataset SMC for ControlValve fault prediction.