Article

Priority/Demand-Based Resource Management with Intelligent O-RAN for Energy-Aware Industrial Internet of Things

Seyha Ros, Seungwoo Kang, Inseok Song, Geonho Cha, Prohim Tam and Seokhoon Kim
1 Department of Software Convergence, Soonchunhyang University, Asan 31538, Republic of Korea
2 School of Digital Technologies, American University of Phnom Penh, Phnom Penh 12106, Cambodia
3 Department of Computer Software Engineering, Soonchunhyang University, Asan 31538, Republic of Korea
* Author to whom correspondence should be addressed.
Processes 2024, 12(12), 2674; https://doi.org/10.3390/pr12122674
Submission received: 17 October 2024 / Revised: 15 November 2024 / Accepted: 25 November 2024 / Published: 27 November 2024

Abstract

The last decade has witnessed explosive growth of the internet of things (IoT) and its ubiquitous sensing and computation services, with the industrial IoT (IIoT) extending these capabilities to manufacturing environments. IIoT devices, however, are constrained in computation capacity and battery life. Mobile edge computing (MEC) is therefore a key paradigm for provisioning computing resources close to devices and reducing communication latency. Furthermore, the open radio access network (O-RAN) is a new architecture that can host MEC servers and offer a provisioning framework for improving energy efficiency and relieving congestion for IIoT traffic. However, the dynamics of resource computation and the continuous task generation of IIoT devices challenge management and orchestration (MANO) and energy efficiency. In this article, we investigate priority- and demand-aware resource management. To minimize the long-term average delay and the cost of computation-intensive tasks, the problem is formulated as a Markov decision process (MDP), and deep reinforcement learning (DRL) is applied to obtain the optimal handling policy for MEC-enabled O-RAN architectures. Specifically, we propose an MDP-assisted deep Q-network-based priority/demand resource management scheme, named DQG-PD, which jointly optimizes computation and energy utilization for each service request. The deep Q-network (DQN) is split into online and target networks to better adapt to the dynamic IIoT environment. Our experiments show that the proposed scheme outperforms reference schemes in terms of resources, cost, energy, reliability, and average service completion ratio.

1. Introduction

With the continued advancement of 5G and the industrial internet of things (IIoT), related applications have gained increasing attention. The number of IoT devices worldwide is expected to grow from 15.9 billion in 2023 to more than 32.1 billion in 2030 [1]. As one of the three 5G usage scenarios, massive machine-type communication (mMTC) can support the rapid growth of IIoT devices [2], and efficient networking for IIoT enables economical resource management for emerging service-driven industries [3]. It is considered one of the vital driving factors for achieving Industry 5.0 [4,5]. IIoT devices are exposed to networking and state-of-the-art features that increase the volume of demanding resource requests. IIoT devices (e.g., sensors and actuators) [5] access and connect to network servers in the core via a fronthaul physical network [6]. IIoT therefore needs a complementary relationship between network computing and communication to satisfy quality of service (QoS) and quality of experience (QoE) requirements. IIoT devices continuously generate different types of tasks, with large demands on processing time, computation resources, battery lifetime, and efficiency [7].
At the core of utility resources, mobile edge computing (MEC) is typically leveraged to offer sufficient resources for the computing tasks of IIoT devices [8]. The open radio access network (O-RAN) moves away from the conventional monolithic structure by disaggregating base station functions into distinct components [9,10]. O-RAN offers a new architecture and standard that supports multi-vendor form factors and allows multiple technologies to be integrated into 5G and next-generation antennas. Meanwhile, MEC can be deployed within O-RAN [9,10,11,12], which brings computation close to the users and addresses the energy-efficiency demands of modern telecommunication and network systems. Orchestrating O-RAN with the MEC architecture ensures service computation delivery and flexible network connectivity, an effective way to break bottlenecks and improve latency and packet delivery ratios. Within MEC, resource provisioning is the crucial technique for adjusting computing, which remains one of the main challenges. Software-defined networking (SDN) and network function virtualization (NFV) achieve the network targets by providing virtualization and softwarization [13,14,15]. MEC-enabled SDN and NFV enhance the potential computing services for IIoT devices, providing low-latency computation capabilities and ensuring that the network remains scalable and reliable [16,17]. MEC also improves bandwidth utilization to enhance the user experience and overall network performance. However, tasks generated by IIoT devices incur additional transmissions to the controller, which increases queuing delay and energy consumption. Additionally, properly prioritizing tasks and managing demanded resources at the MEC server strongly influences task execution.
Therefore, studies have investigated MEC in terms of managing communication and computing resource capabilities. MEC offers affordable resource utilization and computing capabilities, which are suitable for handling offloading decisions, resource allocation, and resource prioritization. Despite flexible and dynamic resource management, increases in computation time and energy consumption remain critical impacts. Artificial intelligence-driven (AI-driven) IIoT devices thereby offer opportunities to customize intelligent edge computing with heterogeneous hardware, achieving good energy efficiency when processing specific AI-based tasks.
In this paper, we leverage deep reinforcement learning (DRL) to enhance network performance, optimize the long-term cumulative reward, and design an efficient, jointly optimal approach to priority-aware and demand-aware resource management. The system utilizes the deep Q-network (DQN) algorithm, which tackles the challenge of managing the system states and actions. The DQN selects optimal actions over a continuous state space, making it well suited to this dynamic environment. Tasks are prioritized by learning from a reward signal that accounts for successful task completion and efficient resource usage. As demands change, the DQN constantly re-evaluates the situation, adapting resource allocation to meet immediate needs while respecting long-term priorities. The DQN thus bridges priority and demand management, ensuring that critical tasks are completed while optimizing resource utilization for IIoT tasks. This paper's contributions are as follows:
  • We study efficient resource management, which enables O-RAN to provide support for multi-vendor and scalable deployment of MEC servers and to improve resource provision according to task demands and service priority.
  • Then, the problem of resource and energy minimization is transformed into a Markov decision process (MDP). We then design a novel distributed DRL-driven resource management policy in the proposed model, which jointly optimizes resources and priority/demand handling based on IIoT usage criteria.
  • Our proposed DQG-PD algorithm improves resource management efficiency and reduces task processing time and latency to enhance efficient resource awareness of IIoT applications.
  • We enhance network energy-efficiency optimization based on the DQN approach, which decouples learning into two stages (an online network and a target network) to stabilize long-term learning while enabling rapid adaptation to immediate demands.
  • Lastly, we conduct experiments to evaluate our approach and show that our network scenario outperforms the reference schemes.
In the rest of the paper, Section 2 reviews previous studies and the motivation for our work. Section 3 presents the problem formulation for optimal resource efficiency when computing and communicating IIoT device tasks, including the characteristics of different rewards and three network setting scenarios. Section 4 details the DQN-based priority/demand resource management. Simulation results and further analysis are presented in Section 5, and Section 6 concludes the paper.

2. Related Work

As industrial manufacturing systems are integrated with IoT, the IIoT is becoming increasingly complex [18]. In recent years, the proper management and orchestration of various resources for IIoT devices and the optimization of network performance have become a focus of research. Figure 1 illustrates the use of MEC-assisted O-RAN to enhance resource computation and accommodate IIoT devices. In addition, IIoT involves more network types than consumer IoT [19,20,21,22,23,24]. The major network types of IoT are WLAN and cellular networks, adopting Wi-Fi, Bluetooth, and 5G/B5G technologies; IIoT device networks further include low-power wide-area networks (LPWAN) and WPN [25,26]. Once MEC is enabled, the SDN/NFV controller ensures the network's hierarchical alignment of resource dynamics and flexibility for the next generation.

2.1. Energy Efficiency for IIoT

Energy must be properly utilized in computation and data transfer to ensure that network packets are delivered reliably. Existing studies convey approaches that prove this concept and provide techniques for concise deployment in network systems [27,28,29,30]. Energy consumption has three significant components: communication, computation, and sensing [31,32]. Since sensing is determined by the physical hardware design, we prioritize computation and communication overhead in this work.

2.2. Optimization Approaches Based on MEC for Virtualization in Energy Utilization

Algorithms that enhance energy efficiency in networking environments with edge cooperation have been investigated. In [33], the focus is on resource management in end-to-end wireless-powered MEC networks for devices. They addressed the optimization challenges posed by dynamic task arrivals and battery power fluctuations. By employing Lyapunov optimization and virtual queues, they transformed the long-term optimization problem into a deterministic per-time-slot drift-plus-penalty subproblem, making it more tractable and ensuring optimal performance in the long run. Moreover, in [34], they investigated the trade-off between energy efficiency and delay in multi-user wireless power systems. By incorporating wireless energy transfer into the MEC system, MEC enhances computing capabilities to prolong device battery life. Using Lyapunov optimization theory, they optimized network energy efficiency while ensuring network stability, available communication resources, and energy causality constraints. In [35], green content caching is based on user association mechanisms that aim to minimize content requests by enabling small cell networks while ensuring the QoS of mobile terminals.

2.3. DRL for MEC-Integrated O-RAN Resource Management

Both radio and computing resources play a crucial role in the performance of task offloading. Radio resources influence the data rate and energy consumption during the transmission process, while computing resources limit the computing time and energy consumption of tasks offloaded to the MEC server. Hence, ref. [36] proposes an architecture utilizing xApps, powered by machine learning algorithms that quickly identify the network traffic and intelligently allocate resources within the O-RAN architecture. Such a structure addresses the defects of static resource allocation in O-RAN by automatically adapting physical resource blocks (PRBs) to traffic load and QoS requirements, leading to better performance and more economical satisfaction for end users. On the other hand, in ref. [37], a reinforcement learning (RL) algorithm is adopted to allocate and efficiently manage PRBs by utilizing modulation and coding schemes based on traffic flow and channel quality key performance indicator (KPI) requirements. The authors of [38] investigated an mMTC scenario in 5G to troubleshoot IIoT services, ensuring efficient resource allocation in dynamic and complex environments through intelligent end-to-end self-organizing resource allocation with an asynchronous actor-critic DRL algorithm.
However, the above works did not consider user offloading to the MEC server with energy-efficient resource utilization for computation. This selection strategy cannot be ignored, because many IIoT devices transfer tasks to the MEC server at the same time with different resource demands. Moreover, minimizing energy consumption and maximizing energy efficiency in O-RAN is notably distinct from, and more challenging than, in a traditional RAN. To overcome the challenges related to energy efficiency for diverse IIoT resource types in the MEC server, we leverage O-RAN-enabled MEC to ensure resource awareness and priority on demand.

3. Problem Formulation and Objectives

We first formulate the resource management problem for the IIoT model and the slice types processed in the MEC server, and then describe resource management based on priority and demand capability to ensure efficient energy consumption.

3.1. IIoT Model and Slice Types

This section uses QoS class identifiers (QCIs) to determine the characteristics and requirements of the different settings, so that the network controller can divide and prioritize data flows among three slice types. QCIs from the 3GPP TS 23.203 V12.2.0 [39] specification can be employed to represent pertinent example services. Table 1 outlines the QCI index, resource type, priority level, packet delay budget (PDB), and packet error loss rate (PELR) for various smart industry scenarios. Regarding resource types, a guaranteed bit rate (GBR) ensures that end users receive a minimum bandwidth even during network congestion, which is critical for applications requiring consistent performance. Conversely, non-GBR offers best-effort service under normal conditions but does not guarantee that the requested bandwidth will be available during periods of high network usage; it suits applications where occasional delays are acceptable, such as video streaming or data uploads from environmental sensors. PDB and PELR serve as upper-bound thresholds, defining the maximum tolerable delay and packet loss rate between the IIoT device and the policy and charging enforcement function. These parameters are crucial for maintaining QoS in smart industrial applications, ensuring efficient energy utilization and resource demand capability in flexible and dynamic environments. Here, we describe three slice types of IIoT application use cases as follows (a minimal illustrative mapping is sketched after the list):
  • QCI 3 ensures that the communication infrastructure supports the reliable and timely exchange of data critical for the automation of industrial processes and real-time monitoring applications; such data drives automated control under the network environment's charging policy.
  • QCI 70 ensures that mission-critical data in IIoT environments receives the highest level of service quality, characterized by ultra-reliability, low latency, high priority, enhanced security, and dedicated bandwidth.
  • QCI 82 provides the resource capabilities for defining discrete automation, which involves controlling and monitoring manufacturing processes that handle individual parts or units, generally in environments such as assembly lines or robotics, where precision and real-time performance are crucial.
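To make the slice definitions above concrete, the sketch below encodes the three QCI profiles from Table 1 as a lookup structure that a scheduler could use to rank incoming tasks. This is only an illustrative aid: the class `QciProfile`, the `QCI_PROFILES` dictionary, and the `rank_tasks_by_priority` helper are our own assumptions and are not part of the paper's implementation.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class QciProfile:
    """QoS class identifier profile (values taken from Table 1)."""
    qci: int
    resource_type: str   # "GBR", "Non-GBR", or "Delay-critical GBR"
    priority: float      # lower value = higher scheduling priority
    pdb_ms: int          # packet delay budget in milliseconds
    pelr: float          # packet error loss rate upper bound

# Three IIoT slice types used in this work.
QCI_PROFILES = {
    3:  QciProfile(3,  "GBR",                3.0, 50,  1e-3),  # process automation/monitoring
    70: QciProfile(70, "Non-GBR",            5.5, 200, 1e-6),  # mission-critical data
    82: QciProfile(82, "Delay-critical GBR", 1.9, 10,  1e-4),  # discrete automation
}

def rank_tasks_by_priority(tasks):
    """Order tasks so the most priority- and delay-sensitive slices go first.

    `tasks` is an iterable of (task_id, qci) pairs; the ordering key
    (priority level first, then delay budget) is an illustrative choice.
    """
    return sorted(tasks, key=lambda t: (QCI_PROFILES[t[1]].priority,
                                        QCI_PROFILES[t[1]].pdb_ms))

if __name__ == "__main__":
    demo = [("temp-sensor", 3), ("safety-alarm", 70), ("robot-arm", 82)]
    print(rank_tasks_by_priority(demo))  # the QCI 82 task is served first
```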

3.2. Designing and Formulating Network Resource Management

Network resource management and priority use cases are derived from the standardization framework to support the implementation of industrial applications. Table 1 demonstrates the importance of each class's network sensitivity in the initial setting and thus gives examples for enhancing network performance and configuring the network management vectors.

3.3. Communication Model

We consider resource offloading over edge computing paradigms. The MEC is deployed in O-RAN and provides capability via wireless access points in cellular networks through wired connections. Accordingly, the network clusters communication and computation at the edge. Table 2 shows the system model notation used to formulate MEC resources and IIoT tasks. The set of IIoT tasks is denoted as $I = \{1, 2, 3, \dots, i\}$, $i \in I$, participating in time slots $T = \{1, 2, 3, \dots, t\}$, $t \in T$. Next, $N = \{1, 2, 3, \dots, n\}$, $n \in N$, denotes the set of IIoT devices, and $M = \{1, 2, 3, \dots, m\}$, $m \in M$, denotes the set of MEC servers used in our network and deployed in O-RAN. We assume a network scenario in which O-RAN is deployed at the base station and coordinated, and one or more O-RANs can cooperate with each other for downlink and uplink transmissions to the end devices. Downlink transmission is ignored in our work.
Decision variables:
$$S_i = \begin{cases} 1, & \text{offloading decision of IIoT task to the MEC server} \\ 0, & \text{otherwise} \end{cases}$$
According to Shannon's capacity formula, the transmission data rate $D_{n,m}$ of an IIoT device can be calculated as:
$$D_{n,m} = H_{n,m} \log_2\left(1 + \frac{B_n G_n}{\psi_n + \sum_{m=1}^{N} B_m G_m}\right),$$
where $H_{n,m}$ is the channel bandwidth and $B_n$ is the transmission power of the $n$-th device. $\psi_n$ is the background interference power, which includes wireless transmissions from other devices $\psi_i^0$ and the noise power $\psi_i^1$, expressed as $\psi_n = \psi_i^0 + \psi_i^1$. $G_m$ denotes the channel gains. As in Formula (1), if multiple devices simultaneously perform calculations and offload via the wireless access channel, significant interference and a reduction in the data transmission rate will occur. Thus, the constraint that the offloaded data size $z_{n,m}(t)$ must satisfy is:
$$z_{n,m}(t) \le D_{n,m}(t), \quad \forall n \in N$$
The transmission energy consumed by offloading tasks of IIoT device $i$ is given as follows:
$$\Delta_{T,i}(t) = \frac{B_n(t)\, z_{n,m}(t)}{D_{n,m}(t)}$$
In this communication model, many devices simultaneously offload tasks to the MEC via the wireless channel, which inevitably reduces data transmission rates. When the data rate of an IIoT device decreases, the backhaul link incurs higher energy consumption and longer transmission times for offloading tasks. Utilizing edge servers for computing tasks is still an improvement, as it avoids the long transmission delays associated with cloud computing.
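As a concrete illustration of Equations (1)–(3), the sketch below computes the achievable uplink rate for one device under interference and noise, and the resulting transmission energy for an offloaded payload. The function names mirror the notation in Table 2 but are our own; the numeric values are placeholders, not the simulation settings.

```python
import math

def transmission_rate(bandwidth_hz, tx_power_w, channel_gain,
                      interference_w, noise_w):
    """Shannon-capacity uplink rate D_{n,m} in bits/s (Equation (1))."""
    sinr = (tx_power_w * channel_gain) / (interference_w + noise_w)
    return bandwidth_hz * math.log2(1.0 + sinr)

def transmission_energy(tx_power_w, task_bits, rate_bps):
    """Energy spent sending `task_bits` at power `tx_power_w` (Equation (3))."""
    return tx_power_w * task_bits / rate_bps

if __name__ == "__main__":
    # Placeholder figures for a single IIoT device on a 20 MHz channel.
    rate = transmission_rate(bandwidth_hz=20e6, tx_power_w=0.2,
                             channel_gain=1e-6,
                             interference_w=1e-9, noise_w=1e-10)
    energy = transmission_energy(0.2, task_bits=5 * 8e6, rate_bps=rate)  # 5 MB task
    print(f"rate ≈ {rate / 1e6:.1f} Mbps, tx energy ≈ {energy:.4f} J")
```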

3.4. Offloading Model

In the task offloading model, the task data must first be transferred to the MEC server, i.e., computational tasks are transmitted before execution. Let $S_n$ be the data size of computation task $n$ sent to the MEC server. The offloading (transmission) time of task $n$ from the IIoT device can be expressed as:
$$CT^{off}_{n,m} = \frac{S_n}{D_{n,m}}$$
Energy consumption is also generated by computation tasks. Let $F_n(t)$ be the CPU computation rate of the IIoT device in time slot $t$, and $F$ the number of CPU cycles required per task. Then $CB_n(t)$, the computing task size of device $n$ in time slot $t$, is:
$$CB_n(t) = \frac{F_n(t)}{F}$$

3.5. Computation Model

In the computation model, our network system addresses MEC computation: a computing task generated by an IIoT device is offloaded to a MEC server, which provides the resource capability to execute it. Hence, the total computation time for task $n$ is calculated based on the processing required when it is offloaded to the MEC server.
  • MEC server execution:
$$X_{mec}^{n,m} = \frac{F_n(t)}{F_m}$$
  • Total completion time:
The total time required to complete the task includes both the transmission time and the execution time on the MEC server (a short numerical sketch follows the equation below).
$$CT_{mec}^{n,m} = \frac{S_n(t)}{D_{n,m}} + X_{mec}^{n,m}$$
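The following sketch combines Equations (4)–(7): upload time, MEC execution time, and the total task completion time. The helper names and the CPU-cycle and link-rate figures are illustrative assumptions only.

```python
def offload_time(task_size_bits, rate_bps):
    """CT^{off}_{n,m}: time to upload task n to MEC server m (Equation (4))."""
    return task_size_bits / rate_bps

def mec_execution_time(required_cycles, mec_cpu_hz):
    """X^{n,m}_{mec}: execution time of task n on MEC server m (Equation (6))."""
    return required_cycles / mec_cpu_hz

def total_completion_time(task_size_bits, rate_bps, required_cycles, mec_cpu_hz):
    """CT^{n,m}_{mec}: transmission plus MEC execution time (Equation (7))."""
    return (offload_time(task_size_bits, rate_bps)
            + mec_execution_time(required_cycles, mec_cpu_hz))

if __name__ == "__main__":
    # A 10 MB task needing 2e9 CPU cycles, offloaded over a 50 Mbps link
    # to a 10 GHz MEC server (all figures are placeholders).
    t = total_completion_time(10 * 8e6, 50e6, 2e9, 10e9)
    print(f"completion time ≈ {t:.3f} s")
```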

3.6. Objective Model on Resource Management

In our proposed approach, MEC is leveraged to assess the resource management and orchestration for prioritized IIoT tasks. MEC provides resource capability under stringent requirements. We aim to minimize resource and energy consumption by leveraging MEC to accommodate NFV, instantiating virtual machines (VMs) and adjusting and reconfiguring the utilized resource capacity. In addition, VMs minimize resource utilization over the MEC server, ensuring the efficiency of VNF deployments and enhancing computation. Here, $R_n$ denotes the resource requirement of task $n$, $P_v$ denotes the processing power required by VNF $v$ in the MEC server, and $U_v$ is the utilization of VNF $v$ (an illustrative evaluation sketch follows the constraints below).
$$\min_{[H_{n,m},\, S_i]} \sum_{m \in M} \sum_{i \in I} \sum_{n \in N} \frac{S_n^i \cdot R_n}{C_{mec}} + \sum_{v \in V} \frac{P_v \cdot U_v}{C_{mec}}$$
Subject to
$$\sum_{i \in I} \sum_{n \in N} S_n^i \cdot R_n + \sum_{v \in V} P_v \cdot U_v < X_{mec}, \quad \forall m \in M$$
$$\sum_{i \in I} \sum_{n \in N} S_n^i \cdot D_n < B_m, \quad \forall m \in M$$
$$\sum_{m \in M} S_n^i \cdot CT_{mec}^{n,m} < L_{max}, \quad \forall n \in N$$
$$\sum_{v \in V} P_v \cdot U_v < C_{max}, \quad \forall m \in M$$
$$E_{local}^i \cdot L_n^i + \sum_{m \in M} E_{off}^{n,m} \cdot S_n^i \le E_{max}, \quad \forall i \in I$$
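To clarify how a candidate offloading decision is scored against Equations (8)–(13), the sketch below evaluates the normalized task-plus-VNF objective and checks the capacity, bandwidth, latency, and energy constraints for one MEC server. The data class and function names are our own illustrative assumptions; in the paper, the MDP/DQN agent, not an exhaustive feasibility check like this, searches the decision space.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class OffloadDecision:
    resource_demand: float   # R_n, resource requirement of the offloaded task
    data_rate: float         # D_n, radio demand of the task
    completion_time: float   # CT^{n,m}_{mec} for the chosen server
    energy: float            # offloading energy of the task

def objective(decisions: List[OffloadDecision], vnf_loads: List[float],
              c_mec: float) -> float:
    """Normalized task-resource plus VNF-utilization cost (Equation (8))."""
    task_cost = sum(d.resource_demand for d in decisions) / c_mec
    vnf_cost = sum(vnf_loads) / c_mec
    return task_cost + vnf_cost

def feasible(decisions, vnf_loads, x_mec, b_m, l_max, c_max, e_max) -> bool:
    """Constraints (9)-(13) for a single MEC server, checked per time slot."""
    return (sum(d.resource_demand for d in decisions) + sum(vnf_loads) < x_mec
            and sum(d.data_rate for d in decisions) < b_m
            and all(d.completion_time < l_max for d in decisions)
            and sum(vnf_loads) < c_max
            and all(d.energy <= e_max for d in decisions))
```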

4. DQN-Based Priority/Demanding Resource Management

4.1. Markov Decision Process Elements

We formulate the problem as an MDP, defined by the tuple $(S, A, R, S')$, where $S$ denotes the set of possible states and $A$ denotes the action space. At time slot $t$, the agent observes the state $s(t)$ and selects an action. In our proposed system, we leverage deep Q-network-based priority/demand resource management (DQG-PD), developed to enhance the network policy in terms of resource management and efficiency. Our approach obtains the optimal network policy for efficient resource offloading on MEC servers, maximizing the energy efficiency of the O-RAN-enabled MEC in terms of resource capacity. To utilize the DRL approach, we define the state space, action space, and reward function of the reinforcement learning (RL) agent as follows:
(1) State space: in each time slot, each communication link and the MEC computation observe the network state from the environment. Let $S$ denote the state space. The current environment state includes $D_{n,m}(t)$, the measured data transmission rate between the IIoT device and the MEC server; the offloading status of each IIoT device, where $S_i = 1$ if the task is offloaded to the MEC server and 0 otherwise; and $CB_n(t)$, the computation task model of device $n$. As a result, the state is defined by the following parameters:
$$S(t) = \left\{D_{1,1}(t), D_{1,2}(t), \dots, D_{n,m}(t), CB_1(t), CB_2(t), \dots, CB_n(t), B_{1,1}(t), B_{1,2}(t), \dots, B_{n,m}(t), S_n^i, T\right\}$$
(2) Action space: the agent makes decisions based on the current state of the environment. The goal of the agent is to make the optimal decision that maximizes resource utilization (bandwidth and computation resources) while minimizing the overall average service delay and task execution time. The action $a(t) \in A$ at each time step $t$ considers offloading the $t$-th task ($1 < t \le N$) and allocating the resources (bandwidth and computation) for its execution on the MEC server. The action can be defined as:
$$a(t) = \left\{H_{n,m}, S_i, \Psi\right\}$$
where $H_{n,m}$ represents the channel bandwidth, $S_i$ is the task offloading selection, and $\Psi \in \{1, 2, \dots, \Upsilon\}$ indexes the chosen MEC server when $S_i = 1$; the task is executed locally when $S_i = 0$. The agent takes an action for the tasks in each time step and receives a reward from the environment. Note that in each decision epoch, an action also affects the next state $s'$ in the next time slot.
(3) Reward: RL aims to maximize the reward obtained from good actions. Our reward function is designed to reflect priority-aware resource management and energy efficiency. The reward function $r(t)$ is defined based on the task completion time (a compact code sketch of the state, action, and reward follows):
$$r(t) = CT_{mec}^{n,m} = \frac{S_n(t)}{D_{n,m}} + X_{mec}^{n,m}$$
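A minimal sketch of how the MDP elements above could be represented in code: the state is flattened into a numeric vector as in Equation (14), an action bundles the bandwidth share, the offloading flag, and the chosen MEC server as in Equation (15), and the reward follows Equation (16). The function names, the dictionary action encoding, and the sign convention (negating the completion time so that shorter delays yield higher rewards) are our own assumptions for illustration.

```python
import numpy as np

def build_state(rates, task_sizes, tx_powers, offload_flags, t):
    """Flatten D_{n,m}(t), CB_n(t), B_{n,m}(t), S_i and the slot index (Eq. (14))."""
    return np.concatenate([np.ravel(rates), np.ravel(task_sizes),
                           np.ravel(tx_powers), np.ravel(offload_flags), [t]])

def make_action(bandwidth_share, offload, mec_index):
    """a(t) = {H_{n,m}, S_i, Psi}: bandwidth, offload flag, target server (Eq. (15))."""
    return {"bandwidth": bandwidth_share, "offload": int(offload), "mec": mec_index}

def reward(task_size_bits, rate_bps, exec_time_s):
    """Negative completion time of Equation (16); lower delay gives a higher reward."""
    return -(task_size_bits / rate_bps + exec_time_s)
```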

4.2. DQN-Based Solutions

Our work provides flow execution by using the DQN algorithm to handle resource demand and prioritization on the MEC server. DQN is one of the most powerful tools for assisting network control and adapting resource estimates, and it offers a two-stage architecture (a target network and an online network). Figure 2 shows the network conditions, consisting of several IIoT clusters divided into three categories of resource-determining tasks. The proposed method uses the DQN with the SDN/NFV controller for interaction and abstracts the resource utilization from MEC resources when computing the IIoT tasks. A strategy chooses actions through long-term optimization, and the DQN maximizes the reward value through the selection of the optimal action. The controller gathers the state as in Equation (14) and selects the action $a(t) = \pi(s(t))$ to obtain the current reward $r(t)$, while $s(t)$ transitions to the next state; $\pi$ is the specific policy. The interaction with the environment proceeds based on the updated state, aiming to maximize the reward value as the process continues. The cumulative discounted reward over the time interval is calculated using $Q(s(t), a(t))$ as in the following equation:
$$Q(s(t), a(t)) = \mathbb{E}\left[\sum_{t=1}^{T} \gamma^t r(t) \,\middle|\, s(t), a(t)\right], \quad \gamma \in (0, 1)$$
Here, $\gamma \in (0,1)$ is the discount factor. If $\gamma$ is close to 0, the agent mainly considers the immediate rewards, whereas as $\gamma$ approaches 1, the agent gives greater weight to future rewards.
$$Q(s(t), a(t)) \leftarrow Q(s(t), a(t)) + \alpha \left[ r(t) + \gamma \max_{a} Q(s(t+1), a(t+1)) - Q(s(t), a(t)) \right]$$
This update is applied iteratively to estimate the optimal Q-value for all state-action pairs $(s(t), a(t))$, and the optimal policy $\pi^*$ is:
$$\pi^* = \arg\max_{\pi} Q(s(t), a(t))$$
Here, $\alpha$ is the learning rate, and a deep neural network (DNN) is leveraged as the function approximator. The DQN is constructed from two structures: one DNN is the online network, which fits the value function $Q$ with weights $\theta$, and the other DNN is the target network with weights $\theta'$, which is used to compute the target Q-value.
$$Q^* = r(s(t), a(t)) + \gamma \max Q(s(t+1), a(t+1); \theta')$$
In addition, during learning, the reward and state transitions obtained from interaction with the environment are stored in the experience replay buffer as tuples $(s(t), a(t), r(t), s(t+1))$. The parameters of the DNN are updated by iteratively minimizing the loss function:
$$\mathcal{L}(\theta) = \mathbb{E}\left[\left(Q^* - Q(s(t), a(t); \theta)\right)^2\right]$$
Algorithm 1 depicts the hierarchical structure proposed for handling resource priority and demand satisfaction in terms of energy efficiency at the MEC server when controlling the IIoT tasks. The DQG-PD algorithm is leveraged for priority- and demand-based resource allocation in the MEC server and employs the DQN to determine the optimal actions for each time slot. Initially, the algorithm sets up two Q-networks, an online network and a target network with random weights, and initializes a replay memory to store previous experiences. In each episode, an initial state proceeds through multiple time steps, deciding between exploration and exploitation. During exploration, an action is selected randomly, while in exploitation, the action that maximizes the Q-value (predicted future reward) of the online Q-network is chosen. The selected action is then executed, and the resulting new state and reward are stored in the replay memory. The algorithm periodically samples a mini-batch of experiences from this memory to update the online Q-network by performing gradient descent on the loss between predicted and target Q-values. The target Q-network is periodically synchronized with the online network to stabilize learning. This process is repeated for multiple episodes, allowing the MEC server to learn and refine its resource allocation strategies, ultimately returning the optimal action for each time slot based on the learned policy.
Algorithm 1: DQG-PD algorithm for priority/demand resources in the MEC server
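Since Algorithm 1 is presented only as a figure, the sketch below reproduces the training loop described in the preceding paragraph: an online and a target Q-network, epsilon-greedy exploration, a replay memory, mini-batch gradient descent on the loss of Equation (21) with the Bellman target of Equation (20), and periodic target synchronization. Network sizes, the epsilon value, and the environment interface (`env.reset`, `env.step`) are assumptions for illustration, not the authors' simulation code.

```python
import random
from collections import deque

import torch
import torch.nn as nn
import torch.nn.functional as F

class QNet(nn.Module):
    """Small MLP mapping the state vector of Eq. (14) to per-action Q-values."""
    def __init__(self, state_dim, n_actions, hidden=128):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, hidden), nn.ReLU(),
                                 nn.Linear(hidden, n_actions))

    def forward(self, x):
        return self.net(x)

def train_dqg_pd(env, state_dim, n_actions, episodes=1000, steps=100,
                 gamma=0.95, lr=1e-3, batch_size=32, buffer_size=3000,
                 eps=0.1, sync_every=50):
    online, target = QNet(state_dim, n_actions), QNet(state_dim, n_actions)
    target.load_state_dict(online.state_dict())
    opt = torch.optim.Adam(online.parameters(), lr=lr)
    memory = deque(maxlen=buffer_size)   # experience replay buffer

    for ep in range(episodes):
        s = torch.as_tensor(env.reset(), dtype=torch.float32)
        for step in range(steps):
            # Epsilon-greedy action selection on the online network.
            if random.random() < eps:
                a = random.randrange(n_actions)
            else:
                with torch.no_grad():
                    a = int(online(s).argmax())
            s_next, r, done = env.step(a)          # assumed environment API
            s_next = torch.as_tensor(s_next, dtype=torch.float32)
            memory.append((s, a, r, s_next, done))
            s = s_next

            if len(memory) >= batch_size:
                batch = random.sample(memory, batch_size)
                bs = torch.stack([b[0] for b in batch])
                ba = torch.tensor([b[1] for b in batch])
                br = torch.tensor([b[2] for b in batch], dtype=torch.float32)
                bn = torch.stack([b[3] for b in batch])
                bd = torch.tensor([b[4] for b in batch], dtype=torch.float32)
                # Bellman target from the target network (Equation (20)).
                with torch.no_grad():
                    q_target = br + gamma * (1 - bd) * target(bn).max(dim=1).values
                q_pred = online(bs).gather(1, ba.unsqueeze(1)).squeeze(1)
                loss = F.mse_loss(q_pred, q_target)   # Equation (21)
                opt.zero_grad(); loss.backward(); opt.step()

            # Periodically synchronize the target network with the online one.
            if step % sync_every == 0:
                target.load_state_dict(online.state_dict())
            if done:
                break
    return online
```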

5. Simulation and Discussions

In this study, we conducted the experiment by setting up a network topology integrating traffic between IIoT local hosts, access points, and the SDN/NFV controller environment [40], with the policy instructed by the decisions of our approach's agent. In the following sub-sections, we describe the parameter settings and the performance of the experiment.

5.1. Parameter Settings

In this scenario, the setting illustrates the three clusters of IIoT applications used in our experiments, as shown in Table 3. Specifically, we consider an assembly line in industrial manufacturing. The base station is assumed to be the O-RAN-enabled MEC server, which aligns vertical resource utilization to the requirements of the IIoT devices. MEC servers are powerful computers located close to the IIoT devices, handling the computation tasks they offload. The placement of four MEC servers strikes a balance between processing power and energy consumption: more servers could handle more tasks, but the infrastructure would consume more energy. IIoT devices in the industrial setup generate data and require computation; the numbers [50, 100, 150] indicate different scenarios with varying numbers of devices. The size of the generated tasks ranges from 5 MB to 30 MB. Bandwidth is the maximum rate of data transfer across the network; a bandwidth of 20 MHz ensures that data can move quickly between the IIoT devices and MEC servers, reducing the time and energy needed for communication. CPU frequency determines how fast the MEC servers can process data; a range of 5 GHz to 20 GHz means the servers can adjust their processing speed depending on the workload. A maximum link latency of 1.5 milliseconds ensures that the system responds quickly, which is critical in industrial settings. The operation is divided into 1000 time slots, which are small intervals of time used to schedule tasks and allocate resources. The replay memory buffer is where experience data are temporarily stored while being processed; a size of 3000 units means the system can hold a significant amount of data at once, which helps manage tasks efficiently and conserve energy. ReLU is the activation function used in the neural networks that make decisions such as task scheduling or resource allocation. A discount factor of 0.95 means the system values long-term efficiency slightly more than short-term gains, encouraging energy-efficient decisions over time. The learning rate of 0.001 ensures the system learns gradually and avoids making large, energy-inefficient changes based on new data. The batch size of 32 balances computational efficiency and energy use, allowing the system to learn effectively without overloading the servers or consuming excessive power.
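For reference, the simulation settings of Table 3 can be captured in a single configuration dictionary such as the one below. The key names are our own and only mirror the table; they do not reflect the authors' simulation code.

```python
# Illustrative configuration mirroring Table 3 (key names are assumptions).
SIMULATION_CONFIG = {
    "num_mec_servers": 4,
    "num_iiot_devices": [50, 100, 150],   # three device-count scenarios
    "task_size_mb": (5, 30),              # per-task size range
    "bandwidth_mhz": 20,                  # upper-bound channel bandwidth
    "mec_cpu_ghz": (5, 20),               # adjustable server CPU frequency
    "max_link_latency_ms": 1.5,
    "num_time_slots": 1000,
    "replay_buffer_size": 3000,
    "activation": "ReLU",
    "discount_factor": 0.95,
    "learning_rate": 0.001,
    "batch_size": 32,
}
```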

5.2. Performance Evaluation

With the above simulation infrastructure, we deploy our management scheme and the reference approaches in the controller to evaluate performance. DQG-PD leverages virtualization and the proposed DQN agent to guide the allocation process. The priority of industrial services and the demand are observed from the infrastructure plane. The target network discovers action batches for offloading decisions and resource (bandwidth and computing) virtualization properties through exploration. We compare against (1) a meta-heuristic balancing the resources and demand and (2) a single-agent DRL with a reward emphasizing services with higher priority (low PDB and PELR).
To evaluate the performance of our contribution domains, we set 3 slices of different priority levels with demanding conditions. The traffic rates are configured to be high congestion following four different settings to capture the bottleneck and constrained resource evaluation. For the performance metrics, we focus on (1) the reward (resources, costs, energy, and reliability) of the agent policies and (2) overall ratios based on demand and priority levels in requesting services.
We captured the reward scoring metrics of the DQG-PD algorithm under different learning rates (α) and discount factors (γ). The results highlight the trade-off between the speed of learning and the quality of the solution (optimality). The combination of α = 0.001 and γ = 0.95 is chosen as the optimal setting due to its demonstrated performance in balancing this trade-off. This means that the learning rate (α), which determines how quickly the algorithm updates its knowledge, is set to a slow pace (0.001) to ensure stability, while the discount factor (γ), which controls how much future rewards are considered, is relatively high (0.95), emphasizing the importance of long-term rewards. These settings guide the algorithm toward optimal decision-making over time despite the initial exploration delays. Over 1000 episodes, we compare the proposed DQG-PD algorithm and the two other schemes (MT-HRT-PD and SADRL-PD) in terms of total reward scores. During the early (exploration) phases, DQG-PD experiences higher latencies, which is expected while the algorithm is still exploring various possibilities. However, by the 500th episode, it converges to a near-optimal solution (>80 on the scoring metric), reducing the delay significantly and outperforming the reference schemes, exceeding MT-HRT-PD by 18.12 score points and SADRL-PD by 35.56.
In terms of total reward, which is accumulated from positive and negative scores as the algorithm converges during each episode, DQG-PD demonstrates superior performance. The reward reflects the algorithm's efficiency in multi-batch processing. By the final episode, DQG-PD reached a reward of 98.76, while MT-HRT-PD and SADRL-PD achieved lower scores of 68.13 and 43.62, respectively. This highlights the algorithm's overall better performance across different metrics, such as cost efficiency, resource usage, and latency reduction.
Each sub-reward is captured to compare how the different approaches perform on different objectives. Figure 3 depicts the sub-reward on resources, which accounts for overloaded queues and requests to the server; whether resources are overloaded or underutilized, negative rewards are accumulated. Regarding resource optimization, we illustrate how DQG-PD allocates virtual service nodes and forwarding graphs to physical nodes and links efficiently. Dynamic resource allocation improves overall resource utilization, reducing bottlenecks and ensuring that the demand of the created industrial service slices is met. DQG-PD achieved 26.61 positive scores in this category, outperforming MT-HRT-PD and SADRL-PD by 13.85 and 8.88 scores, respectively. The algorithm significantly enhances resource efficiency by adapting to real-time conditions and prioritizing mission-critical service demands.
In terms of costs in Figure 4, we balance between the number of servers deployed, service types, and consequences of packet loss in each slice. DQG-PD achieved positive scores of 19.27, which accounts for 19.51% of the overall reward, compared to other schemes, where MT-HRT-PD and SADRL-PD obtained 7.59 and 14.33 scores, respectively. The cost sub-reward is crucial as it focuses on reducing the financial burden of industrial service deployments, including minimizing resource provisioning costs, network bandwidth usage, and overall infrastructure expenditures. DQG-PD ensures efficient management of each prioritized slice, thereby lowering operational costs while maintaining the required QoS and QoE.
The energy sub-reward focuses on reducing the energy consumed while the service operates and while network traffic traverses the complete path. The DQG-PD algorithm optimizes resource placement and routing decisions to reduce end-to-end latency with prioritized demand considered in the energy metrics, improving overall service responsiveness and green computation. DQG-PD achieved a score of 28.98, which accounted for 29.34% of the total reward metrics, emphasizing its energy focus and better performance compared to MT-HRT-PD (9.13 higher) and SADRL-PD (19.85 higher), as shown in Figure 5.
Finally, the sub-reward on reliability metrics, as shown in Figure 6, emphasizes the algorithm’s robustness and fault tolerance. DQG-PD achieved a high-reliability score (22.68), significantly outperforming MT-HRT-PD (by 2.58) and SADRL-PD (by 4.66). This reliability is achieved by incorporating dynamic service configurations, which adapt to network failures, traffic surges, demanding congestion, or disruptions. DQG-PD ensures high availability by considering factors such as backup paths, failover mechanisms, and redundancy, thereby enhancing service reliability and minimizing downtime due to network issues.
To further evaluate the efficiency of controlling different demand factors and priority levels, we captured the completion ratios of combined service requests for different topology sizes and numbers of virtual chains (3, 6, 9, and 12). First, our simulation estimated the acceptance ratios when services were requested. After accepting the requests, we executed the services through the chain to detect any possible failure. If a failure was detected, we configured the restoration properties to revive the service execution; however, the criticality properties of the slices vary. If restoration succeeded, we recorded the result in the completion ratios. Over 1000 time slots, we configured the traffic rates to stress the control policies. DQG-PD achieved 99.87%, which is 10.78% and 19.54% higher than MT-HRT-PD and SADRL-PD, respectively, as shown in Figure 7. The elaboration of each phase can be discussed as follows:
  • The high acceptance ratio demonstrates the controller’s scalability and ability to effectively accommodate a larger number of service requests.
  • The restoration ratio measures how well the system recovers from service failures, ensuring uninterrupted service and high availability, particularly for high incoming task requests.
  • In total, we concluded the performance into completion ratios, which demonstrates its effectiveness in completing tasks even under heavy loads.

6. Conclusions

This paper presented priority/demand-based resource management to address optimal resource and energy consumption, with O-RAN enabled to serve a cognitive IIoT network. The proposed scheme leveraged MEC-incorporated O-RAN to handle resource efficiency and resource management. DQG-PD leverages virtualization and DQN agents to direct resource allocation to industrial services according to priority and demand. The DQG-PD framework provides sufficient system components for automated policy orchestration, including resource management, energy, cost, and service priorities. Within this framework, DQG-PD enhances resource management and energy efficiency for resource-constrained IIoT devices and scales to heterogeneous, real-time-critical systems. Our results demonstrate that the DQG-PD algorithm significantly enhances computation, resource utilization, and energy efficiency compared to existing reference schemes.
For future work, we plan to leverage federated learning to minimize the trade-off between communication overhead and joint asynchronous cross-protocol operation. Furthermore, addressing security and privacy concerns will be a key focus area in expanding the effectiveness of our approach.

Author Contributions

Conceptualization, S.R. and S.K. (Seungwoo Kang); methodology, S.R. and P.T.; software, P.T. and S.R.; validation, S.R., I.S. and G.C.; formal analysis, I.S. and S.R.; investigation, S.K. (Seokhoon Kim); resources, S.K. (Seokhoon Kim); data curation, P.T. and S.R.; writing—original draft preparation, S.R.; writing—review and editing, S.R., I.S., S.K. (Seungwoo Kang) and P.T.; visualization, S.R. and G.C.; supervision, S.K. (Seokhoon Kim); project administration, S.K. (Seokhoon Kim); funding acquisition, S.K. (Seokhoon Kim). All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by Institute of Information & Communications Technology Planning and Evaluation (IITP) grant funded by the Korean government (MSIT) (No. RS-2022-00167197, Development of Intelligent 5G/6G Infrastructure Technology for The Smart City); in part by the National Research Foundation of Korea (NRF), Ministry of Education, through the Basic Science Research Program under Grant NRF-2020R1I1A3066543; in part by BK21 FOUR (Fostering Outstanding Universities for Research) under Grant 5199990914048; and in part by the Soonchunhyang University Research Fund.

Data Availability Statement

Derived data supporting the findings of this study are available from the corresponding author on request.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Westergren, U.H.; Mähler, V.; Jadaan, T. Enabling Digital Transformation: Organizational Implementation of the Internet of Things. Inf. Manag. 2024, 61, 103996. [Google Scholar] [CrossRef]
  2. Niboucha, R.; Saad, S.B.; Ksentini, A.; Challal, Y. Zero-Touch Security Management for MMTC Network Slices: DDoS Attack Detection and Mitigation. IEEE Internet Things J. 2022, 10, 7800–7812. [Google Scholar] [CrossRef]
  3. Eloranta, V.; Turunen, T. Platforms in Service-Driven Manufacturing: Leveraging Complexity by Connecting, Sharing, and Integrating. Ind. Mark. Manag. 2016, 55, 178–186. [Google Scholar] [CrossRef]
  4. Leng, J.; Sha, W.; Wang, B.; Zheng, P.; Zhuang, C.; Liu, Q.; Wuest, T.; Mourtzis, D.; Wang, L. Industry 5.0: Prospect and Retrospect. J. Manuf. Syst. 2022, 65, 279–295. [Google Scholar] [CrossRef]
  5. Xu, X.; Lu, Y.; Vogel-Heuser, B.; Wang, L. Industry 4.0 and Industry 5.0—Inception, Conception and Perception. J. Manuf. Syst. 2021, 61, 530–535. [Google Scholar] [CrossRef]
  6. Chi, H.R.; Wu, C.K.; Huang, N.-F.; Tsang, K.-F.; Radwan, A. A Survey of Network Automation for Industrial Internet-of-Things toward Industry 5.0. IEEE Trans. Ind. Inform. 2023, 19, 2065–2077. [Google Scholar] [CrossRef]
  7. Mao, W.; Zhao, Z.; Chang, Z.; Min, G.; Gao, W. Energy-Efficient Industrial Internet of Things: Overview and Open Issues. IEEE Trans. Ind. Inform. 2021, 17, 7225–7237. [Google Scholar] [CrossRef]
  8. Mao, Y.; You, C.; Zhang, J.; Huang, K.; Letaief, K.B. A Survey on Mobile Edge Computing: The Communication Perspective. IEEE Commun. Surv. Tutor. 2017, 19, 2322–2358. [Google Scholar] [CrossRef]
  9. Polese, M.; Bonati, L.; D’Oro, S.; Basagni, S.; Melodia, T. Understanding O-RAN: Architecture, Interfaces, Algorithms, Security, and Research Challenges. IEEE Commun. Surv. Tutor. 2023, 25, 1376–1411. [Google Scholar] [CrossRef]
  10. Liang, X.; Wang, Q.; Al-Tahmeesschi, A.; Chetty, S.B.; Grace, D.; Ahmadi, H. Energy Consumption of Machine Learning Enhanced Open RAN: A Comprehensive Review. IEEE Access 2024, 12, 81889–81910. [Google Scholar] [CrossRef]
  11. Chih-Lin, I.; Kuklinski, S.; Chen, T.; Ladid, L. A Perspective of O-RAN Integration with MEC, SON, and Network Slicing in the 5G Era. IEEE Netw. 2020, 34, 3–4. [Google Scholar] [CrossRef]
  12. Lu, J.; Feng, W.; Pu, D. Resource Allocation and Offloading Decisions of D2D Collaborative UAV-Assisted MEC Systems. KSII Trans. Internet Inf. Syst. 2024, 18, 211–232. [Google Scholar]
  13. Ojaghi, B.; Adelantado, F.; Verikoukis, C. SO-RAN: Dynamic RAN Slicing via Joint Functional Splitting and MEC Placement. IEEE Trans. Veh. Technol. 2023, 72, 1925–1939. [Google Scholar] [CrossRef]
  14. Ateya, A.A.; Algarni, A.D.; Hamdi, M.; Koucheryavy, A.; Soliman, N.F. Enabling Heterogeneous IoT Networks over 5G Networks with Ultra-Dense Deployment—Using MEC/SDN. Electronics 2021, 10, 910. [Google Scholar] [CrossRef]
  15. Ros, S.; Tam, P.; Song, I.; Kang, S.; Kim, S. Handling Efficient VNF Placement with Graph-Based Reinforcement Learning for SFC Fault Tolerance. Electronics 2024, 13, 2552. [Google Scholar] [CrossRef]
  16. Shi, X.; Zhang, Z.; Cui, Z.; Cai, X. Many-Objective Joint Optimization for Dependency-Aware Task Offloading and Service Caching in Mobile Edge Computing. KSII Trans. Internet Inf. Syst. 2024, 18, 1238–1259. [Google Scholar]
  17. Yi, D.; Zhou, X.; Wen, Y.; Tan, R. Toward Efficient Compute-Intensive Job Allocation for Green Data Centers: A Deep Reinforcement Learning Approach. In Proceedings of the 2019 IEEE 39th International Conference on Distributed Computing Systems (ICDCS), Dallas, TX, USA, 7–10 July 2019. [Google Scholar] [CrossRef]
  18. Popli, S.; Jha, R.K.; Jain, S. A Survey on Energy Efficient Narrowband Internet of Things (NBIoT): Architecture, Application and Challenges. IEEE Access 2019, 7, 16739–16776. [Google Scholar] [CrossRef]
  19. Kim, D.-Y.; Park, J.; Kim, S. Data Transmission in Backscatter IoT Networks for Smart City Applications. J. Sens. 2022, 2022, e4973782. [Google Scholar] [CrossRef]
  20. Ren, Y.; Guo, A.; Song, C. Multi-Slice Joint Task Offloading and Resource Allocation Scheme for Massive MIMO Enabled Network. KSII Trans. Internet Inf. Syst. 2023, 17, 794–815. [Google Scholar]
  21. Ros, S.; Tam, P.; Song, I.; Kang, S.; Kim, S. A Survey on State-of-The-Art Experimental Simulations for Privacy-Preserving Federated Learning in Intelligent Networking. Electron. Res. Arch. 2024, 32, 1333–1364. [Google Scholar] [CrossRef]
  22. Nagappan, K.; Rajendran, S.; Alotaibi, Y. Trust Aware Multi-Objective Metaheuristic Optimization Based Secure Route Planning Technique for Cluster Based IIoT Environment. IEEE Access 2022, 10, 112686–112694. [Google Scholar] [CrossRef]
  23. Kang, S.; Ros, S.; Song, I.; Tam, P.; Math, S.; Kim, S. Real-Time Prediction Algorithm for Intelligent Edge Networks with Federated Learning-Based Modeling. Comput. Mater. Contin. 2023, 77, 1967–1983. [Google Scholar] [CrossRef]
  24. Mao, M.; Lee, A.; Hong, M. Deep Learning Innovations in Video Classification: A Survey on Techniques and Dataset Evaluations. Electronics 2024, 13, 2732. [Google Scholar] [CrossRef]
  25. Patel, D.; Won, M. Experimental Study on Low Power Wide Area Networks (LPWAN) for Mobile Internet of Things. In Proceedings of the 2017 IEEE 85th Vehicular Technology Conference (VTC Spring), Sydney, NSW, Australia, 4–7 June 2017. [Google Scholar] [CrossRef]
  26. Qin, Z.; Li, F.Y.; Li, G.Y.; McCann, J.A.; Ni, Q. Low-Power Wide-Area Networks for Sustainable IoT. IEEE Wirel. Commun. 2019, 26, 140–145. [Google Scholar] [CrossRef]
  27. Zhou, F.; Feng, L.; Kadoch, M.; Yu, P.; Li, W.; Wang, Z. Multiagent RL Aided Task Offloading and Resource Management in Wi-Fi 6 and 5G Coexisting Industrial Wireless Environment. IEEE Trans. Ind. Inform. 2022, 18, 2923–2933. [Google Scholar] [CrossRef]
  28. Tam, P.; Ros, S.; Song, I.; Kim, S. QoS-Driven Slicing Management for Vehicular Communications. Electronics 2024, 13, 314. [Google Scholar] [CrossRef]
  29. Hazra, A.; Adhikari, M.; Amgoth, T.; Srirama, S.N. Intelligent Service Deployment Policy for Next-Generation Industrial Edge Networks. IEEE Trans. Netw. Sci. Eng. 2021, 9, 3057–3066. [Google Scholar] [CrossRef]
  30. Zhang, J.; Wu, J.; Chen, Z.; Chen, Z.; Gan, J.; He, J.; Wang, B. Spectrum- and Energy- Efficiency Analysis under Sensing Delay Constraint for Cognitive Unmanned Aerial Vehicle Networks. KSII Trans. Internet Inf. Syst. 2022, 16, 1392–1413. [Google Scholar]
  31. Ernest, H.; Madhukumar, A.S. Computation Offloading in MEC-Enabled IoV Networks: Average Energy Efficiency Analysis and Learning-Based Maximization. IEEE Trans. Mob. Comput. 2023, 23, 6074–6087. [Google Scholar] [CrossRef]
  32. Lim, H.; Hwang, T. Energy-Efficient Beamforming and Resource Allocation for Multi-Antenna MEC Systems. IEEE Access 2022, 10, 18008–18022. [Google Scholar] [CrossRef]
  33. Sun, M.; Xu, X.; Huang, Y.; Wu, Q.; Tao, X.; Zhang, P. Resource Management for Computation Offloading in D2D-Aided Wireless Powered Mobile-Edge Computing Networks. IEEE Internet Things J. 2021, 8, 8005–8020. [Google Scholar] [CrossRef]
  34. Tong, Z.; Cai, J.; Mei, J.; Li, K.; Li, K. Dynamic Energy-Saving Offloading Strategy Guided by Lyapunov Optimization for IoT Devices. IEEE Internet Things J. 2022, 9, 19903–19915. [Google Scholar] [CrossRef]
  35. Guo, F.; Zhang, H.; Li, X.; Ji, H. Victor Joint Optimization of Caching and Association in Energy-Harvesting-Powered Small-Cell Networks. IEEE Trans. Veh. Technol. 2018, 67, 6469–6480. [Google Scholar] [CrossRef]
  36. Qazzaz, M.M.; Kułacz, Ł.; Kliks, A.; Zaidi, S.A.; Dryjanski, M.; McLernon, D. Machine Learning-Based XApp for Dynamic Resource Allocation in O-RAN Networks. arXiv 2024, arXiv:2401.07643. [Google Scholar] [CrossRef]
  37. Mungari, F. An RL Approach for Radio Resource Management in the O-RAN Architecture. In Proceedings of the 2021 18th Annual IEEE International Conference on Sensing, Communication, and Networking (SECON), Rome, Italy, 6–9 July 2021. [Google Scholar] [CrossRef]
  38. Yu, P.; Yang, M.; Xiong, A.; Ding, Y.; Li, W.; Qiu, X.; Meng, L.; Kadoch, M.; Cheriet, M. Intelligent-Driven Green Resource Allocation for Industrial Internet of Things in 5G Heterogeneous Networks. IEEE Trans. Ind. Inform. 2022, 18, 520–530. [Google Scholar] [CrossRef]
  39. 3GPP TS 23.203 V17.2.0; Technical Specification Group Services and System Aspects. Policy and Charging Control Architecture; ETSI: Valbonne, France, 2021.
  40. Tam, P.; Math, S.; Kim, S. Optimized Multi-Service Tasks Offloading for Federated Learning in Edge Virtualization. IEEE Trans. Netw. Sci. Eng. 2022, 9, 4363–4378. [Google Scholar] [CrossRef]
Figure 1. An intelligent SDN/NFV controller-based MEC server is deployed in the O-RAN.
Figure 2. DQN-based MEC for priority/demanding resource utilization.
Figure 3. Sub-reward on resource.
Figure 4. Sub-reward on cost.
Figure 5. Sub-reward on energy.
Figure 6. Sub-reward on reliability.
Figure 7. Average service completion ratios, showing the service completion percentage over the different time slots.
Table 1. QCI specification is defined to set different types of industry application use cases.

| QCI-Index | Resource Type | Priority Level | PDB | PELR | Industry Application Use Case |
|---|---|---|---|---|---|
| QCI-3 (Process Automation and Monitoring) | GBR | 3.0 | 50 ms | $10^{-3}$ | Robotic monitoring |
| QCI-70 (Mission-Critical Data) | Non-GBR | 5.5 | 200 ms | $10^{-6}$ | Safety systems |
| QCI-82 (Discrete Automation) | Delay-critical GBR | 1.9 | 10 ms | $10^{-4}$ | Automated quality control |
Table 2. The notation of the network system model.

| Symbol | Description |
|---|---|
| $I$ | Set of IIoT tasks, $I = \{1, 2, 3, \dots, i\}$, $i \in I$ |
| $N$ | Set of IIoT devices, $N = \{1, 2, 3, \dots, n\}$, $n \in N$ |
| $T$ | Set of time slots, $T = \{1, 2, 3, \dots, t\}$, $t \in T$ |
| $M$ | Set of MEC servers, $M = \{1, 2, 3, \dots, m\}$, $m \in M$ |
| $S_i$ | Offloading decision from IIoT device to MEC (1 if offloaded, 0 otherwise) |
| $S_n$ | Data size of computation task $n$ |
| $D_{n,m}$ | Data transmission rate from IIoT device $n$ to MEC server $m$ |
| $B_{n,m}$ | Transmission power of device $n$ to MEC server $m$ |
| $H_{n,m}$ | Channel bandwidth |
| $\psi$ | Background interference power consumption |
| $P_v$ | Processing power required by VNF $v$ |
| $U_v$ | Utilization of VNF $v$ |
| $B_m$ | Total bandwidth of MEC server $m$ |
| $L_{max}$ | Latency satisfaction bound |
| $C_{max}$ | Upper bound of the total resource capacity of each MEC server |
| $X_{mec}^{n,m}$ | Execution of task $n$ at MEC server $m$ |
| $L_n^i$ | Time accepted |
Table 3. Simulation parameters.

| Parameters | Value |
|---|---|
| Number of MEC servers | 4 |
| Number of IIoT devices | [50, 100, 150] |
| Task size | [5, 30] MB |
| Upper-bound bandwidth | 20 MHz |
| CPU frequency of MEC server | [5, 20] GHz |
| Maximum link latency | 1.5 ms |
| Number of time slots | 1000 |
| Replay memory buffer size | 3000 |
| Activation function | ReLU |
| Discount factor on reward | 0.95 |
| Learning rate | 0.001 |
| Batch size | 32 |
