Article

A Hierarchical Fractal Space NSGA-II-Based Cloud–Fog Collaborative Optimization Framework for Latency and Energy-Aware Task Offloading in Smart Manufacturing

by Zhiwen Lin 1,2,3, Chuanhai Chen 1,2,3, Jianzhou Chen 1,2,3 and Zhifeng Liu 1,2,3,4,*

1 Key Laboratory of CNC Equipment Reliability, Ministry of Education, Jilin University, Changchun 130025, China
2 Jilin Provincial Key Laboratory of Advanced Manufacturing and Intelligent Technology for High-End CNC Equipment, Jilin University, Changchun 130025, China
3 School of Mechanical and Aerospace Engineering, Jilin University, Changchun 130025, China
4 Beijing Key Laboratory of Design and Intelligent Machining Technology for High Precision Machine Tools, Beijing University of Technology, Beijing 100124, China
* Author to whom correspondence should be addressed.
Mathematics 2025, 13(22), 3691; https://doi.org/10.3390/math13223691
Submission received: 22 October 2025 / Revised: 5 November 2025 / Accepted: 14 November 2025 / Published: 18 November 2025

Abstract

The growth of intelligent manufacturing systems has led to a wealth of computation-intensive tasks with complex dependencies. These tasks require an efficient offloading architecture that balances responsiveness and energy efficiency across distributed computing resources. Existing task offloading approaches have fundamental limitations when simultaneously optimizing multiple conflicting objectives while accommodating hierarchical computing architectures and heterogeneous resource capabilities. To address these challenges, this paper presents a cloud–fog hierarchical collaborative computing (CFHCC) framework that features fog cluster mechanisms. These methods enable coordinated, multi-node parallel processing while maintaining data sensitivity constraints. The optimization of task distribution across this three-tier architecture is formulated as a multi-objective problem, minimizing both system latency and energy consumption. To solve this problem, a fractal-based multi-objective optimization algorithm is proposed to efficiently explore Pareto-optimal task allocation strategies by employing recursive space partitioning aligned with the hierarchical computing structure. Simulation experiments across varying task scales demonstrate that the proposed method achieves a 20.28% latency reduction and 3.03% energy savings compared to typical and advanced methods for large-scale task scenarios, while also exhibiting superior solution consistency and convergence. A case study on a digital twin manufacturing system validated its practical effectiveness, with CFHCC outperforming traditional cloud–edge collaborative computing by 12.02% in latency and 11.55% in energy consumption, confirming its suitability for diverse intelligent manufacturing applications.

1. Introduction

Smart manufacturing integrates cyber–physical systems, the IoT, and artificial intelligence to achieve unprecedented levels of automation and intelligence in industrial production [1]. These manufacturing environments deploy compute-intensive applications such as quality inspection, predictive maintenance, and digital twin modeling, which generate massive data streams requiring immediate processing [2,3]. However, resource-constrained local manufacturing equipment cannot meet these exponentially growing computational demands while maintaining stringent latency requirements [4], necessitating distributed computing architectures that effectively leverage external resources while preserving time-sensitive industrial operations.
Edge computing addresses latency challenges by bringing computational resources closer to data sources, enabling anomaly detection [5], process control [6], and robot coordination [7]. These deployments commonly employ containerized microservices (e.g., Docker and Kubernetes) for flexible task execution and message queuing protocols (e.g., MQTT and OPC UA) for data exchange between shopfloor devices and edge gateways. However, resource-constrained edge infrastructure encounters performance bottlenecks when multiple production lines simultaneously offload interdependent tasks. This leads to prolonged queuing delays and elevated energy consumption due to redundant data transmissions and imbalanced workload distribution across computing layers [8]. The traditional edge–cloud architecture faces two critical challenges: escalating latency bottlenecks caused by insufficient processing capacity at edge nodes during peak production periods, and significant energy inefficiency due to suboptimal resource allocation across isolated computing tiers [9,10]. These architectural limitations become critical as interconnected production systems generate complex computational workloads, which demand sophisticated coordination and scalable processing capabilities exceeding the capacity of isolated edge nodes.
To address computational coordination challenges in smart manufacturing, Li et al. [11] proposed an edge–cloud collaborative framework enabling cloud-trained models to be deployed at the edge for real-time scheduling decisions. By contrast, Nie et al. [12] extended this approach from single-edge to multi-edge scenarios. Hong et al. [13] further introduced fog computing as an intermediate layer, constructing a cloud–fog–edge hierarchical architecture for distributed synchronized manufacturing. However, these architectures employ relatively rigid task allocation strategies where simple tasks are assigned to edge nodes while complex tasks are offloaded to the cloud. This method lacks the flexibility to dynamically coordinate resources across multiple computing tiers based on real-time production demands. For distributed edge collaboration modeling, Li et al. [14] designed a two-phase greedy algorithm for edge-layer resource scheduling with latency constraints, and Cai et al. [15] proposed a deep reinforcement learning-based approach for multitask hybrid offloading. Current mathematical models primarily focus on distributed edge collaboration, with limited consideration given to comprehensive multi-tier coordination mechanisms. This combination of inflexible architectures and incomplete modeling frameworks leads to imbalanced latency–energy trade-offs in practical deployments. Existing studies predominantly optimize latency while overlooking computational energy consumption [16,17], which increasingly contributes to the overall energy footprint of smart factories. These research gaps necessitate a more adaptive hierarchical framework with comprehensive multi-objective optimization capabilities.
To address these challenges, this study develops a hierarchical cloud–fog collaborative optimization framework that simultaneously minimizes task completion latency and energy consumption for dependent task offloading in smart manufacturing. Our research focuses on three key objectives. First, we design a multi-tier fog computing architecture that coordinates resource allocation across hierarchical computing layers. Second, we model task dependencies to ensure workflow integrity during cross-tier offloading. Third, we develop multi-objective optimization algorithms that efficiently explore the latency–energy trade-off space by leveraging the hierarchical structure. The framework is evaluated using two key performance metrics: (1) task completion latency, the end-to-end time from task submission to completion across all computing tiers, and (2) energy consumption, the total energy consumed by all participating computing nodes during task execution.
The remainder of this paper is organized as follows: Section 2 reviews related works; Section 3 presents the cloud–fog architecture; Section 4 formulates the mathematical model for task offloading; Section 5 describes the offloading algorithm; Section 6 presents experimental evaluations; and Section 7 concludes the paper and discusses future directions.

2. Related Work

This section reviews three key areas of existing research: task offloading architectures, data dependency and sensitivity modeling, and multi-objective optimization algorithms for distributed computing. We analyze representative studies, compare their methodologies, and identify unresolved technical challenges, as summarized in Table 1.
At the architectural level, studies have predominantly focused on device–edge or edge–cloud frameworks [14,18], while multi-tier architectures with intermediate coordination layers remain underexplored in how they handle complex manufacturing workflows that span multiple production stages. From a modeling perspective, research has mainly concentrated on independent task formulations [19], whereas the integration of data dependencies and sensitivity constraints inherent in manufacturing processes represents an evolving area. In practical manufacturing scenarios, proprietary process parameters and real-time sensor data contain critical intellectual property and competitive advantages, making data sensitivity a paramount concern that requires strict control over data placement and movement across computing tiers [30]. Regarding optimization methodologies, there has been limited investigation into how optimization algorithms can effectively exploit the structural properties of multi-tier architectures, particularly when computing tiers expand or task characteristics become more complex. These observations indicate opportunities in integrated frameworks that combine multi-tier coordination, dependency-aware task modeling with sensitivity considerations, and optimization algorithms tailored to hierarchical computing systems.
Fog computing extends the edge computing paradigm by introducing an intermediate layer of distributed computational resources between edge devices and cloud servers [31]. Recent research has demonstrated the effectiveness of cloud–fog collaboration in addressing latency and resource constraints for IIoT applications. Studies have investigated intelligent decision-making mechanisms to dynamically determine offloading destinations between the fog and cloud based on task characteristics [32]; cost-performance trade-offs through DAG-based scheduling strategies that balance application execution time and resource expenses [33]; and hierarchical resource coordination using SDN-based architectures to handle saturated fog domains [34]. Researchers have formulated mathematical optimization models to minimize transmission delays through joint fog-to-fog and fog-to-cloud offloading [35] and employed evolutionary algorithms to select optimal computing devices for real-time tasks [36]. However, these efforts predominantly address either single-objective optimization focusing on isolated performance metrics or employ simple two-tier architectures that lack intermediate coordination mechanisms. The challenge lies in developing integrated frameworks that combine multi-tier hierarchical coordination with dependency-aware task modeling and multi-objective optimization strategies tailored to manufacturing constraints [37].
Task completion latency and energy consumption constitute critical performance metrics in smart manufacturing, necessitating simultaneous optimization of both objectives [20,27]. Multi-objective evolutionary algorithms have been applied to address this challenge across various distributed computing scenarios, including NSGA-II-based approaches for latency–energy trade-offs in edge computing [38], decomposition-based methods for resource allocation in cloud–edge environments [39], and particle swarm optimization variants for energy-efficient task scheduling [40]. However, these algorithms typically treat the solution space uniformly without exploiting the structural properties of hierarchical computing architectures. Fractal structures, characterized by self-similarity and recursive organization patterns, exhibit natural alignment with multi-tier computing hierarchies where computational tasks at different scales mirror the nested device-edge–fog–cloud structure [41]. This structural correspondence suggests that fractal-based approaches could enhance optimization efficiency by decomposing the solution space according to the inherent hierarchy of the computing architecture. However, their application to task offloading in manufacturing environments remains unexplored.
The main contributions of this paper are as follows:
(1) A cloud–fog hierarchical collaborative computing framework is proposed that organizes fog nodes into master–slave clusters to enhance horizontal scalability and resource coordination across the device, fog, and cloud layers.
(2) A comprehensive mathematical model is formulated that integrates directed acyclic graphs to represent task dependencies, incorporates dynamic factors such as real-time node load and queue depth, and enforces data-sensitivity constraints to ensure local processing of critical manufacturing data. The model features a dual objective of minimizing both end-to-end latency and overall energy consumption.
(3) The Fractal Space-Aware NSGA-II algorithm is developed by integrating fractal geometry principles with evolutionary computation. This algorithm employs fractal Brownian motion for diverse population initialization and recursive space partitioning for multi-scale searches. These methods utilize fractal self-similarity to efficiently navigate the hierarchical offloading decision space.

3. Cloud–Fog Hierarchical Collaborative Computing Architecture (CFHCC) in Manufacturing

To address the challenges of latency and energy efficiency in large-scale intelligent manufacturing, the computational resources are categorized into cloud servers, fog nodes, and fog clusters. A cloud–fog hierarchical collaborative computing framework (CFHCC) is designed for smart manufacturing environments. Table 2 summarizes all symbols involved in the architecture design and modeling section.

3.1. Hierarchical Architecture of CFHCC

The CFHCC is designed as a four-layer structure, which builds upon the established paradigm of edge–cloud computing architectures [2], while introducing novel coordination mechanisms tailored to manufacturing environments. Each layer is responsible for specific functions and is interconnected via dedicated communication protocols, enabling efficient and adaptive task offloading and resource coordination. The four hierarchical layers of CFHCC, as shown in Figure 1, are described below:
(1) Execution Terminal Layer (ETL):
This layer comprises the front-line manufacturing equipment, such as CNC machines, industrial robots, and automated guided vehicles (AGVs), each embedded with its own controllers and sensors. The ETL is responsible for real-time operational data acquisition, preliminary signal processing, and the direct execution of control commands [42]. By preprocessing data and filtering irrelevant information at the source, ETL reduces the volume of data transmitted to upper layers and minimizes communication latency.
(2) Fog Node Computing Layer (FNCL):
The FNCL consists of distributed fog nodes deployed near the shop floor, such as industrial gateways, embedded edge servers, and smart routers [43]. Each fog node independently processes time-sensitive and location-dependent tasks. The FNCL supports task offloading from the ETL, local decision-making, protocol translation, and acts as a bridge between terminal devices and higher-level computing resources.
(3) Fog Cluster Computing Layer (FCCL):
This layer aggregates multiple fog nodes into dynamically formed clusters, which are managed via MQTT protocols. The FCCL enables parallel and distributed processing of computationally intensive or collaborative manufacturing tasks that exceed the capabilities of a single fog node. Tasks can be partitioned and scheduled across cluster members based on workload balancing, resource availability, and network topology. The FCCL provides a middle ground between localized fog computation and remote cloud services.
(4) Cloud Computing Layer (CCL):
The CCL is composed of remote high-performance servers and data centers. It provides centralized resources for large-scale optimization, global data analytics, historical data storage, and long-term planning. The cloud layer is responsible for handling tasks that require substantial computing power or global coordination, such as cross-workshop scheduling and the training of complex artificial intelligence models.
To ensure robust collaboration among these layers, CFHCC integrates multiple communication technologies: a wired Ethernet for high-speed local connections, industrial wireless networks (e.g., Wi-Fi 6 or 5G) for flexible device access, and LAN/VPN tunnels for secure inter-layer data exchange. This multi-layered architecture enables fine-grained, context-aware task allocation and dynamic resource management, effectively matching the computational requirements of diverse manufacturing scenarios with the most suitable computational resources.

3.2. Fog Computing Deployment Framework

Building upon the hierarchical structure of CFHCC, the fog computing deployment framework is designed to extend beyond traditional edge computing models by enabling collaborative processing among multiple fog nodes. The proposed framework organizes fog nodes into a coordinated network that supports the distribution of task execution and resource sharing. As illustrated in Figure 2, the FCCL comprises a fog management node, a local data center, a network switch, and several distributed fog nodes. Both the fog management node and subordinate fog nodes are equipped with computational and storage resources, as well as the ability to handle dynamic task loads. The fog management node, often known as the main fog, is responsible for global coordination within the fog layer [34]. It orchestrates resource allocation, manages task scheduling, and monitors the operational status of subordinate fog nodes (sub-fogs). This hierarchical control mechanism ensures efficient load balancing and fault tolerance within the fog cluster. High-bandwidth communication links connect terminal devices to the fog nodes, establishing a robust, low-latency network between the CFHCC and ETL. This network enables rapid data exchange and close coordination between edge devices and fog nodes. The deployment of such an interconnected fog network enables the real-time decomposition and parallel processing of computational tasks, allowing distributed edge resources to be efficiently utilized within intelligent manufacturing workshops.

4. Mathematical Model for Computational Task Offloading in CFHCC

4.1. Mathematical Problem Statement

The computational tasks of smart manufacturing are no longer executed in a simple single-threaded mode but exhibit multiplicity, concurrency, and compositional structure. Figure 3 illustrates a representative computational task (CT) from industrial condition monitoring and quality control applications [44]. As shown in the CT, the task contains seven sub-tasks: $X_2$ and $X_3$ are executed only after task $X_1$ is completed, so $X_1$ occupies the highest execution level, denoted $L_1$. Tasks $X_4$, $X_5$, and $X_6$ belong to the third execution level and are not executed until $X_2$ and $X_3$ are completed; similarly, $X_7$ is executed after all the preceding tasks are completed.
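The execution-level layering described above can be reproduced with a short topological pass over the task DAG. As a minimal sketch, the edge set below is an assumption inferred from the textual description of Figure 3 (the paper does not enumerate the exact edges):

```python
# Hypothetical dependency graph for the 7-sub-task example: X1 precedes X2/X3,
# X4..X6 wait for both X2 and X3, and X7 waits for X4..X6.
from collections import defaultdict

def execution_levels(edges, tasks):
    """Assign each task the earliest level consistent with all its predecessors."""
    preds = defaultdict(set)
    for u, v in edges:
        preds[v].add(u)
    level = {}
    remaining = set(tasks)
    while remaining:
        # a task is ready once every predecessor already has a level
        ready = {t for t in remaining if preds[t] <= level.keys()}
        for t in ready:
            level[t] = 1 + max((level[p] for p in preds[t]), default=0)
        remaining -= ready
    return level

edges = [("X1", "X2"), ("X1", "X3")]
edges += [(p, c) for p in ("X2", "X3") for c in ("X4", "X5", "X6")]
edges += [(p, "X7") for p in ("X4", "X5", "X6")]
levels = execution_levels(edges, [f"X{i}" for i in range(1, 8)])
```

This reproduces the layering stated in the text: $X_1$ at level 1, $X_2$/$X_3$ at level 2, $X_4$–$X_6$ at level 3, and $X_7$ at level 4.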
Within the CFHCC, the CCL is responsible for decomposing complex computational tasks into multiple sub-tasks with specific execution dependencies. These sub-tasks are assigned to appropriate computing devices across different layers based on both task characteristics and the current status of available computational resources. Upon completion, all intermediate results are returned to the CCL for final aggregation and the global delivery of results.
Given the multi-layer resource structure in CFHCC, task offloading and resource allocation can be categorized into three primary execution modes:
(1) Cloud Execution: The sub-task is executed on the CCL, leveraging centralized high-performance computing resources.
(2) Fog Node Execution: The sub-task is executed independently on a single fog node within the FNCL, utilizing its local processing capabilities.
(3) Fog Cluster Execution: The sub-task is collaboratively executed by a dynamically formed set of fog nodes in the FCCL, enabling parallel or distributed processing to meet higher computational or reliability requirements.
For efficient task offloading and optimal resource allocation across these modes, the following four technical factors must be considered:
(1) The processing power and available resources of each computing device (cloud server, fog node, or fog cluster) at the time of task assignment.
(2) The data sensitivity constraints of the computational tasks, denoted as $Corr_{IS}(X_i)$, which reflect whether the sub-task contains sensitive sensor or process data.
(3) The expected execution time for sub-tasks operating at the same hierarchical level, including queuing and processing delays.
(4) The time required for data transfer between layers (e.g., ETL to FNCL, FNCL to FCCL, FCCL to CCL), which is affected by network bandwidth, topology, and data volume.
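The four factors above can be captured in a minimal data model; the class and field names below (`SubTask`, `Node`, `cycles`, `tier`, etc.) are illustrative choices for this sketch, not identifiers from the paper:

```python
# Illustrative data model for offloading decisions: each sub-task carries the
# quantities used by the latency/energy models, each node its capability tier.
from dataclasses import dataclass

@dataclass
class SubTask:
    name: str
    cycles: float      # instruction count, CI(X_i)
    data_in: float     # source data size D_rough(X_i), bits
    data_out: float    # result data size D_result(X_i), bits
    sensitive: bool    # Corr_IS(X_i) = 1 forbids cloud execution
    level: int         # execution level in the task DAG

@dataclass
class Node:
    name: str
    speed: float       # processing speed V_process, instructions/s
    tier: str          # "cloud", "fog", or "fog_cluster"
```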

4.2. Task Offloading Model Based on the Cloud Server

If $X_i$ is executed in the cloud, $Corr_{IS}(X_i)$ has a significant impact on the time and energy costs incurred during task execution, so cloud execution must be considered by case. If $Corr_{IS}(X_i) = 0$, the execution process does not need to obtain terminal equipment data in real time, and the execution time of $X_i$ comprises only the queuing time and the processing time. Otherwise, the execution time additionally includes the time spent communicating with the industrial site. This can be expressed as follows:
$$T_{task}^{cloud}(X_i) = \begin{cases} T_{com}(cloud, terminal) + T_{process}^{cloud}(X_i) + T_{que}, & Corr_{IS}(X_i) = 1 \\ T_{process}^{cloud}(X_i) + T_{que}, & Corr_{IS}(X_i) = 0 \end{cases}$$
$$T_{com}(cloud, terminal) = \frac{B(X_i)}{bw_{ce}}$$
$$T_{process}^{cloud}(X_i) = \frac{CI(X_i)}{V_{process}^{cloud}}$$
where $T_{com}(cloud, terminal)$ is the time the cloud server takes to communicate with the terminal equipment; $T_{process}^{cloud}(X_i)$ is the time the cloud server takes to process $X_i$; $B(X_i)$ is the size in bits of the data stream that the industrial site must provide for executing $X_i$; $bw_{ce}$ is the average network bandwidth to the industrial site; $CI(X_i)$ is the number of instructions processed when executing $X_i$; and $V_{process}^{cloud}$ is the data processing speed of the cloud server.
$T_{que}$ is the time $X_i$ waits for its predecessor tasks $X_{pre}$ to finish, i.e., the time until the latest-finishing task in each preceding execution level completes, which is calculated as follows:
$$T_{que}(X_i) = \sum_{l=1}^{L_{pre}} \max_{X_j \in X_{pre}} T_{task}(X_j)$$
The total execution time on the CCL is therefore determined as follows:
$$T_{task}^{cloud}(X_i) = \begin{cases} \dfrac{B(X_i)}{bw_{ce}} + \dfrac{CI(X_i)}{V_{process}^{cloud}} + \sum_{l=1}^{L_{pre}} \max_{X_j \in X_{pre}} T_{task}(X_j), & Corr_{IS}(X_i) = 1 \\ \dfrac{CI(X_i)}{V_{process}^{cloud}} + \sum_{l=1}^{L_{pre}} \max_{X_j \in X_{pre}} T_{task}(X_j), & Corr_{IS}(X_i) = 0 \end{cases}$$
The energy cost of executing $X_i$ on the CCL is as follows:
$$E_{task}^{cloud}(X_i) = \begin{cases} E_{com}(cloud, terminal) + E_{process}^{cloud}(X_i), & Corr_{IS}(X_i) = 1 \\ E_{process}^{cloud}(X_i), & Corr_{IS}(X_i) = 0 \end{cases}$$
When industrial site information does not need to be obtained in real time, the execution energy cost of $X_i$ arises only from processing the task; otherwise, it consists of both the processing energy cost and the cost of communicating with the industrial site.
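As a sketch, the piecewise cloud-time model can be coded directly; the parameter names mirror the paper's symbols ($CI$, $V_{process}^{cloud}$, $B$, $bw_{ce}$), while the numeric values in the example are arbitrary assumptions:

```python
# Cloud execution time per Section 4.2: processing + queuing, with the
# cloud<->terminal transfer added only when Corr_IS(X_i) = 1.
def cloud_latency(ci, v_cloud, t_queue, corr_is, b_bits=0.0, bw_ce=1.0):
    t = ci / v_cloud + t_queue          # T_process^cloud + T_que
    if corr_is:
        t += b_bits / bw_ce             # T_com(cloud, terminal) = B / bw_ce
    return t

# Example: 1e9 instructions at 1e9 inst/s, 2 s of queuing, no site data needed.
t_no_site = cloud_latency(ci=1e9, v_cloud=1e9, t_queue=2.0, corr_is=False)
```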

4.3. Task Offloading Model Based on a Single Fog Node

Tasks that are not highly computationally intensive but must remain "tightly linked" to the industrial site can be assigned to a single fog node for execution. Since the edge-side devices are interconnected via fiber or Ethernet, their mutual communication time is negligible. The execution time can therefore be expressed as follows:
$$T_{task}^{fog}(X_i) = T_{send}^{cf}(X_i) + T_{que}(X_i) + T_{process}^{fog}(X_i) + T_{rece}^{fc}(X_i)$$
where $T_{send}^{cf}(X_i)$, $T_{que}(X_i)$, $T_{process}^{fog}(X_i)$, and $T_{rece}^{fc}(X_i)$ denote the sending, queuing, processing, and receiving times of $X_i$, respectively. If the source and result data sizes of $X_i$ are $D_{rough}(X_i)$ and $D_{result}(X_i)$, and the transmission rate from the CCL to the fog layer is $v_{trans}$, then $T_{send}^{cf}(X_i)$ and $T_{rece}^{fc}(X_i)$ can be expressed as follows:
$$T_{send}^{cf}(X_i) = \frac{D_{rough}(X_i)}{v_{trans}^{cf}}$$
$$T_{rece}^{fc}(X_i) = \frac{D_{result}(X_i)}{v_{trans}^{fc}}$$
Assuming that the processing speed of the fog server is $V_{process}^{fog}$, $T_{process}^{fog}(X_i)$ can be expressed as follows:
$$T_{process}^{fog}(X_i) = \frac{CI(X_i)}{V_{process}^{fog}}$$
Then, the execution time for $X_i$ via the FNCL is the following:
$$T_{task}^{fog}(X_i) = \frac{D_{rough}(X_i) + D_{result}(X_i)}{v_{trans}^{cf}} + \sum_{l=1}^{L_{pre}} \max_{X_j \in X_{pre}} T_{task}(X_j) + \frac{CI(X_i)}{V_{process}^{fog}}$$
The energy cost of the FNCL consists of three components: the energy cost of transmitting the task source data, processing, and transmitting the task results.
$$E_{task}^{fog}(X_i) = E_{trans}^{cf}(X_i) + E_{process}^{fog}(X_i) + E_{trans}^{fc}(X_i)$$
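The single-fog-node time model translates term by term into code; the two transfer rates are kept separate to mirror $v_{trans}^{cf}$ and $v_{trans}^{fc}$, and all numbers in the test are illustrative:

```python
# Single-fog-node execution time per Section 4.3:
# T_task^fog = T_send^cf + T_que + T_process^fog + T_rece^fc
def fog_latency(d_rough, d_result, v_cf, v_fc, ci, v_fog, t_queue):
    t_send = d_rough / v_cf      # upload of source data
    t_recv = d_result / v_fc     # return of results
    t_proc = ci / v_fog          # local processing
    return t_send + t_queue + t_proc + t_recv
```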

4.4. Task Offloading Model Based on a Fog Cluster

Once the FCCL is selected for task execution, the workflow follows a five-phase collaborative process among the cloud, main fog node, and sub-fog nodes. First, the CCL offloads task X i to a designated main fog node. Second, the main fog node divides X i into N sub-tasks for parallel processing. Third, these sub-tasks are distributed to available fog nodes within the cluster. Fourth, after parallel execution, all intermediate results are collected back to the main fog node. Finally, the main fog node merges these results and returns the final output to the CCL.
Assuming that $X_i$ is divided into $N$ sub-tasks, the set of sub-tasks can be expressed as $X_i = \{sx_1, sx_2, \ldots, sx_n\}$. Let $fogC = \{fc_{main}, fc_1, fc_2, \ldots, fc_n\}$ be the set of fog nodes co-processing $X_i$, where $fc_{main}$ is the main fog node mentioned above, responsible for distributing the $sx_i$. The data transmission time of $sx_i$ is as follows:
$$T_{trans}(fc_{main}, fc_i) = \frac{D_{rough}(sx_i) + D_{result}(sx_i)}{v_{trans}^{ff}}$$
where $v_{trans}^{ff}$ is the average data transmission speed between fog nodes. The processing time of $sx_i$ is as follows:
$$T_{subtask}(sx_i) = \frac{D_{rough}(sx_i) + D_{result}(sx_i)}{v_{trans}^{ff}} + \frac{CI(sx_i)}{V_{process}^{fog} N}$$
Recall that the main fog node $fc_{main}$ is responsible for dividing the sub-tasks and merging the results. Therefore, the running time of $fc_{main}$ is the following:
$$T_{mainfog}(X_i) = T_{divide}(X_i) + \max_i T_{subtask}(sx_i) + T_{merge}(X_i)$$
where $T_{divide}(X_i)$ and $T_{merge}(X_i)$ are the sub-task allocation time and the result-merging time for $X_i$ on $fc_{main}$. Therefore, the execution time for $X_i$ via the FCCL is as follows:
$$T_{task}^{FogC}(X_i) = T_{send}^{cf}(X_i) + T_{que}(X_i) + T_{mainfog}(X_i) + T_{rece}^{fc}(X_i)$$
In particular, since $T_{divide}(X_i), T_{merge}(X_i) \ll T_{subtask}(sx_i)$, these terms can be neglected, and $T_{task}^{FogC}(X_i)$ is as follows:
$$T_{task}^{FogC}(X_i) = \frac{D_{rough}(X_i) + D_{result}(X_i)}{v_{trans}^{cf}} + \sum_{l=1}^{L_{pre}} \max_{X_j \in X_{pre}} T_{task}(X_j) + \max_i T_{subtask}(sx_i)$$
The energy cost of executing $X_i$ via the FCCL consists of four components: the energy cost of transmission from the CCL to the main fog node, data transfer within the fog cluster, processing, and uploading the results.
$$E_{task}^{FogC}(X_i) = E_{trans}^{cf}(X_i) + \sum_{i=1}^{|fogC|} E_{com}^{ff}(sx_i) + E_{process}^{FogC}(X_i) + E_{trans}^{fc}(X_i)$$
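Under the assumption that divide and merge times are negligible, the cluster makespan reduces to a transfer term, the queuing term, and the slowest sub-task. A minimal sketch with arbitrary rates (function names are illustrative):

```python
# Fog-cluster timing per Section 4.4. Each sub-task pays an inter-fog transfer
# plus its share of processing; the cluster finishes with the slowest sub-task.
def subtask_time(d_rough_i, d_result_i, v_ff, ci_i, v_fog, n):
    return (d_rough_i + d_result_i) / v_ff + ci_i / (v_fog * n)

def cluster_latency(d_rough, d_result, v_cf, t_queue, subtask_times):
    # cloud<->main-fog transfer + queuing + slowest parallel branch
    return (d_rough + d_result) / v_cf + t_queue + max(subtask_times)
```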

4.5. Multi-Objective Combinations of Optimization Problems

In this section, the task offloading problem in the CFHCC is formulated as a multi-objective stochastic optimization problem, aiming to balance the trade-off between task execution time and energy consumption.
Objective 1:
$$f_1 = \min \left\{ T_{task}^{cloud}(X_N), T_{task}^{fog}(X_N), T_{task}^{FogC}(X_N) \right\}$$
Objective 2:
$$f_2 = \min \sum_{i=1}^{N} \left\{ E_{task}^{cloud}(X_i), E_{task}^{fog}(X_i), E_{task}^{FogC}(X_i) \right\}$$
where $X_N$ denotes the final task of the computational application. Balancing execution time and energy consumption thus requires minimizing the completion time of the final task while accounting for the cumulative energy cost across all sub-tasks. The optimization problem must satisfy the following constraints to ensure feasible and practical task offloading decisions:
(1) Each computational task must be assigned to exactly one execution mode (the cloud, single-fog node, or fog cluster) to avoid conflicts and ensure deterministic execution:
$$\delta_i^{cloud} + \delta_i^{fog} + \delta_i^{fogC} = 1, \quad \delta_i^{cloud}, \delta_i^{fog}, \delta_i^{fogC} \in \{0, 1\}, \quad \forall i \in \{1, 2, \ldots, N\}$$
(2) To meet real-time requirements and guarantee quality of service (QoS) in smart manufacturing, each task’s execution time must not exceed its deadline:
$$T_{task}(X_i) \leq T_{\max}(X_i), \quad \forall i \in \{1, 2, \ldots, N\}$$
(3) When tasks are offloaded to the fog cluster for parallel processing, the division granularity should be limited by the available fog node resources:
n i f o g C ,                   i 1 , 2 , , N n i 1 ,   n i Ζ + , i 1 , 2 , , N
(4) Tasks with high data sensitivity must be processed locally to comply with privacy and security requirements in industrial environments:
$$\delta_i^{cloud} = 0, \quad \forall i \in \{1, 2, \ldots, N\} \ \text{with} \ Corr_{IS}(X_i) = 1$$
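Constraints (1)–(4) amount to a simple feasibility predicate over a candidate offloading decision. The mode-string encoding below is an implementation choice for this sketch, not the paper's $\delta$ representation:

```python
# Feasibility check for one candidate plan: decisions[i] is "cloud", "fog",
# or "fogC"; n_split[i] is the cluster division granularity n_i.
def feasible(decisions, deadlines, latencies, sensitive, n_split, cluster_size):
    for i, mode in enumerate(decisions):
        if mode not in ("cloud", "fog", "fogC"):                  # constraint (1)
            return False
        if latencies[i] > deadlines[i]:                           # constraint (2)
            return False
        if mode == "fogC" and not (1 <= n_split[i] <= cluster_size):  # (3)
            return False
        if sensitive[i] and mode == "cloud":                      # constraint (4)
            return False
    return True
```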

5. Algorithm Design for Offloading Decisions

To address the challenges of latency minimization and energy efficiency in large-scale intelligent manufacturing task offloading, an advanced optimization algorithm tailored for the CFHCC is proposed. Building upon the principle of multi-objective evolutionary optimization, the Fractal Space-Aware Non-dominated Sorting Genetic Algorithm II (FS-NSGA-II) is developed, which fully leverages hierarchical fractal space partitioning to enhance the global search and local exploitation capabilities for offloading decision optimization. The framework consists of two key modules: initialization based on fractal Brownian motion and NSGA-II modification based on fractal space partitioning. The mathematical models and implementation procedures for each module are detailed below.

5.1. Population Initialization Based on Fractal Brownian Motion

In the classical NSGA-II, the initial population $P_0$ of size $N_{pop}$ is generated through uniform random sampling:
$$X_k^{(0)}(i) = u_{\min} + (u_{\max} - u_{\min}) \cdot \mathrm{rand}(0, 1), \quad i \in \{1, 2, \ldots, N\}$$
where $X_k^{(0)}(i)$ represents the offloading decision for task $X_i$ in the $k$-th individual, and $u_{\min}$ and $u_{\max}$ are the lower and upper bounds of the node indices. However, this random initialization ignores task temporal constraints and fails to provide multi-scale diversity.
In FS-NSGA-II, for a given task set $CT = \{X_1, X_2, \ldots, X_N\}$, each task $X_i$ can be offloaded to a set of available nodes $U = C_e \cup F_e \cup F_C$, where $C_e$ denotes the cloud nodes, $F_e$ the individual fog nodes, and $F_C$ the fog cluster nodes. The latest completion time $CTMA(X_i)$ for each task $X_i$ is calculated as follows:
$$CTMA(X_i) = \begin{cases} T_{MA}^{resp}, & X_i = X_N \\ \min\left( \min_{X_j \in X_{post}} \left[ CTMA(X_j) - \min_{u \in S_1 \cup S_2 \cup S_3} T_{ex}^{u}(X_j) \right], \ T_{X_i}^{resp} \right), & X_i \neq X_N \end{cases}$$
where $T_{MA}^{resp}$ denotes the response deadline of the overall task, $X_{post}$ is the set of successor tasks, and $T_{ex}^{u}(X_j)$ is the execution delay of task $X_j$ on node $u$. Accordingly, the latest allowable start time for each task is defined as follows:
$$LATMA(X_i) = CTMA(X_i) - \min_{u \in C_e \cup F_e \cup F_C} T_{ex}^{u}(X_i)$$
A smaller L A T M A X i indicates stronger temporal constraints on the scheduling of task X i . During initialization, tasks are sorted and distributed according to L A T M A X i . To enhance initialization diversity, fractal Brownian motion (FBM) is introduced:
$FBM_H(t) = \frac{1}{\Gamma\left(H + \frac{1}{2}\right)} \int_{0}^{t} (t - s)^{H - \frac{1}{2}} \, dB(s)$
where $H \in (0, 1)$ is the Hurst exponent; $B(s)$ is standard Brownian motion; and $\Gamma(\cdot)$ denotes the Gamma function. $FBM_H(t)$ is normalized to the $[0, 1]$ interval and mapped to the offloading decision space. Thus, the initial solution of the $k$-th individual is given by the following:
$X_{k,i}^{(0)} = u_{\min} + (u_{\max} - u_{\min}) \cdot \frac{FBM_H\left(\frac{i + \xi_k}{N}\right) - FBM_H^{\min}}{FBM_H^{\max} - FBM_H^{\min}}$
where $\xi_k$ is the per-individual perturbation offset, and $u_{\min}$ and $u_{\max}$ are the lower and upper bounds of the node indices, respectively.
By prioritizing the allocation of time-sensitive tasks and then applying fractal Brownian motion, initialization ensures that the population both adheres to temporal requirements and achieves broad, multi-scale coverage of the solution space.
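To make the FBM mapping of Equations (28) and (29) concrete, the following Python sketch approximates $FBM_H(t)$ with a Riemann–Liouville discretization of the integral and normalizes the sample onto the node-index range. The helper names (`fbm_path`, `fbm_individual`) and the discretization itself are illustrative assumptions, not the authors' implementation:

```python
import math
import random

def fbm_path(n, hurst, seed=None):
    """Riemann-Liouville discretization of Eq. (28):
    FBM_H(t) ~ (1/Gamma(H+1/2)) * sum_{s<t} (t-s)^(H-1/2) * dB(s)."""
    rng = random.Random(seed)
    dB = [rng.gauss(0.0, 1.0 / math.sqrt(n)) for _ in range(n)]
    coef = 1.0 / math.gamma(hurst + 0.5)
    return [coef * sum((t - s) ** (hurst - 0.5) * dB[s] for s in range(t))
            for t in range(1, n + 1)]

def fbm_individual(n_tasks, u_min, u_max, hurst=0.7, seed=None):
    """Normalize an FBM sample to [0, 1] and map it onto the node-index
    range, mirroring Eq. (29)."""
    path = fbm_path(n_tasks, hurst, seed)
    lo, hi = min(path), max(path)
    span = (hi - lo) or 1.0  # guard against a degenerate constant path
    return [u_min + (u_max - u_min) * (v - lo) / span for v in path]
```

Because FBM increments are correlated, adjacent tasks tend to receive nearby node indices, giving each individual the clustered, multi-scale spread that independent uniform sampling lacks.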

5.2. NSGA-II Improvement Based on Fractal Space Partitioning

The classical NSGA-II employs a well-established multi-objective optimization framework consisting of (1) non-dominated sorting to classify individuals into Pareto fronts $F_j$ based on dominance relationships, where an individual $X_a$ dominates $X_b$ if $F_j(X_a) \le F_j(X_b)$ for all objectives and the inequality is strict for at least one objective; (2) a crowding distance calculation to maintain solution diversity within the same front; (3) tournament selection based on Pareto rank and crowding distance; (4) simulated binary crossover (SBX) and polynomial mutation; and (5) an elitism strategy that merges the parent and offspring populations and retains the best $N_{pop}$ individuals.
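The two building blocks that FS-NSGA-II retains unchanged, dominance checking and crowding distance, can be sketched as follows for minimization objectives (function names are illustrative):

```python
def dominates(fa, fb):
    """fa dominates fb if it is no worse in every objective and strictly
    better in at least one (minimization convention)."""
    return (all(a <= b for a, b in zip(fa, fb))
            and any(a < b for a, b in zip(fa, fb)))

def crowding_distance(front):
    """Crowding distance of each solution in one Pareto front, where
    `front` is a list of objective vectors; boundary points get infinity."""
    n = len(front)
    dist = [0.0] * n
    if n == 0:
        return dist
    for obj in range(len(front[0])):
        order = sorted(range(n), key=lambda i: front[i][obj])
        dist[order[0]] = dist[order[-1]] = float("inf")
        lo, hi = front[order[0]][obj], front[order[-1]][obj]
        if hi == lo:
            continue  # all solutions tie on this objective
        for a, b, c in zip(order, order[1:], order[2:]):
            dist[b] += (front[c][obj] - front[a][obj]) / (hi - lo)
    return dist
```

A larger crowding distance marks a solution in a sparser region of the front, which is the "crowding degree" diversity metric analyzed in the experiments.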
However, the uniform crossover and mutation operators in the classical NSGA-II ignore the hierarchical and heterogeneous structure inherent in CFHCC environments. The CFHCC exhibits a multi-layered resource architecture where computational nodes are organized as cloud servers, fog nodes, and fog clusters. When mapping diverse tasks to these resources, the resulting solution space displays recursive clustering and local self-similarity, as groups of high-quality solutions often emerge around certain resource combinations and reappear at different levels of granularity. This phenomenon closely corresponds to fractal theory, which describes complex systems characterized by self-similar patterns across various scales.
Based on this structural alignment, the FS-NSGA-II enhances the classical NSGA-II framework according to three main aspects while preserving its core mechanisms (non-dominated sorting, crowding distance, and elitism): (1) dynamic fractal partitioning of the solution space to mirror the hierarchical and clustered nature of CFHCC resources; (2) multi-scale genetic operators that leverage the nested structure of promising regions; and (3) adaptive, self-similar search mechanisms that facilitate both broad exploration and focused exploitation.

5.2.1. Fractal Space Partitioning

Based on the physical resource heterogeneity of the CFHCC, the solution space $S$ is divided as follows:
$S = S_{cloud} \cup S_{fog} \cup S_{fogC}$
Each subspace is then recursively divided into several self-similar subspaces, based on the actual task allocation and resource characteristics, after every 10 iterations:
$S_k^{(\sigma + 1)} = \bigcup_{j=1}^{m_\sigma} F_H\left(S_{k,j}^{(\sigma)}\right), \quad k \in \{\mathrm{cloud}, \mathrm{fog}, \mathrm{fogC}\}$
where $F_H$ is the fractal partition operator and $m_\sigma$ is the number of subspaces at level $\sigma$.
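The partition operator $F_H$ is not given in closed form here; one plausible realization, shown below as a hedged sketch, recursively splits the current solution set along its highest-variance gene until a variance-ratio threshold $\rho_{th}$ or a depth limit is reached. All function and parameter names are hypothetical:

```python
import statistics

def fractal_partition(solutions, rho_th=0.15, depth=0, max_depth=3):
    """Recursively split decision vectors into self-similar subspaces:
    pick the highest-variance gene, split at its median, and recurse.
    A hypothetical sketch of the partition operator F_H."""
    if depth >= max_depth or len(solutions) < 4:
        return [solutions]
    n_genes = len(solutions[0])
    variances = [statistics.pvariance([s[g] for s in solutions])
                 for g in range(n_genes)]
    total = sum(variances)
    g = max(range(n_genes), key=lambda i: variances[i])
    if total == 0 or variances[g] / total < rho_th:
        return [solutions]  # already homogeneous enough to stop
    pivot = statistics.median(s[g] for s in solutions)
    left = [s for s in solutions if s[g] <= pivot]
    right = [s for s in solutions if s[g] > pivot]
    if not left or not right:
        return [solutions]
    return (fractal_partition(left, rho_th, depth + 1, max_depth)
            + fractal_partition(right, rho_th, depth + 1, max_depth))
```

Each recursion level corresponds to one fractal level $\sigma$, so solutions in the same leaf share a level and can be handled by the level-aware operators below.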

5.2.2. Fractal Multi-Scale Crossover Operator

The fractal multi-scale crossover operator replaces the SBX operator in the classical NSGA-II and adaptively adjusts the crossover probability at different fractal levels to realize alternating coarse and fine searches. For the $i$-th gene of two parents $X^{(a)}$ and $X^{(b)}$ with fractal levels $\sigma_a$ and $\sigma_b$, respectively, the crossover operator is defined as follows:
$X_{offspring}^{(c)}(i) = \begin{cases} X^{(a)}(i), & \text{if } r_i < p_\sigma^{cross} \\ X^{(b)}(i), & \text{otherwise} \end{cases}$
where $p_\sigma^{cross}$ is the crossover probability at fractal level $\sigma = \min(\sigma_a, \sigma_b)$ and $r_i \sim U(0, 1)$. At lower levels (coarse scales), $p_\sigma^{cross}$ is larger to promote global recombination; at higher levels (fine scales), it is smaller to reinforce local inheritance.
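Under the stated rule, the operator reduces to a level-dependent uniform crossover; a minimal sketch, with a hypothetical `p_cross_by_level` lookup mapping fractal levels to probabilities, might look like:

```python
import random

def fractal_crossover(parent_a, parent_b, level_a, level_b,
                      p_cross_by_level, rng=None):
    """Uniform-style crossover of Eq. (32): each gene is inherited from
    parent_a with the probability assigned to the coarser shared level."""
    rng = rng or random.Random()
    sigma = min(level_a, level_b)  # coarser of the two fractal levels
    p = p_cross_by_level[sigma]
    return [a if rng.random() < p else b
            for a, b in zip(parent_a, parent_b)]
```

Because the probability is keyed on the coarser shared level, pairs drawn from coarse subspaces recombine aggressively, while pairs inside a fine subspace mostly preserve one parent's local structure, matching the coarse/fine search alternation described above.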

5.2.3. Fractal Adaptive Mutation Operator

The fractal adaptive mutation operator replaces the polynomial mutation in the classical NSGA-II to enable multi-scale perturbations. For the $i$-th gene of individual $X^{(c)}$ at fractal level $\sigma$, the mutation operator is as follows:
$X_{mutated}^{(c)}(i) = \begin{cases} X^{(c)}(i) + \alpha_\sigma H_\sigma N(0, 1), & \text{if } p < q_\sigma^{near} \\ u_{\min} + (u_{\max} - u_{\min}) \cdot \mathrm{rand}(0, 1), & \text{if } q_\sigma^{near} \le p < q_\sigma^{near} + q_\sigma^{far} \\ X^{(c)}(i) + p_{gauss}^{\sigma} N(0, \sigma^2), & \text{otherwise} \end{cases}$
where $p \sim U(0, 1)$; $\alpha_\sigma$ is the adaptive step size for fractal level $\sigma$; $H_\sigma$ is the fractal index for this level; $q_\sigma^{near}$ and $q_\sigma^{far}$ are the probabilities of intra- and inter-level perturbations, respectively; and $p_{gauss}^{\sigma}$ controls the proportion of Gaussian fractal micro-perturbations.
This mechanism ensures that (1) a large-scale jump search is conducted at coarse levels to escape local optima, and (2) fine-tuning is performed at refined levels to improve the balance and convergence of the solution set.
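The three-branch rule of Equation (33) can be sketched as follows, with per-level dictionaries supplying the step sizes and branch probabilities. All names are hypothetical, and clamping out-of-range values back to the node-index interval is an added assumption:

```python
import random

def fractal_mutation(x, level, u_min, u_max, alpha, h_idx,
                     q_near, q_far, p_gauss, rng=None):
    """Three-branch mutation of Eq. (33): intra-level step, inter-level
    random reset, or Gaussian micro-perturbation, chosen per gene."""
    rng = rng or random.Random()
    child = list(x)
    for i in range(len(child)):
        p = rng.random()
        if p < q_near[level]:                      # intra-level jump
            child[i] += alpha[level] * h_idx[level] * rng.gauss(0.0, 1.0)
        elif p < q_near[level] + q_far[level]:     # inter-level reset
            child[i] = u_min + (u_max - u_min) * rng.random()
        else:                                      # fractal micro-perturbation
            child[i] += p_gauss[level] * rng.gauss(0.0, level ** 2 or 1.0)
    # clamp mutated genes back into the valid node-index range
    return [min(max(v, u_min), u_max) for v in child]
```

At coarse levels the first two branches dominate, producing the large jumps that escape local optima; at fine levels the Gaussian branch performs the local fine-tuning described above.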

5.3. Flow and Pseudocode of FS-NSGA-II

Figure 4 illustrates the proposed FS-NSGA-II framework. The algorithm begins with population initialization based on fractal Brownian motion, ensuring diverse and multi-scale coverage of the solution space. The solution space is then recursively partitioned according to fractal principles, and the search granularity is dynamically adapted based on the distribution of solutions. Evolutionary operators such as crossover and mutation are controlled by the hierarchical structure of the partitions, balancing local refinement and global exploration. This self-adaptive, multi-scale strategy improves both convergence and diversity for large-scale heterogeneous task offloading. The detailed algorithmic steps are presented in Algorithm 1.
Algorithm 1: FS-NSGA-II
Input: Maximum iterations $G_{max}$, population size $N_{pop}$, fractal parameter $H$, fractal partition threshold $\rho_{th}$, evolutionary parameters $\alpha_\sigma$, $p_\sigma^{cross}$, $q_\sigma^{near}$, $q_\sigma^{far}$, $p_{gauss}^{\sigma}$.
Output: Pareto-optimal solution set $P^*$.
// Population Initialization using FBM
for $k = 1$ to $N_{pop}$ do
    for $i = 1$ to $N$ do
        Calculate $LATMA(X_i)$ according to Equation (27);
        Generate normalized $FBM_H\left(\frac{i + \xi_k}{N}\right)$ according to Equation (28);
        Set $X_k^{(0)}(i)$ according to Equation (29);
Set $P_0 = \{X_1^{(0)}, \dots, X_{N_{pop}}^{(0)}\}$;
// Main Evolutionary Loop
for $t = 0$ to $G_{max} - 1$ do
    // Fractal Space Partitioning (every 10 iterations)
    if $t \bmod 10 = 0$ then
        Recursively apply Equations (30) and (31) to obtain fractal subspaces at each level;
    // Fitness Evaluation
    Compute fitness values $F(X)$ for all individuals in $P_t$;
    // Non-dominated Sorting and Crowding Distance
    Assign individuals to Pareto fronts $F_1, F_2, \dots$;
    Compute crowding distance $d_i$ for each individual;
    // Selection
    Select parents using tournament selection based on Pareto rank and crowding distance;
    // Crossover and Mutation
    for each parent pair $(X^{(a)}, X^{(b)})$ do
        Determine fractal levels $\sigma_a$, $\sigma_b$;
        Apply fractal crossover according to Equation (32) to generate offspring;
        Apply fractal mutation according to Equation (33) to mutate offspring;
    // Offspring Evaluation
    Compute fitness values $F(X)$ for all individuals in $Q_t$;
    // Elitism
    Merge $P_t \cup Q_t$ and perform non-dominated sorting;
    Select the top $N_{pop}$ individuals based on Pareto rank and crowding distance to form $P_{t+1}$;
    // Termination Check
    if the termination condition is met then
        Set $P^*$ as the current Pareto front $F_1$;
        return $P^*$;
To demonstrate the optimization process, we present a representative calculation for a sample task with five sub-tasks of various data sizes ([1.2, 0.8, 1.5, 1.0, 0.9] MB) and instruction counts ([500, 800, 1200, 600, 400] M). Table 3 shows the evolution of a selected population individual across three generations, illustrating how FS-NSGA-II progressively refines offloading decisions. In the initial generation (Gen 1), a randomly generated solution $x = [1, 2, 1, 2, 1]$ (where 1 denotes a single-fog node and 2 denotes a fog cluster) yields a completion time of 696 ms and an energy consumption of 65.94 J. Through fractal-based crossover and Gaussian mutation, the algorithm explores alternative configurations. By generation 50, an improved solution $x = [2, 1, 2, 1, 2]$ reduces latency to 673 ms while maintaining energy at 67.21 J. After 200 generations, the algorithm converges to a Pareto-optimal solution $x = [2, 2, 1, 2, 1]$, achieving 658 ms and 63.85 J and demonstrating simultaneous improvements in both objectives through intelligent search-space exploration. The calculation details for objective evaluation, including transmission time, computation time, and energy consumption, follow the mathematical formulations defined in Section 4.

6. Experiments and Analysis

To comprehensively evaluate the effectiveness and applicability of the proposed approach, two types of experiments were conducted: simulation experiments and a practical case study. The simulation experiments were designed to benchmark the algorithm’s performance under various scenarios, and the case study was based on a digital twin-enabled smart manufacturing workshop, aiming to demonstrate the method’s practical value in real-world applications.

6.1. Simulation Experiment

The simulation experiments were carried out in a Python-based environment to assess the optimization performance of the proposed method under different configurations. The experiments were executed on a workstation equipped with an Intel Core i7-12700 CPU and 16 GB of RAM, running Windows 11 and Python 3.10. All algorithms and simulation models were implemented using standard Python libraries and customized modules. The customized modules included a task generator module that allowed users to configure task arrival patterns, dependency structures, and computational characteristics; a resource simulator module that modeled the computing capabilities and network conditions of cloud, fog nodes, and fog clusters; and an offloading decision executor module that implemented the proposed FS-NSGA-II algorithm and baseline methods. Users could modify module parameters through configuration files (in JSON format) to adapt the simulation to different manufacturing scenarios without altering the core code.
The simulation scene parameters were derived from the practical configurations of intelligent manufacturing environments, ensuring that the experimental setup closely reflected real-world industrial applications. In particular, the performance specifications of the computing nodes were aligned with those of commercially available products, providing a realistic basis for evaluating system behavior. The detailed parameter configuration for the cloud–fog hybrid computing scenario is summarized in Table 4, including network topology, computational capabilities, task characteristics, and other essential features to guarantee the representativeness and validity of the simulation environment.
The key parameters of the FS-NSGA-II algorithm are summarized in Table 5. The parameter configuration is grounded in evolutionary algorithm theory and established principles of fractal geometry. The evolutionary parameters ($G_{max} = 200$, $N_{pop} = 20$) adhere to standard multi-objective optimization guidelines, which balance population diversity and computational efficiency. The fractal-specific parameters are initialized based on theoretical foundations: $H = 3$ provides optimal recursive partitioning depth for solution spaces with 3–8 decision variables; $\rho_{th} = 0.15$ follows the adaptive decomposition criterion that subspace variance should be reduced to 10–20% before further subdivision; and the crossover/mutation probabilities implement the classical 70–30 exploitation–exploration balance, with Gaussian perturbations constrained to ±0.6 standard deviations. These theoretically motivated initial values are subsequently validated and refined through the sensitivity analysis presented below.
A sensitivity analysis was conducted to evaluate the effect of the core parameters of FS-NSGA-II on optimization performance. The investigation focused on the fractal partition threshold ($\rho_{th}$), the Gaussian perturbation coefficient ($\alpha_\sigma$), and the fractal crossover probability ($p_\sigma^{cross}$), as these parameters fundamentally influence the algorithm's search dynamics and solution diversity. The results of the sensitivity test are shown in Figure 5.
(1) The fractal partition threshold ($\rho_{th}$) controls the granularity of recursive space division: a lower value enables finer partitioning, which enhances local exploitation but may incur higher computational costs, while a higher value favors broader exploration but can reduce solution precision. Experimental results show that setting $\rho_{th}$ to an intermediate value (0.15) achieves the best trade-off, yielding the lowest computation latency and energy consumption.
(2) The Gaussian perturbation coefficient ($\alpha_\sigma$) determines the intensity of random perturbations during solution generation. Too small a value may limit the algorithm's ability to escape local optima, while too large a value introduces excessive randomness that can slow down convergence. The results confirm that setting $\alpha_\sigma$ to 0.3 enhances both convergence speed and solution diversity.
(3) The fractal crossover probability ($p_\sigma^{cross}$) determines the likelihood of recombination within the fractal subspaces. Higher crossover probabilities promote genetic diversity but can disrupt the preservation of high-quality solutions, whereas lower probabilities may hinder exploration. The experiments indicate that $p_\sigma^{cross} = 0.8$ achieves the best balance, leading to further improvements in both latency and energy metrics.
These observations highlight that the performance of FS-NSGA-II depends critically on the interplay between partition granularity, perturbation intensity, and genetic diversity mechanisms. Optimal parameter tuning, guided by their underlying roles, is thus essential for robust and efficient optimization. The recommended settings ($\rho_{th} = 0.15$, $\alpha_\sigma = 0.3$, $p_\sigma^{cross} = 0.8$) were adopted in subsequent experiments.
To further assess the advanced nature and scalability of the proposed FS-NSGA-II, a comparative study was conducted against two representative baselines.
(1) NSGA-II [38]: The classical non-dominated sorting genetic algorithm was adopted as the ablation reference for FS-NSGA-II. This comparison enabled a clear identification of the performance gains introduced by the fractal space partitioning mechanism.
(2) Differential Evolution (DE) [45]: DE is a population-based evolutionary algorithm that demonstrates particular competitiveness in large-scale task offloading scenarios. Its robustness and simplicity in balancing exploration and exploitation make it a strong benchmark for multi-objective optimization in computation offloading problems.
Experiments were performed using three different task scales, specifically medium, large, and ultra-large scenarios, to provide a comprehensive evaluation of the performance of algorithms under varying computational loads. For each scenario, all algorithms were independently applied for 30 runs to obtain statistically robust results. Key performance metrics included computation latency and energy consumption, as illustrated in Figure 6.
The results demonstrate that FS-NSGA-II consistently outperforms both NSGA-II and DE across all task scales in terms of both computation latency and energy consumption. As the task scale increases, the superiority of FS-NSGA-II becomes increasingly pronounced. This can be attributed to the fractal space partitioning mechanism, which adaptively refines the granularity of the search space and maintains population diversity, thereby enabling the more effective exploration of the Pareto front in complex, large-scale environments. In the ultra-large task scenario, FS-NSGA-II achieved an average computation latency of 23.08 s, representing a 20.28% reduction compared to NSGA-II and 25.33% compared to DE. Similarly, the mean energy consumption decreased to 12,569.5 J, which is 3.03% and 3.19% lower than the respective baselines. In addition, as shown by the box plots and distribution curves, FS-NSGA-II not only achieved lower median values but also exhibited a tighter distribution with fewer outliers, indicating its greater robustness and solution consistency. The solution sets obtained by FS-NSGA-II displayed a more compact and symmetric distribution, highlighting its superior ability to maintain both convergence and diversity under varying problem scales.
To comprehensively evaluate the diversity of the obtained solution sets, the empirical cumulative distribution function (ECDF) and grouped histograms of crowding degrees are presented for each algorithm under different task scales in Figure 7.
(1) The ECDF curves reveal that FS-NSGA-II exhibits a noticeable rightward shift compared to NSGA-II and DE. This phenomenon arises because fractal space partitioning recursively decomposes the objective space into hierarchical subspaces, constraining genetic operations within locally bounded regions. This structured decomposition prevents solution overcrowding in globally attractive areas and actively allocates search efforts to underexplored regions, thereby maintaining larger minimum distances between neighboring solutions. This indicates that a higher proportion of Pareto solutions in FS-NSGA-II possess larger crowding degrees, reflecting a more uniform and less clustered distribution of solutions. In the ultra-large task scenario, the 100th percentile crowding degree achieved by FS-NSGA-II reached 0.723. In contrast, NSGA-II and especially DE demonstrate rapid saturation in their ECDFs, implying that a large fraction of solutions are tightly packed, which may hinder the effective exploration of the Pareto front and limit the diversity of trade-offs available to the decision-maker.
(2) The grouped histograms further corroborate these findings. FS-NSGA-II achieves an even broader distribution of crowding degrees, with a substantial number of solutions in higher crowding degree intervals. This suggests that the algorithm not only generates a wider spread of solutions but also avoids premature convergence to high-density regions. By contrast, the histograms for NSGA-II and DE are skewed towards lower crowding degree intervals, indicating a tendency for these algorithms to produce more clustered populations and, thus, limited diversity.
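The crowding-degree ECDF used in this comparison is straightforward to compute; a minimal sketch follows, with the function name `ecdf` being illustrative:

```python
def ecdf(values):
    """Empirical cumulative distribution function: returns the sorted
    sample paired with the fraction of observations at or below it."""
    xs = sorted(values)
    n = len(xs)
    return [(x, (i + 1) / n) for i, x in enumerate(xs)]

# e.g., crowding degrees of four Pareto solutions
points = ecdf([0.2, 0.5, 0.1, 0.5])
```

A rightward-shifted ECDF, as reported for FS-NSGA-II, means that larger crowding degrees account for a greater share of the sample, i.e., the front is more evenly spread.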
Building on the preceding performance and diversity analyses, Table 6 provides a comprehensive quantitative comparison of all methods under different task scales. FS-NSGA-II consistently achieved the best performance, with the lowest mean maximum latency and energy consumption, as well as a higher number of Pareto solutions and a greater mean crowding distance across all scales. Its improvements in IGD and HV further confirm its superior convergence and diversity. The standard deviations of key metrics remained low, reflecting the method's robustness and stability. In addition, FS-NSGA-II maintained competitive runtime efficiency, with average computation times consistently lower than or comparable to those of the baselines as task size increased.
From a mechanism perspective, these improvements are primarily attributed to the synergy between fractal partitioning and evolutionary operators. The adaptive granularity of space partitioning enables the algorithm to allocate search efforts dynamically to underexplored regions; enhanced crossover and perturbation strategies further promote the dispersion and uniformity of solutions. As a result, FS-NSGA-II produces a well-distributed and robust Pareto front, which is particularly advantageous for real-world multi-objective decision-making scenarios.

6.2. Case Study

Further tests were conducted to evaluate the engineering applicability of the proposed CFHCC architecture. A digital twin manufacturing system for a specific type of cabin segment was selected as the testbed to investigate the practical benefits of the CFHCC architecture. As illustrated in Figure 8, the physical and twin manufacturing spaces are interconnected through a hierarchical computing infrastructure. The system comprises one cloud server, two single-fog nodes, and four fog clusters, each containing four fog nodes. The cloud server is configured with 20 CPU cores, 128 GB of memory, and a 10 Gbps network interface. Each fog node is equipped with a 6-core processor and 8 GB of memory, and the fog clusters are connected via gigabit Ethernet.
Table 7 lists the computational tasks and parameters in the digital twin manufacturing system. Each task consists of several sub-tasks, with explicit data dependencies between them, some of which are designed to be processed in parallel. The data size and computational load of each sub-task are specified according to actual application requirements. These tasks are issued to the digital twin system concurrently at various time points to simulate the dynamic workload of the production environment in a realistic way.
The experiment tested the actual performance of the FS-NSGA-II task offloading optimization method against the other benchmarks under task quantities ranging from 10 to 50; the results are presented in Figure 9. As the task scale expanded, FS-NSGA-II consistently maintained optimal performance with increasing advantages. In large-scale concurrent task offloading environments, task offloading latency improved by 10.8% and 21.8% compared to NSGA-II and DE, respectively, while computational energy consumption was reduced by 9.34% and 15.52%, respectively. These results demonstrate that FS-NSGA-II possesses outstanding optimization capabilities in practical application environments.
To further evaluate the performance of collaborative offloading mechanisms, a series of comparative experiments was conducted. The test baselines are presented as follows:
(1) Cloud–Edge Collaborative Computing Framework (CECC) [45]: Tasks can be offloaded either to the cloud or to a single-edge node, but no collaboration occurs among multiple fog nodes.
(2) Distributed Edge Framework (Edge) [14]: Tasks are processed by independent edge nodes without the involvement of the cloud or any inter-fog coordination.
As illustrated in Figure 10, the CFHCC framework demonstrates clear advantages over both the CECC and edge frameworks under varying task scales. The CFHCC introduces a fog cluster mechanism that enables parallel processing and dynamic load balancing among multiple fog nodes. This collaborative capability yields reductions of 12.02% and 11.55% in maximum task delay and total energy consumption, respectively, under large-scale tasks compared to the CECC framework. Compared with the edge framework, CFHCC achieves greater performance stability and efficiency. The absence of cloud participation in the edge framework constrains the system's capacity to handle computation-intensive tasks or peak workload periods, leading to performance degradation and instability as the task scale grows.
Furthermore, the adoption of the CFHCC framework brings substantial practical benefits to the operation of digital twin systems. For monitoring-oriented tasks, collaborative fog clusters can process and analyze streaming sensor data in parallel, improving computational efficiency by 18.9% compared to the edge framework. For analysis-oriented tasks, the cloud’s abundant computational resources can be utilized for deep learning and large-scale data analysis, while the fog layer ensures timely preprocessing and feedback, improving computational efficiency by 28.3%.
These findings align with previous studies demonstrating the superiority of hierarchical fog–cloud architectures over purely cloud-centric or edge-only approaches [9,10,21]. Whereas recent edge computing frameworks [5,14] typically employ independent edge nodes with limited local computing capacity, our collaborative fog cluster mechanism enables parallel task processing and dynamic load balancing among multiple fog nodes, resulting in an 18.9% efficiency improvement over non-collaborative edge solutions.
While recent work has advocated for fully decentralized edge frameworks to address data privacy concerns, our case study demonstrates that completely eliminating cloud participation leads to significant performance degradation (up to 28.3% efficiency loss for analysis-intensive tasks) and instability under heavy workloads. This diverges from the common assumption in the literature [46] that edge computing inherently trades computational capability for data locality. Our case study demonstrates that appropriately designed fog-level collaboration can simultaneously preserve data privacy at the edge while achieving computational performance comparable to cloud-centric approaches [47], thereby addressing the critical gap between industrial data security requirements and computational efficiency demands.

7. Conclusions and Future Work

This study addresses the critical challenge of efficient task offloading in smart manufacturing environments characterized by complex task dependencies, data sensitivity constraints, and large-scale concurrent processing demands. The principal contributions are summarized as follows:
(1) A novel collaborative computing architecture is proposed, introducing fog cluster mechanisms that enable coordinated multi-node task execution while maintaining data locality. The case study demonstrates that fog cluster collaboration achieves an efficiency improvement of up to 28.3% over non-collaborative edge solutions, effectively reconciling industrial data protection with computational performance requirements as stated in the research objectives.
(2) A comprehensive mathematical model is established that captures the heterogeneous computing capabilities across cloud, single fog, and fog cluster layers, incorporating task dependency constraints, deadline requirements, and energy–latency trade-offs. This formulation extends existing offloading models by explicitly modeling fog cluster collaboration through sub-task decomposition, parallel processing, and result aggregation mechanisms.
(3) A fractal-enhanced multi-objective optimization algorithm is developed that exploits the structural alignment between recursive space partitioning and hierarchical computing architectures. Experimental results demonstrate that FS-NSGA-II achieves 20.28% latency reduction and 3.03% energy savings compared to standard NSGA-II in large-scale scenarios, with a superior diversity of solutions.
Future research will focus on developing adaptive mechanisms that dynamically adjust fog cluster configurations in response to time-varying workload patterns and network conditions, as relatively stable operational environments are assumed in the current framework. A comprehensive sensitivity analysis framework will also be established to quantitatively evaluate system robustness under network volatility and fog cluster scalability scenarios.

Author Contributions

Conceptualization, Z.L. (Zhiwen Lin) and C.C.; methodology, Z.L. (Zhiwen Lin) and J.C.; software, Z.L. (Zhiwen Lin); validation, C.C. and Z.L. (Zhifeng Liu); investigation, J.C.; resources, Z.L. (Zhifeng Liu); writing—original draft preparation, Z.L. (Zhiwen Lin); writing—review and editing, Z.L. (Zhiwen Lin) and J.C.; project administration, Z.L. (Zhifeng Liu). All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China (No. U23B20104), the Innovation Consortium Project of Machine Tools and Moulds in Dongguan (NO.20251201500012), the Jilin Province Science and Technology Development Plan (No. YDZJ202401314ZYTS), and the Integrated Project of the National Natural Science Foundation of China (Grant. U24B6007).

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Sarkar, B.D.; Shardeo, V.; Dwivedi, A.; Pamucar, D. Digital transition from industry 4.0 to industry 5.0 in smart manufacturing: A framework for sustainable future. Technol. Soc. 2024, 78, 102649. [Google Scholar] [CrossRef]
  2. Yang, H.B.; Ong, S.K.; Nee, A.Y.C.; Jiang, G.D.; Mei, X.S. Microservices-based cloud-edge collaborative condition monitoring platform for smart manufacturing systems. Int. J. Prod. Res. 2022, 60, 7492–7501. [Google Scholar] [CrossRef]
  3. Chheang, V.; Narain, S.; Hooten, G.; Cerda, R.; Au, B.; Weston, B.; Giera, B.; Bremer, P.T.; Miao, H.C. Enabling additive manufacturing part inspection of digital twins via collaborative virtual reality. Sci. Rep. 2024, 14, 29783. [Google Scholar] [CrossRef]
  4. Marisetty, H.V.; Fatima, N.; Gupta, M.; Saxena, P. Relationship between resource scheduling and distributed learning in IoT edge computing—An insight into complementary aspects, existing research and future directions. Internet Things 2024, 28, 101375. [Google Scholar] [CrossRef]
  5. Yao, N.F.; Zhao, Y.Q.; Guo, Y.; Kong, S.G. Few-Sample Anomaly Detection in Industrial Images With Edge Enhancement and Cascade Residual Feature Refinement. IEEE Trans. Ind. Inform. 2024, 20, 13975–13985. [Google Scholar] [CrossRef]
  6. Xiao, G.J.; Huang, Y. Equivalent self-adaptive belt grinding for the real-R edge of an aero-engine precision-forged blade. Int. J. Adv. Manuf. Tech. 2016, 83, 1697–1706. [Google Scholar] [CrossRef]
  7. Cai, Z.Y.; Du, X.Y.; Huang, T.H.; Lv, T.R.; Cai, Z.H.; Gong, G.Q. Robotic Edge Intelligence for Energy-Efficient Human-Robot Collaboration. Sustainability 2024, 16, 9788. [Google Scholar] [CrossRef]
  8. Lin, Z.W.; Liu, Z.F.; Yan, J.; Zhang, Y.Z.; Chen, C.H.; Qi, B.B.; Guo, J.Y. Digital thread-driven cloud-fog-edge collaborative disturbance mitigation mechanism for adaptive production in digital twin discrete manufacturing workshop. Int. J. Prod. Res. 2024, 1–29. [Google Scholar] [CrossRef]
  9. Yin, H.Y.; Huang, X.D.; Cao, E.R. A Cloud-Edge-Based Multi-Objective Task Scheduling Approach for Smart Manufacturing Lines. J. Grid Comput. 2024, 22, 9. [Google Scholar] [CrossRef]
  10. Ma, J.; Zhou, H.; Liu, C.C.; E, M.C.; Jiang, Z.Q.; Wang, Q. Study on Edge-Cloud Collaborative Production Scheduling Based on Enterprises With Multi-Factory. IEEE Access 2020, 8, 30069–30080. [Google Scholar] [CrossRef]
  11. Li, H.C.; Cao, Y.P.; Lei, Y.B.; Cao, H.J.; Peng, J.; Jia, Y.C. Energy-aware dynamic rescheduling of flexible manufacturing system using edge-cloud collaborative decision-making method. Int. J. Comput. Integr. Manuf. 2025, 38, 434–449. [Google Scholar] [CrossRef]
  12. Nie, Q.W.; Tang, D.B.; Liu, C.C.; Wang, L.P.; Song, J.Y. A multi-agent and cloud-edge orchestration framework of digital twin for distributed production control. Robot. Comput.-Integr. Manuf. 2023, 82, 102543. [Google Scholar] [CrossRef]
  13. Hong, Z.C.; Qu, T.; Zhang, Y.H.; Zhang, Z.F.; Huang, G.Q. Cloud-fog-edge based computing architechture and a hierarchical decision approach for distributed synchronized manufacturing systems. Adv. Eng. Inform. 2025, 65, 103386. [Google Scholar] [CrossRef]
  14. Li, X.M.; Wan, J.F.; Dai, H.N.; Imran, M.; Xia, M.; Celesti, A. A Hybrid Computing Solution and Resource Scheduling Strategy for Edge Computing in Smart Manufacturing. IEEE Trans. Ind. Inform. 2019, 15, 4225–4234. [Google Scholar] [CrossRef]
  15. Cai, J.; Fu, H.T.; Liu, Y. Deep reinforcement learning-based multitask hybrid computing offloading for multiaccess edge computing. Int. J. Intell. Syst. 2022, 37, 6221–6243. [Google Scholar] [CrossRef]
  16. Wang, M.; Zhang, Y.J.; He, X.; Yu, S.H. Joint scheduling and offloading of computational tasks with time dependency under edge computing networks. Simul. Model. Pract. Theory 2023, 129, 102824. [Google Scholar] [CrossRef]
  17. Chakraborty, C.; Mishra, K.; Majhi, S.K.; Bhuyan, H.K. Intelligent Latency-Aware Tasks Prioritization and Offloading Strategy in Distributed Fog-Cloud of Things. IEEE Trans. Ind. Inform. 2023, 19, 2099–2106. [Google Scholar] [CrossRef]
  18. Ma, S.Y.; Song, S.D.; Yang, L.Y.; Zhao, J.M.; Yang, F.; Zhai, L.B. Dependent tasks offloading based on particle swarm optimization algorithm in multi-access edge computing. Appl. Soft Comput. 2021, 112, 107790. [Google Scholar] [CrossRef]
  19. Mannhardt, F.; Petersen, S.A.; Oliveira, M.F. A trust and privacy framework for smart manufacturing environments. J. Ambient. Intell. Smart Environ. 2019, 11, 201–219. [Google Scholar] [CrossRef]
  20. Wang, Y.H.; Su, S.C.; Wang, Y.W. Attention-augmented multi-agent collaboration for Smart Industrial Internet of Things task offloading. Internet Things 2025, 31, 101572. [Google Scholar] [CrossRef]
  21. Liu, S.F.; Qiao, B.Y.; Han, D.H.; Wu, G. Task offloading method based on CNN-LSTM-attention for cloud-edge-end collaboration system. Internet Things 2024, 26, 101204. [Google Scholar] [CrossRef]
  22. Bernard, L.; Yassa, S.; Alouache, L.; Romain, O. Efficient Pareto based approach for IoT task offloading on Fog-Cloud environments. Internet Things 2024, 27, 101311. [Google Scholar] [CrossRef]
  23. Bandyopadhyay, B.; Kuila, P.; Govil, M.C.; Bey, M. Delay-sensitive task offloading and efficient resource allocation in intelligent edge-cloud environments: A discretized differential evolution-based approach. Appl. Soft Comput. 2024, 159, 111637. [Google Scholar] [CrossRef]
  24. Li, J.J.; Chai, Z.Y.; Li, Y.L.; Zhou, Y.B. Reliable and efficient computation offloading for dependency-aware tasks in IIoT using evolutionary multi-objective optimization. Future Gener. Comput. Syst.-Int. J. Escience 2025, 174, 107923. [Google Scholar] [CrossRef]
  25. Liu, Z.F.; Lin, Z.W.; Zhang, Y.Z.; Chen, C.H.; Guo, J.Y.; Qi, B.B.; Tao, F. Joint Optimization of Computational Tasks Offloading for Efficient and Secure Manufacturing Through Cloud-Fog-Edge-Terminal Architecture. IEEE Trans. Emerg. Top. Comput. Intell. 2024, 1–15. [Google Scholar] [CrossRef]
  26. Laili, Y.; Gong, J.B.; Kong, Y.S.; Wang, F.; Ren, L.; Zhang, L. Communication Intensive Task Offloading With IDMZ for Secure Industrial Edge Computing. IEEE Trans. Cloud Comput. 2025, 13, 560–577. [Google Scholar] [CrossRef]
  27. Laili, Y.; Guo, F.Q.; Ren, L.; Li, X.; Li, Y.L.; Zhang, L. Parallel Scheduling of Large-Scale Tasks for Industrial Cloud-Edge Collaboration. IEEE Internet Things J. 2023, 10, 3231–3242. [Google Scholar] [CrossRef]
  28. AlShathri, S.I.; Chelloug, S.A.; Hassan, D.S.M. Parallel Meta-Heuristics for Solving Dynamic Offloading in Fog Computing. Mathematics 2022, 10, 1258. [Google Scholar] [CrossRef]
  29. Ji, X.F.; Gong, F.M.; Wang, N.L.; Du, C.Z.; Yuan, X.B. Task offloading with enhanced Deep Q-Networks for efficient industrial intelligent video analysis in edge-cloud collaboration. Adv. Eng. Inform. 2024, 62, 102599. [Google Scholar] [CrossRef]
  30. Tang, Z.Y.; Zeng, C.; Zeng, Y.L. Research on data security in industry 4.0 manufacturing industry against the background of privacy protection challenges. Int. J. Comput. Integr. Manuf. 2025, 38, 636–648. [Google Scholar] [CrossRef]
  31. Qi, Q.L.; Tao, F. A Smart Manufacturing Service System Based on Edge Computing, Fog Computing, and Cloud Computing. IEEE Access 2019, 7, 86769–86777. [Google Scholar] [CrossRef]
  32. Bukhari, M.M.; Ghazal, T.M.; Abbas, S.; Khan, M.A.; Farooq, U.; Wahbah, H.; Ahmad, M.; Adnan, K.M. An Intelligent Proposed Model for Task Offloading in Fog-Cloud Collaboration Using Logistics Regression. Comput. Intell. Neurosci. 2022, 2022, 3606068. [Google Scholar] [CrossRef]
  33. Pham, X.Q.; Man, N.D.; Tri, N.D.T.; Thai, N.Q.; Huh, E.N. A cost- and performance-effective approach for task scheduling based on collaboration between cloud and fog computing. Int. J. Distrib. Sens. Netw. 2017, 13, 155014771774207. [Google Scholar] [CrossRef]
  34. Kabeer, M.; Yusuf, I.; Sufi, N.A. Distributed software defined network-based fog to fog collaboration scheme. Parallel Comput. 2023, 117, 103040. [Google Scholar] [CrossRef]
  35. Mukherjee, M.; Kumar, S.; Mavromoustakis, C.X.; Mastorakis, G.; Matam, R.; Kumar, V.; Zhang, Q. Latency-Driven Parallel Task Data Offloading in Fog Computing Networks for Industrial Applications. IEEE Trans. Ind. Inform. 2020, 16, 6050–6058. [Google Scholar] [CrossRef]
  36. Adhikari, M.; Srirama, S.N.; Amgoth, T. Application Offloading Strategy for Hierarchical Fog Environment Through Swarm Optimization. IEEE Internet Things J. 2020, 7, 4317–4328. [Google Scholar] [CrossRef]
  37. Aazam, M.; Zeadally, S.; Harras, K.A. Deploying Fog Computing in Industrial Internet of Things and Industry 4.0. IEEE Trans. Ind. Inform. 2018, 14, 4674–4682. [Google Scholar] [CrossRef]
  38. Cui, L.Z.; Xu, C.; Yang, S.; Huang, J.Z.; Li, J.Q.; Wang, X.Z.; Ming, Z.; Lu, N. Joint Optimization of Energy Consumption and Latency in Mobile Edge Computing for Internet of Things. IEEE Internet Things J. 2019, 6, 4791–4803. [Google Scholar] [CrossRef]
  39. Cai, J.; Liu, W.; Huang, Z.W.; Yu, F.R. Task Decomposition and Hierarchical Scheduling for Collaborative Cloud-Edge-End Computing. IEEE Trans. Serv. Comput. 2024, 17, 4368–4382. [Google Scholar] [CrossRef]
  40. Wang, Y.P.; Zhang, P.; Wang, B.; Zhang, Z.F.; Xu, Y.L.; Lv, B. A hybrid PSO and GA algorithm with rescheduling for task offloading in device-edge-cloud collaborative computing. Clust. Comput.-J. Netw. Softw. Tools Appl. 2025, 28, 101. [Google Scholar] [CrossRef]
  41. Firmin, T.; Talbi, E.G. Massively parallel asynchronous fractal optimization. In Proceedings of the 2023 IEEE International Parallel and Distributed Processing Symposium Workshops (IPDPSW), St. Petersburg, FL, USA, 15–19 May 2023; pp. 930–938. [Google Scholar] [CrossRef]
  42. Zhang, X.Y.; Ming, X.G.; Bao, Y.G. A flexible smart manufacturing system in mass personalization manufacturing model based on multi-module-platform, multi-virtual-unit, and multi-production-line. Comput. Ind. Eng. 2022, 171, 108379. [Google Scholar] [CrossRef]
  43. Wang, J.; Li, D.; Hu, Y.M. Fog Nodes Deployment Based on Space-Time Characteristics in Smart Factory. IEEE Trans. Ind. Inform. 2021, 17, 3534–3543. [Google Scholar] [CrossRef]
  44. Lou, P.; Liu, S.Y.; Hu, J.M.; Li, R.Y.; Xiao, Z.; Yan, J.W. Intelligent Machine Tool Based on Edge-Cloud Collaboration. IEEE Access 2020, 8, 139953–139965. [Google Scholar] [CrossRef]
  45. Laili, Y.J.; Wang, X.H.; Zhang, L.; Ren, L. DSAC-configured Differential Evolution for Cloud-Edge-Device Collaborative Task Scheduling. IEEE Trans. Ind. Inform. 2024, 20, 1753–1763. [Google Scholar] [CrossRef]
  46. Lee, C.K.M.; Huo, Y.Z.; Zhang, S.Z.; Ng, K.K.H. Design of a Smart Manufacturing System With the Application of Multi-Access Edge Computing and Blockchain Technology. IEEE Access 2020, 8, 28659–28667. [Google Scholar] [CrossRef]
  47. Vaidya, S.; Jethava, G. Elevating manufacturing excellence with multilevel optimization in smart factory cloud computing using hybrid model. Clust. Comput.-J. Netw. Softw. Tools Appl. 2025, 28, 342. [Google Scholar] [CrossRef]
Figure 1. Cloud–fog hierarchical collaborative computing architecture.
Figure 2. Deployment of the FCCL.
Figure 3. DAG of computational tasks.
Figure 4. The algorithm flow of FS-NSGA-II.
Figure 5. Parameter sensitivity test of FS-NSGA-II.
Figure 6. The offloading latency and energy consumption of different methods under various task scales.
Figure 7. Comparison of the diversity distribution of crowding distances from different methods under various task scales.
Figure 8. Computing resources and tasks in the digital twin manufacturing scenario.
Figure 9. The offloading results of different methods under various task scales for actual cases.
Figure 10. The offloading results of different computational modes under various task scales for actual cases.
Table 1. Comparison of representative task offloading studies in distributed computing environments.

| Study | Framework | Task Model | Parallel Processing | Data Dependency | Data Sensitivity | Task Scale | Objectives | Methods |
|---|---|---|---|---|---|---|---|---|
| [14] | Distributed edge | Independent | Yes | No | No | Small | Latency | Greedy improvement |
| [18] | Distributed edge | DAG-based | Yes | Yes | No | Large | Latency, cost | PSO improvement |
| [19] | Edge | Checkpoint | No | Yes | Yes | Small | Data security | Constraint-based optimization |
| [20] | Edge–cloud collaboration | Simple workflow | No | No | No | Large | Latency, energy | Deep reinforcement learning |
| [21] | Edge–cloud collaboration | Queue-based | No | No | No | Medium | Latency, energy | CNN-LSTM-Attention |
| [22] | Fog–cloud collaboration | Independent | Yes | No | No | Medium | Cost, makespan | GA improvement |
| [23] | Edge–cloud collaboration | Queue-based | No | No | No | Large | Latency, resource utilization | DE improvement |
| [24] | Distributed edge | DAG-based | No | Yes | No | Large | Latency, energy | MOEA/D-MTDS |
| [25] | Fog–cloud collaboration | DAG-based | Yes | Yes | Yes | Medium | Latency, manufacturing risk | SSA improvement |
| [26] | Edge–cloud collaboration | Queue-based | Yes | No | Yes | Large | Latency, manufacturing risk | DE improvement |
| [27] | Edge–cloud collaboration | Queue-based | Yes | Yes | No | Large | Latency, energy | Group-merged evolutionary |
| [28] | Fog | Independent | Yes | Yes | No | Large | Latency, energy | Parallel meta-heuristics |
| [29] | Edge–cloud collaboration | Independent | No | Yes | Yes | Medium | Latency | Deep Q-Network |
| Requirements | More flexible collaboration | Complex dependencies | Essential for complex tasks | Essential for workflow | Data security | Industrial scale | Multi-objective balance | Multi-objective hierarchical evolution |
Table 2. Symbols and definitions.

| Notation | Definition |
|---|---|
| ETL | Execution terminal layer |
| FNCL | Fog node computing layer |
| FCCL | Fog cluster computing layer |
| CCL | Cloud computing layer |
| CT_i | The i-th computational task |
| X_i | The i-th sub-task of the computational task |
| X_pre | The predecessor tasks of X_i |
| N | Total number of sub-tasks |
| sx_i | The i-th sub-task when X_i is divided |
| L_i | The i-th execution level |
| CorrIS(X_i) | The data sensitivity constraint of sub-task X_i (0 or 1) |
| CI | The number of instructions to be processed during execution |
| B(X_i) | The digital stream data size required from the industrial site (bits) |
| D_rough(X_i) | The source data size of X_i |
| D_result(X_i) | The result data size of X_i |
| fogC | The set of fog nodes co-processing X_i |
| fc_main | The main fog node responsible for task distribution and result merging |
| fc_i | The i-th fog node in the cluster |
| T_task^P(X_i) | The total execution time of X_i under offloading decision P, where P ∈ {cloud, fog, fogC} |
| T_process^P(X_i) | The processing time of X_i under offloading decision P, where P ∈ {cloud, fog, fogC} |
| T_com(i, j) | The communication time between i and j |
| T_send^{c→f}(X_i) | The sending time from the CCL to a fog node |
| T_rece^{f→c}(X_i) | The receiving time from a fog node to the CCL |
| T_task(sx_i) | The execution time of sub-task sx_i |
| T_trans^{f→f}(sx_i) | The data transmission time of sx_i between fog nodes |
| T_divide(X_i) | The task allocation time of the main fog node |
| T_merge(X_i) | The result merging time of the main fog node |
| T_max(X_i) | The maximum tolerable execution time of X_i |
| V_process^Q | The data processing speed under offloading decision Q, where Q ∈ {cloud, fog} |
| v_trans | The transmission speed from the CCL to the FCL |
| v_trans^{f→f} | The transmission speed between fog nodes |
| bw_ce | The average network bandwidth at the industrial site |
| E_task^P(X_i) | The total energy cost of X_i under offloading decision P, where P ∈ {cloud, fog, fogC} |
| E_process^P(X_i) | The processing energy cost of X_i under offloading decision P, where P ∈ {cloud, fog, fogC} |
| E_com(i, j) | The communication energy cost between i and j |
| E_trans^{i→j}(X_i) | The energy cost of transmission from i to j |
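The timing notation in Table 2 admits a simple illustrative reading for the fog-cluster case: total cluster time is the allocation overhead, plus the slowest parallel (transfer + compute) branch, plus the merge overhead. The following sketch is ours, with hypothetical function names and illustrative numbers, not the paper's implementation:

```python
# Illustrative sketch of the fog-cluster time model implied by Table 2:
# T_task^fogC = T_divide + max over nodes of (T_trans^ff + T_task(sx_i)) + T_merge.
# All values below are hypothetical.

def t_process(instructions_m, v_process_mips):
    """Processing time (s) for `instructions_m` mega-instructions at `v_process_mips` MIPS."""
    return instructions_m / v_process_mips

def t_trans(data_bits, v_trans_bps):
    """Transmission time (s) for `data_bits` over a link of `v_trans_bps` bits/s."""
    return data_bits / v_trans_bps

def t_task_fog_cluster(subtasks, v_process_mips, v_trans_ff_bps, t_divide, t_merge):
    """Total cluster execution time: sub-tasks run on cluster nodes in
    parallel, so the slowest (transfer + compute) branch dominates."""
    branch_times = [
        t_trans(bits, v_trans_ff_bps) + t_process(instr, v_process_mips)
        for bits, instr in subtasks
    ]
    return t_divide + max(branch_times) + t_merge

# Three sub-tasks: (source data in bits, instructions in M)
subs = [(8e6, 400), (4e6, 200), (6e6, 300)]
total = t_task_fog_cluster(subs, v_process_mips=4000,
                           v_trans_ff_bps=100e6, t_divide=0.002, t_merge=0.003)
print(round(total, 4))  # → 0.185
```

The max() over branches captures why balanced task division matters: one slow node delays the whole cluster result.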
Table 3. Evolution of a representative solution during the optimization process.

| Generation | Offloading Decision | Latency (ms) | Energy (J) | Improvement |
|---|---|---|---|---|
| Gen 1 (initial) | [1, 2, 1, 2, 1] | 696 | 65.94 | Baseline |
| Gen 50 (intermediate) | [2, 1, 2, 1, 2] | 673 | 67.21 | −3.3% latency |
| Gen 200 (converged) | [2, 2, 1, 2, 1] | 658 | 63.85 | −5.5% latency; −3.2% energy |
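The per-generation improvements in Table 3 are judged by Pareto dominance over the two minimized objectives. A minimal dominance check over the tabulated (latency, energy) pairs:

```python
# Pareto dominance for two minimized objectives: `a` dominates `b` when it
# is no worse in every objective and strictly better in at least one.

def dominates(a, b):
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

gen1 = (696, 65.94)    # baseline from Table 3
gen50 = (673, 67.21)   # lower latency, but higher energy
gen200 = (658, 63.85)  # better in both objectives

print(dominates(gen200, gen1))  # → True
print(dominates(gen50, gen1))   # → False (energy regressed)
```

Gen 50 does not dominate Gen 1 (it trades energy for latency), while the converged Gen 200 solution dominates both, which is why it survives non-dominated sorting.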
Table 4. The simulation scene parameter configuration of CFHCC.

| Parameter Category | Parameter Name | Value/Range |
|---|---|---|
| Computational task | Number of tasks | 50/100/200 |
| | Number of sub-tasks per task | Random [3, 8] |
| Sub-task | Sub-task data size | Random [0.5, 2.0] MB |
| | Sub-task instruction count | Random [200, 2000] M |
| | Sub-task execution priority | Random [1, 5] |
| | Deadline constraint factor | Random [1.2, 3.0] |
| | Available offloading modes | For head sub-tasks: single fog/cluster fog; for tail sub-tasks: cloud/single fog/cluster fog |
| | Sub-task dependency | All are dependent types |
| Cloud server | Number of cloud servers | 1 |
| | CPU frequency | 48 GHz |
| | Memory | 512 GB |
| | Up/down bandwidth | 50 Mbps |
| | Transmission delay to fog | Random [30, 50] ms |
| | Idle/active power | 120/300 W |
| | Power efficiency coefficient | 0.80 |
| Single-fog node | Number of fog nodes | 2 |
| | CPU frequency | 8 GHz |
| | Memory | 32 GB |
| | Up/down bandwidth | 200 Mbps |
| | Intra-cluster communication delay | Random [5, 15] ms |
| | Idle/active power | 30/80 W |
| | Power efficiency coefficient | 0.85 |
| Fog cluster | Number of fog clusters | 3 clusters, 4 nodes each |
| | CPU frequency | 4 GHz |
| | Memory | 16 GB |
| | Up/down bandwidth | 100 Mbps |
| | Intra-cluster communication delay | Random [1, 3] ms |
| | Idle/active power | 10/30 W |
| | Power efficiency coefficient | 0.85 |
| | Load balancing threshold | 0.8 |
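The randomized sub-task parameters in Table 4 could be drawn as in the following sketch; the ranges come from the table, but the class and field names are our own illustrative choices:

```python
import random
from dataclasses import dataclass

# Hypothetical sampler for one simulated sub-task using the Table 4 ranges.
# Class/field names are ours, not from the paper.

@dataclass
class SubTask:
    data_mb: float          # Random [0.5, 2.0] MB
    instructions_m: int     # Random [200, 2000] M instructions
    priority: int           # Random [1, 5]
    deadline_factor: float  # Random [1.2, 3.0]

def sample_subtask(rng):
    """Draw one sub-task with parameters uniform over the Table 4 ranges."""
    return SubTask(
        data_mb=rng.uniform(0.5, 2.0),
        instructions_m=rng.randint(200, 2000),
        priority=rng.randint(1, 5),
        deadline_factor=rng.uniform(1.2, 3.0),
    )

rng = random.Random(0)  # seeded for reproducible scenes
st = sample_subtask(rng)
print(0.5 <= st.data_mb <= 2.0 and 200 <= st.instructions_m <= 2000)  # → True
```

Seeding the generator makes a simulated scene reproducible across the compared methods, which matters when reporting means and standard deviations as in Table 6.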
Table 5. Parameter configuration of FS-NSGA-II.

| Parameter Category | Parameter Name | Value |
|---|---|---|
| Evolutionary | Maximum iterations G_max | 200 |
| | Population size N_pop | 20 |
| | Elite preservation ratio | 0.1 |
| | Crowding distance threshold | 0.05 |
| Fractal | Fractal parameter H | 3 |
| | Fractal partition threshold ρ_th | 0.15 |
| | Gaussian perturbation coefficient α_σ | 0.3 |
| | Fractal crossover probability p_cross^σ | 0.8 |
| | Near-neighbor perturbation ratio q_near^σ | 0.5 |
| | Far-neighbor perturbation ratio q_far^σ | 0.3 |
| | Gaussian perturbation probability p_gauss^σ | 0.2 |
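The fractal parameters in Table 5 suggest operators along the lines of the following sketch: recursive H-ary partitioning of a normalized decision range (fractal parameter H = 3) and a bounded Gaussian perturbation (coefficient α_σ = 0.3). This is our illustrative reading, not the authors' implementation:

```python
import random

# Sketch of fractal-style operators using the Table 5 values H = 3 and
# alpha_sigma = 0.3. Illustrative only; not the paper's code.

H = 3              # fractal parameter: sub-intervals per partition level
ALPHA_SIGMA = 0.3  # Gaussian perturbation coefficient

def partition(lo, hi, depth):
    """Recursively split [lo, hi] into H equal sub-intervals per level,
    returning the H**depth leaf intervals."""
    if depth == 0:
        return [(lo, hi)]
    step = (hi - lo) / H
    leaves = []
    for k in range(H):
        leaves.extend(partition(lo + k * step, lo + (k + 1) * step, depth - 1))
    return leaves

def perturb(x, lo, hi, rng):
    """Gaussian perturbation with sigma scaled to the interval width,
    clamped back into [lo, hi]."""
    sigma = ALPHA_SIGMA * (hi - lo)
    return min(hi, max(lo, x + rng.gauss(0.0, sigma)))

leaves = partition(0.0, 1.0, depth=2)
print(len(leaves))  # → 9 (H^2 leaf intervals)
rng = random.Random(42)
print(0.0 <= perturb(0.5, 0.0, 1.0, rng) <= 1.0)  # → True
```

Deeper recursion concentrates the search in ever-smaller regions, which is how recursive space partitioning can align exploration with the hierarchical structure of the offloading decision space.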
Table 6. Comprehensive comparison results of the performance of various methods under different task scales (M = medium, L = large, UL = ultra-large).

| Metric | FS-NSGA-II (M) | NSGA-II (M) | DE (M) | FS-NSGA-II (L) | NSGA-II (L) | DE (L) | FS-NSGA-II (UL) | NSGA-II (UL) | DE (UL) |
|---|---|---|---|---|---|---|---|---|---|
| Mean max latency | 5.6548 | 6.8307 | 7.2389 | 11.8812 | 13.8006 | 13.9732 | 27.6566 | 30.7966 | 30.9254 |
| Latency std | 0.2392 | 0.3257 | 0.3036 | 0.3609 | 0.4805 | 0.4568 | 0.5498 | 0.7665 | 0.5663 |
| Mean total energy | 3082.103 | 3158.0027 | 3159.8167 | 5863.599 | 5958.6164 | 5968.6996 | 12,866.2508 | 13,038.2297 | 13,040.4316 |
| Energy std | 17.7151 | 17.1909 | 19.9261 | 19.7667 | 23.0537 | 25.1333 | 9.6355 | 50.8201 | 57.2649 |
| Mean Pareto solutions | 20.3 | 13 | 13.6 | 19.3 | 13.9 | 11.5 | 15.5 | 13 | 12 |
| Pareto solutions std | 1.99 | 2.49 | 3.75 | 2.53 | 3.3 | 3.75 | 1.02 | 3.55 | 2.1 |
| Mean crowding distance | 12.2891 | 11.3499 | 9.0756 | 18.1167 | 14.3314 | 18.4606 | 28.0902 | 23.6302 | 25.0344 |
| Crowding distance std | 3.4795 | 6.0045 | 4.77 | 5.7525 | 8.5174 | 12.4793 | 6.8224 | 8.5287 | 11.2135 |
| Mean spread | 8.9451 | 9.3513 | 9.002 | 10.1769 | 18.2386 | 16.1303 | 23.6298 | 22.6601 | 28.0581 |
| Spread std | 3.3667 | 2.5334 | 3.8945 | 7.284 | 10.9026 | 7.7227 | 11.3778 | 14.2381 | 13.8086 |
| Mean IGD | 0.01892 | 0.07535 | 0.08934 | 0.01963 | 0.08524 | 0.08691 | 0.02045 | 0.09278 | 0.1204 |
| Mean HV | 29.2431 | 43.4783 | 29.0116 | 48.9408 | 84.2248 | 85.1995 | 142.5622 | 192.1461 | 177.3987 |
| Mean runtime (s) | 2.18 | 3.41 | 4.44 | 5.2 | 7.52 | 7.98 | 15.29 | 22.01 | 23.54 |
| Runtime std | 0.11 | 0.51 | 0.61 | 0.25 | 0.57 | 0.55 | 0.17 | 0.18 | 0.33 |
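Among the quality indicators reported in Table 6, IGD (inverted generational distance) measures how closely an obtained solution set tracks a reference front: the average, over all reference points, of each point's distance to its nearest obtained solution (lower is better). A minimal two-objective version, with illustrative points rather than the experimental fronts:

```python
import math

# Minimal IGD sketch for two objectives: mean over the reference front of
# each reference point's Euclidean distance to the nearest obtained point.
# Points below are illustrative, not from the experiments in Table 6.

def igd(reference_front, obtained):
    dists = [min(math.dist(r, s) for s in obtained) for r in reference_front]
    return sum(dists) / len(dists)

ref = [(0.0, 1.0), (0.5, 0.5), (1.0, 0.0)]  # reference front
got = [(0.0, 1.0), (1.0, 0.0)]              # obtained set misses the middle
print(round(igd(ref, got), 4))  # → 0.2357
```

Missing the middle of the front inflates IGD even though both extreme points are matched exactly, which is why IGD rewards coverage as well as convergence.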
Table 7. The task list and parameters of the digital twin manufacturing system.

| Task ID | Task Description | Task Structure | Sub-Task ID | Sub-Task Name | Data Size (MB) | Instructions (M) |
|---|---|---|---|---|---|---|
| T_1 | Equipment health monitoring | T_1 = {st_{1,1}, (st_{1,2}, st_{1,3}), st_{1,4}, st_{1,5}} | st_{1,1} | Raw sensor data acquisition | 10 | 20 |
| | | | st_{1,2} | Vibration signal preprocessing | 5 | 30 |
| | | | st_{1,3} | Temperature signal preprocessing | 2 | 15 |
| | | | st_{1,4} | Fault feature extraction | 4 | 40 |
| | | | st_{1,5} | Anomaly detection and reporting | 1 | 10 |
| T_2 | Process parameter optimization | T_2 = {st_{2,1}, st_{2,2}, (st_{2,3}, st_{2,4}), st_{2,5}} | st_{2,1} | Historical process data loading | 8 | 25 |
| | | | st_{2,2} | Data cleaning and normalization | 8 | 20 |
| | | | st_{2,3} | Machine learning modeling | 6 | 100 |
| | | | st_{2,4} | Algorithm parameter tuning | 4 | 120 |
| | | | st_{2,5} | Optimal parameter dispatch | 1 | 5 |
| T_3 | Production capacity prediction | T_3 = {st_{3,1}, st_{3,2}, (st_{3,3}, st_{3,4}), st_{3,5}, st_{3,6}} | st_{3,1} | Historical production data acq. | 6 | 18 |
| | | | st_{3,2} | Missing value imputation | 3 | 12 |
| | | | st_{3,3} | Time series feature extraction | 5 | 30 |
| | | | st_{3,4} | Product structure analysis | 2 | 14 |
| | | | st_{3,5} | Prediction model inference | 4 | 60 |
| | | | st_{3,6} | Result visualization | 2 | 10 |
| T_4 | Equipment energy analysis | T_4 = {st_{4,1}, (st_{4,2}, st_{4,3}), st_{4,4}} | st_{4,1} | Raw energy data acquisition | 7 | 22 |
| | | | st_{4,2} | Single machine energy analysis | 4 | 25 |
| | | | st_{4,3} | Group energy statistics | 3 | 20 |
| | | | st_{4,4} | Energy saving opportunity ID | 3 | 35 |
| T_5 | Online quality inspection | T_5 = {st_{5,1}, (st_{5,2}, st_{5,3}, st_{5,4}), st_{5,5}} | st_{5,1} | Image acquisition | 15 | 25 |
| | | | st_{5,2} | Image segmentation | 10 | 40 |
| | | | st_{5,3} | Defect detection | 10 | 50 |
| | | | st_{5,4} | Dimension measurement | 7 | 30 |
| | | | st_{5,5} | Result judgment and reporting | 2 | 10 |
| T_6 | Production scheduling optim. | T_6 = {st_{6,1}, (st_{6,2}, st_{6,3}), st_{6,4}, st_{6,5}} | st_{6,1} | Order information acquisition | 5 | 15 |
| | | | st_{6,2} | Equipment status modeling | 4 | 20 |
| | | | st_{6,3} | Process flow modeling | 4 | 22 |
| | | | st_{6,4} | Scheduling optimization | 3 | 110 |
| | | | st_{6,5} | Dispatching results distribution | 1 | 5 |
| T_7 | Material traceability | T_7 = {st_{7,1}, st_{7,2}, (st_{7,3}, st_{7,4}), st_{7,5}} | st_{7,1} | Batch data acquisition | 6 | 12 |
| | | | st_{7,2} | RFID information reading | 2 | 10 |
| | | | st_{7,3} | Path reconstruction | 4 | 20 |
| | | | st_{7,4} | Exception trace analysis | 3 | 30 |
| | | | st_{7,5} | Traceability report generation | 1 | 8 |
| T_8 | Process anomaly detection | T_8 = {st_{8,1}, (st_{8,2}, st_{8,3}), st_{8,4}, st_{8,5}} | st_{8,1} | Process data acquisition | 8 | 16 |
| | | | st_{8,2} | Process parameter validation | 4 | 18 |
| | | | st_{8,3} | Sensor calibration analysis | 3 | 22 |
| | | | st_{8,4} | Anomaly detection inference | 4 | 55 |
| | | | st_{8,5} | Alarm dispatch | 1 | 5 |
| T_9 | Production line twin modeling | T_9 = {st_{9,1}, st_{9,2}, (st_{9,3}, st_{9,4}, st_{9,5}), st_{9,6}} | st_{9,1} | Line structure data acquisition | 12 | 20 |
| | | | st_{9,2} | Equipment parameter acquisition | 7 | 14 |
| | | | st_{9,3} | 3D modeling data processing | 10 | 50 |
| | | | st_{9,4} | Motion simulation calculation | 9 | 80 |
| | | | st_{9,5} | Interactive UI generation | 7 | 70 |
| | | | st_{9,6} | Model validation and output | 4 | 22 |
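In the task structures of Table 7, parenthesized tuples denote sub-tasks that may run in parallel, so each task admits a simple critical-path reading: a stage lasts as long as its slowest member, and the task lasts as long as the sum of its stages. The sketch below uses T_1's instruction counts with a hypothetical unit processing speed; it is our illustrative reading, not the paper's scheduler:

```python
# Staged reading of a Table 7 task: each inner list is one stage whose
# members may run in parallel; stage duration = slowest member.
# T_1 = {st_{1,1}, (st_{1,2}, st_{1,3}), st_{1,4}, st_{1,5}} with
# instruction counts (M) 20, (30, 15), 40, 10 from Table 7.

T1_STAGES = [[20], [30, 15], [40], [10]]

def staged_time(stages, speed_mips):
    """Sum over stages of the slowest sub-task in that stage (seconds)."""
    return sum(max(instr / speed_mips for instr in stage) for stage in stages)

print(staged_time(T1_STAGES, speed_mips=1000))  # → 0.1
```

Note that the parallel stage contributes only its 30 M branch; the 15 M sibling is hidden behind it, which is the saving that cluster-fog offloading of such stages aims to exploit.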
