Search Results (65)

Search Parameters:
Keywords = cloud–edge–end collaboration

42 pages, 3816 KB  
Article
Dynamic Decision-Making for Resource Collaboration in Complex Computing Networks: A Differential Game and Intelligent Optimization Approach
by Cai Qi and Zibin Zhang
Mathematics 2026, 14(2), 320; https://doi.org/10.3390/math14020320 - 17 Jan 2026
Viewed by 161
Abstract
End–edge–cloud collaboration enables significant improvements in system resource utilization by integrating heterogeneous resources while ensuring application-level quality of service (QoS). However, achieving efficient collaborative decision-making in such architectures poses critical challenges within dynamic and complex computing network environments, including dynamic resource allocation, incentive alignment between cloud and edge entities, and multi-objective optimization. To address these issues, this paper proposes a dynamic resource optimization framework for complex cloud–edge collaborative networks, decomposing the problem into two hierarchical decision schemes: cloud-level coordination and edge-side coordination, thereby achieving adaptive resource orchestration across the end–edge–cloud continuum. Furthermore, leveraging differential game theory, we model the dynamic resource allocation and cooperation incentives between cloud and edge nodes, and derive a feedback Nash equilibrium to maximize the overall system utility, effectively resolving the inherent conflicts of interest in cloud–edge collaboration. Additionally, we formulate a joint optimization model for energy consumption and latency, and propose an Improved Discrete Artificial Hummingbird Algorithm (IDAHA) to achieve an optimal trade-off between these competing objectives, addressing the challenge of multi-objective coordination from the user perspective. Extensive simulation results demonstrate that the proposed methods exhibit superior performance in multi-objective optimization, incentive alignment, and dynamic resource decision-making, significantly enhancing the adaptability and collaborative efficiency of complex cloud–edge networks. Full article
(This article belongs to the Special Issue Dynamic Analysis and Decision-Making in Complex Networks)
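
To make the latency–energy trade-off in such offloading decisions concrete, the sketch below scores a single task on each tier with an assumed weighted-sum cost; the parameter values and linear models are invented for illustration and do not reproduce the paper's differential-game or IDAHA formulation.

```python
# Illustrative only: a toy weighted-sum scoring of latency vs. energy for placing
# one task on the end, edge, or cloud tier. All numbers are assumptions.

# Assumed per-tier compute speed, uplink rate, and radio power.
TIERS = {
    "end":   {"cpu_hz": 1e9,  "rate_bps": None, "tx_power_w": 0.0},
    "edge":  {"cpu_hz": 8e9,  "rate_bps": 50e6, "tx_power_w": 0.5},
    "cloud": {"cpu_hz": 40e9, "rate_bps": 10e6, "tx_power_w": 0.5},
}

def cost(tier, cycles, data_bits, w_latency=0.6, w_energy=0.4, kappa=1e-27):
    """Weighted latency/energy cost of running one task on the given tier."""
    t = TIERS[tier]
    compute_s = cycles / t["cpu_hz"]
    upload_s = 0.0 if t["rate_bps"] is None else data_bits / t["rate_bps"]
    # Local execution spends CPU energy (kappa * f^2 * cycles); offloading
    # spends radio energy during the upload instead.
    if tier == "end":
        energy_j = kappa * t["cpu_hz"] ** 2 * cycles
    else:
        energy_j = t["tx_power_w"] * upload_s
    return w_latency * (compute_s + upload_s) + w_energy * energy_j

task = {"cycles": 2e9, "data_bits": 4e6}
best = min(TIERS, key=lambda tier: cost(tier, **task))
print({tier: round(cost(tier, **task), 3) for tier in TIERS}, "->", best)
```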

23 pages, 16288 KB  
Article
End-Edge-Cloud Collaborative Monitoring System with an Intelligent Multi-Parameter Sensor for Impact Anomaly Detection in GIL Pipelines
by Qi Li, Kun Zeng, Yaojun Zhou, Xiongyao Xie and Genji Tang
Sensors 2026, 26(2), 606; https://doi.org/10.3390/s26020606 - 16 Jan 2026
Viewed by 93
Abstract
Gas-insulated transmission lines (GILs) are increasingly deployed in dense urban power networks, where complex construction activities may introduce external mechanical impacts and pose risks to pipeline structural integrity. However, existing GIL monitoring approaches mainly emphasize electrical and gas-state parameters, while lightweight solutions capable of rapidly detecting and localizing impact-induced structural anomalies remain limited. To address this gap, this paper proposes an intelligent end-edge-cloud monitoring system for impact anomaly detection in GIL pipelines. Numerical simulations are first conducted to analyze the dynamic response characteristics of the pipeline under impacts of varying magnitudes, orientations, and locations, revealing the relationship between impact scenarios and vibration mode evolution. An end-tier multi-parameter intelligent sensor is then developed, integrating triaxial acceleration and angular velocity measurement with embedded lightweight computing. Laboratory impact experiments are performed to acquire sensor data, which are used to train and validate a multi-class extreme gradient boosting (XGBoost) model deployed at the edge tier for accurate impact-location identification. Results show that, even with a single sensor positioned at the pipeline midpoint, fusing acceleration and angular velocity features enables reliable discrimination of impact regions. Finally, a lightweight cloud platform is implemented for visualizing structural responses and environmental parameters with downsampled edge-side data. The proposed system achieves rapid sensor-level anomaly detection, precise edge-level localization, and unified cloud-level monitoring, offering a low-cost and easily deployable solution for GIL structural health assessment. Full article
(This article belongs to the Section Industrial Sensors)
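
The edge-tier classification step can be illustrated with a generic multi-class XGBoost model over fused acceleration and angular-velocity features; the synthetic data and feature layout below are assumptions for the sketch, not the paper's dataset or hyperparameters.

```python
# Illustrative only: a multi-class XGBoost model over fused acceleration and
# angular-velocity features, standing in for the edge-tier impact-location
# classifier. Synthetic data; assumed layout of 6 statistics per sensor axis.
import numpy as np
from xgboost import XGBClassifier

rng = np.random.default_rng(0)
n_per_class, n_classes = 200, 4                  # four assumed impact regions
n_features = 6 * 6                               # 3 accel + 3 gyro axes, 6 stats each
X = rng.normal(size=(n_per_class * n_classes, n_features))
y = np.repeat(np.arange(n_classes), n_per_class)
X += y[:, None] * 0.5                            # make classes separable for the demo

clf = XGBClassifier(n_estimators=100, max_depth=4, learning_rate=0.1)
clf.fit(X[::2], y[::2])                          # train on every other sample
accuracy = (clf.predict(X[1::2]) == y[1::2]).mean()
print(f"held-out accuracy: {accuracy:.2f}")
```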

17 pages, 3550 KB  
Article
Edge Intelligence-Based Rail Transit Equipment Inspection System
by Lijia Tian, Hongli Zhao, Li Zhu, Hailin Jiang and Xinjun Gao
Sensors 2026, 26(1), 236; https://doi.org/10.3390/s26010236 - 30 Dec 2025
Viewed by 362
Abstract
The safe operation of rail transit systems relies heavily on the efficient and reliable maintenance of their equipment, as any malfunction or abnormal operation may pose serious risks to transportation safety. Traditional manual inspection methods are often characterized by high costs, low efficiency, and susceptibility to human error. To address these limitations, this paper presents a rail transit equipment inspection system based on Edge Intelligence (EI) and 5G technology. The proposed system adopts a cloud–edge–end collaborative architecture that integrates Computer Vision (CV) techniques to automate inspection tasks; specifically, a fine-tuned YOLOv8 model is employed for object detection of personnel and equipment, while a ResNet-18 network is utilized for equipment status classification. By implementing an ETSI MEC-compliant framework on edge servers (NVIDIA Jetson AGX Orin), the system enhances data processing efficiency and network performance, while further strengthening security through the use of a 5G private network that isolates critical infrastructure data from the public internet, and improving robustness via distributed edge nodes that eliminate single points of failure. The proposed solution has been deployed and evaluated in real-world scenarios on Beijing Metro Line 6. Experimental results demonstrate that the YOLOv8 model achieves a mean Average Precision (mAP@0.5) of 92.7% ± 0.4% for equipment detection, and the ResNet-18 classifier attains 95.8% ± 0.3% accuracy in distinguishing normal and abnormal statuses. Compared with a cloud-centric architecture, the EI-based system reduces the average end-to-end latency for anomaly detection tasks by 45% (28.5 ms vs. 52.1 ms) and significantly lowers daily bandwidth consumption by approximately 98.1% (from 40.0 GB to 0.76 GB) through an event-triggered evidence upload strategy involving images and short video clips, highlighting its superior real-time performance, security, robustness, and bandwidth efficiency. Full article
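
The reported bandwidth savings stem from uploading evidence only when an anomaly is detected; the sketch below illustrates that event-triggered policy with hypothetical stand-ins for the YOLOv8 detector and ResNet-18 classifier and an assumed evidence size.

```python
# Illustrative only: the event-triggered evidence-upload policy implied by the
# abstract. detect_equipment() and classify_status() are hypothetical stand-ins
# for the YOLOv8 detector and ResNet-18 classifier; the evidence size is assumed.

EVIDENCE_BYTES = 350_000                 # assumed snapshot + short-clip package

def detect_equipment(frame):             # hypothetical YOLOv8 wrapper
    return ["cabinet", "person"]

def classify_status(frame, obj):         # hypothetical ResNet-18 wrapper
    return "abnormal" if obj == "cabinet" and frame["id"] % 1000 == 0 else "normal"

def process_stream(frames):
    uploaded_bytes = 0
    for frame in frames:
        for obj in detect_equipment(frame):
            if classify_status(frame, obj) == "abnormal":
                uploaded_bytes += EVIDENCE_BYTES   # upload evidence only on events
                break                              # one evidence package per frame
    return uploaded_bytes

frames = [{"id": i} for i in range(10_000)]        # a stretch of inspection video
print(f"uploaded {process_stream(frames) / 1e6:.1f} MB instead of raw video")
```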

24 pages, 2429 KB  
Article
Secure Streaming Data Encryption and Query Scheme with Electric Vehicle Key Management
by Zhicheng Li, Jian Xu, Fan Wu, Cen Sun, Xiaomin Wu and Xiangliang Fang
Information 2026, 17(1), 18; https://doi.org/10.3390/info17010018 - 25 Dec 2025
Viewed by 306
Abstract
The rapid proliferation of Electric Vehicle (EV) infrastructures has led to the massive generation of high-frequency streaming data uploaded to cloud platforms for real-time analysis. While such data supports intelligent energy management and behavioral analytics, it also encapsulates sensitive user information, the disclosure or misuse of which can lead to significant privacy and security threats. This work addresses these challenges by developing a secure and scalable scheme for protecting and verifying streaming data during storage and collaborative analysis. The proposed scheme ensures end-to-end confidentiality, forward security, and integrity verification while supporting efficient encrypted aggregation and fine-grained, time-based authorization. It introduces a lightweight mechanism that hierarchically organizes cryptographic keys and ciphertexts over time, enabling privacy-preserving queries without decrypting individual data points. Building on this foundation, an electric vehicle key management and query system is further designed to integrate the proposed encryption and verification scheme into practical V2X environments. The system supports privacy-preserving data sharing, verifiable statistical analytics, and flexible access control across heterogeneous cloud and edge infrastructures. Analytical and experimental evidence shows that the designed system attains rigorous security guarantees alongside excellent efficiency and scalability, rendering it ideal for large-scale electric vehicle data protection and analysis tasks. Full article
(This article belongs to the Special Issue Privacy-Preserving Data Analytics and Secure Computation)
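
A minimal sketch of forward-secure, time-hierarchical key evolution in the spirit of the abstract is shown below; the SHA-256 hash chain and HMAC tags are a generic construction, not the paper's actual scheme.

```python
# Illustrative only: forward-secure key evolution per time epoch via a one-way
# hash chain, with per-record HMAC integrity tags. A generic construction in the
# spirit of the abstract, not the paper's actual scheme.
import hashlib, hmac, os

def evolve(key: bytes) -> bytes:
    """One-way step: the next epoch key cannot be inverted to the previous one."""
    return hashlib.sha256(b"evolve" + key).digest()

def tag(key: bytes, record: bytes) -> bytes:
    """Integrity tag for one streaming record under the current epoch key."""
    return hmac.new(key, record, hashlib.sha256).digest()

epoch_keys, records, tags = [], [], []
k = os.urandom(32)                          # epoch-0 key held by the terminal
for epoch in range(3):
    epoch_keys.append(k)
    rec = f"charging sample, epoch {epoch}".encode()
    records.append(rec)
    tags.append(tag(k, rec))
    k = evolve(k)                           # discard the old key after each epoch

# Time-based authorization: releasing epoch_keys[1] lets a party verify epoch-1
# records, while the one-way evolve() step keeps epoch-0 keys out of reach.
print(hmac.compare_digest(tags[1], tag(epoch_keys[1], records[1])))   # True
```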

19 pages, 2315 KB  
Article
Client-Attentive Personalized Federated Learning for AR-Assisted Information Push in Power Emergency Maintenance
by Cong Ye, Xiao Li, Zile Lei, Jianlei Wang, Tao Zhang and Sujie Shao
Information 2025, 16(12), 1097; https://doi.org/10.3390/info16121097 - 11 Dec 2025
Viewed by 259
Abstract
The integration of AI into power emergency maintenance faces a critical dilemma: centralized training compromises privacy, while standard Federated Learning (FL) struggles with the statistical heterogeneity (Non-IID) of industrial data. Traditional aggregation algorithms (e.g., FedAvg) treat clients solely based on sample size, failing to distinguish between critical fault data and redundant normal operational data. To address this theoretical gap, this paper proposes a Client-Attentive Personalized Federated Learning (PFAA) framework. Unlike conventional approaches, PFAA introduces a semantic-aware attention mechanism driven by “Device Health Fingerprints.” This mechanism dynamically quantifies the contribution of each client not just by data volume, but by the quality and physical relevance of their model updates relative to the global optimization objective. We implement this algorithm within a collaborative cloud-edge-end architecture to enable privacy-preserving, AR-assisted fault diagnosis. Extensive simulations demonstrate that PFAA effectively mitigates model divergence caused by data heterogeneity, achieving superior convergence speed and decision accuracy compared to rule-based and standard FL baselines. Full article
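
The contrast with sample-count weighting can be sketched with a generic attention-weighted aggregation of client updates; the relevance score below (cosine similarity to the previous global update direction) is an assumed stand-in for PFAA's fingerprint-driven scoring.

```python
# Illustrative only: attention-weighted aggregation of client updates instead of
# FedAvg's sample-count weighting. The relevance score (cosine similarity to the
# previous global update direction) is an assumed stand-in for PFAA's
# fingerprint-driven scoring.
import numpy as np

def attention_aggregate(client_updates, reference, temperature=0.5):
    """client_updates: flattened model deltas; reference: e.g. last global delta."""
    U = np.stack(client_updates)
    ref = reference / (np.linalg.norm(reference) + 1e-12)
    sims = U @ ref / (np.linalg.norm(U, axis=1) + 1e-12)        # cosine relevance
    weights = np.exp(sims / temperature)
    weights /= weights.sum()                                    # softmax attention
    return weights, (weights[:, None] * U).sum(axis=0)

rng = np.random.default_rng(1)
global_dir = rng.normal(size=128)
updates = [global_dir + rng.normal(scale=s, size=128) for s in (0.1, 0.5, 3.0)]
weights, new_delta = attention_aggregate(updates, global_dir)
print("attention weights:", np.round(weights, 3))   # noisiest client weighted least
```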

20 pages, 1598 KB  
Article
HGA-DP: Optimal Partitioning of Multimodal DNNs Enabling Real-Time Image Inference for AR-Assisted Communication Maintenance on Cloud-Edge-End Systems
by Cong Ye, Ruihang Zhang, Xiao Li, Wenlong Deng, Jianlei Wang and Sujie Shao
Information 2025, 16(12), 1091; https://doi.org/10.3390/info16121091 - 8 Dec 2025
Viewed by 408
Abstract
In the field of communication maintenance, Augmented Reality (AR) applications are critical for enhancing operational safety and efficiency. However, deploying the required multimodal models on resource-constrained terminal devices is challenging, as traditional cloud or on-device strategies fail to balance low latency and energy consumption. This paper proposes a Cloud-Edge-End collaborative inference framework tailored to multimodal model deployment. A subgraph partitioning strategy is introduced to systematically decompose complex multimodal models into functionally independent sub-units. Subsequently, a fine-grained performance estimation model is employed to accurately characterize both computation and communication costs across heterogeneous devices. A joint optimization problem is then formulated to minimize end-to-end inference latency and terminal energy consumption. To solve this problem efficiently, a Hybrid Genetic Algorithm for DNN Partitioning (HGA-DP), evolved over 100 generations, is designed, incorporating constraint-aware repair mechanisms and local neighborhood search to navigate the exponential search space of possible deployment combinations. Experimental results on a simulated three-tier collaborative computing platform demonstrate that, compared to traditional full on-device deployment, the proposed method reduces end-to-end inference latency by 70–80% and terminal energy consumption by 81.1%, achieving a 4.86× improvement in overall fitness score. Against the latency-optimized DADS heuristic, HGA-DP achieves 41.3% lower latency while reducing energy by 59.9%. Compared to the All-Cloud strategy, our approach delivers 71.5% latency reduction with only marginal additional terminal energy cost. This framework provides an adaptive and effective solution for real-time multimodal inference in resource-constrained scenarios, laying a foundation for intelligent, resource-aware deployment. Full article
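
A bare-bones version of the genetic search over subgraph-to-tier assignments might look like the following; the cost tables are invented, and HGA-DP's constraint-aware repair and local neighborhood search are not reproduced.

```python
# Illustrative only: a bare-bones genetic search over subgraph-to-tier assignments
# (0 = end, 1 = edge, 2 = cloud), in the spirit of HGA-DP but without its repair
# mechanisms or local neighborhood search. All cost numbers are invented.
import random

SUBGRAPH_MS = [(40, 8, 2), (120, 25, 6), (60, 12, 3), (200, 40, 10)]  # latency per tier
TRANSFER_MS = {0: 0, 1: 15, 2: 60}          # extra latency when leaving the terminal
TERMINAL_MJ = [30, 90, 45, 150]             # energy if a subgraph runs on-device

def fitness(assign, w_lat=0.7, w_energy=0.3):
    latency = sum(SUBGRAPH_MS[i][t] + TRANSFER_MS[t] for i, t in enumerate(assign))
    energy = sum(TERMINAL_MJ[i] for i, t in enumerate(assign) if t == 0)
    return w_lat * latency + w_energy * energy   # lower is better

def evolve(pop_size=30, generations=100, p_mut=0.2):
    rng = random.Random(0)
    pop = [[rng.randrange(3) for _ in SUBGRAPH_MS] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        survivors = pop[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(survivors):
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, len(a))
            child = a[:cut] + b[cut:]                                # one-point crossover
            if rng.random() < p_mut:
                child[rng.randrange(len(child))] = rng.randrange(3)  # random mutation
            children.append(child)
        pop = survivors + children
    return min(pop, key=fitness)

best = evolve()
print("best assignment:", best, "cost:", round(fitness(best), 1))
```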

24 pages, 8349 KB  
Article
AI- and Security-Empowered End–Edge–Cloud Modular Platform in Complex Industrial Processes: A Case Study on Municipal Solid Waste Incineration
by Jian Tang, Tianzheng Wang, Hao Tian and Wen Yu
Sensors 2025, 25(22), 6973; https://doi.org/10.3390/s25226973 - 14 Nov 2025
Cited by 1 | Viewed by 640
Abstract
Achieving long-term stable optimization in complex industrial processes (CIPs) is notoriously challenging due to their unclear physical/chemical reaction mechanisms, fluctuating operating conditions, and stringent regulatory constraints. A significant gap persists between promising artificial intelligence (AI) algorithms developed in academic research and their practical deployment in actual industrial processes. To bridge this gap, this article introduces the AI- and security-empowered end–edge–cloud modular platform (AISE3CMP). It consists of four systems: whole-process AI modeling; end-side basic loop and AI-assisted decision-making; edge-side security isolation and AI control; and cloud-side security transmission and AI optimization. The data isolation collection module of the platform was deployed at a municipal solid waste incineration (MSWI) power plant in Beijing, where it collected multimodal data from real-world industrial sites. The platform’s functionality and effectiveness were validated through the software and hardware developed at the Smart Environmental Protection Beijing Laboratory. The experimental results show efficient and reliable signal transmission between the systems, confirming the platform’s ability to meet the computational demands of AI-based optimization and control algorithms. Compared to previous platforms, AISE3CMP features a dual-security transmission mechanism to mitigate data exchange risks and a modular design to enhance integration efficiency. To the best of our knowledge, this platform is the first prototype of a portable, end-to-end cloud platform with a dual-layer security mechanism for CIPs. While the platform effectively addresses data transmission security, further strengthening of cloud-side data protection and ensuring operational safety on the end-side remain significant challenges for the future. Additionally, utilizing this architecture to enable multi-region and multi-plant data sharing, in order to develop industry-specific large language models, represents a key research direction. Full article

30 pages, 14021 KB  
Article
LLM-LCSA: LLM for Collaborative Control and Decision Optimization in UAV Cluster Security
by Hua Song, Zheng Yang, Haitao Du, Yuting Zhang, Jie Zeng and Xinxin He
Drones 2025, 9(11), 779; https://doi.org/10.3390/drones9110779 - 9 Nov 2025
Cited by 1 | Viewed by 3007
Abstract
With the development of unmanned aerial vehicle (UAV) technology, multimachine collaborative operations have become the core model for increasing mission effectiveness. However, large-scale UAV clusters face challenges such as dynamic security threats, heterogeneous data fusion difficulties, and resource-constrained decision-making delays. Traditional single-machine intelligent architectures have limitations when addressing new threats, such as insufficient real-time response capabilities. To address these issues, this paper presents an LLM-layered collaborative security architecture (LLM-LCSA) for multimachine collaborative security. This architecture optimizes the spatiotemporal fusion efficiency of multisource asynchronous data through cloud–edge–end collaborative deployment, combining an end lightweight LLM, an edge medium LLM, and a cloud-based foundation LLM. Additionally, a Mixture of Experts (MoE) intelligent algorithm that dynamically activates the most relevant expert models by leveraging a threat–expert association matrix is introduced, thereby increasing the accuracy of complex threat identification and dynamic adaptability. Moreover, a resource-aware multi-objective optimization model is constructed to generate optimal decisions under resource constraints. Simulation results indicate that compared with traditional methods, LLM-LCSA achieves an average 7.92% improvement in threat detection accuracy, reduces the system’s total response time by 44.52%, and enables resource scheduling during off-peak periods. This architecture provides an efficient, intelligent, and scalable solution for secure collaboration among UAV swarms. Future research should further explore its application potential in 6G network integration and large-scale swarm environments. Full article
(This article belongs to the Special Issue Advances in AI Large Models for Unmanned Aerial Vehicles)
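
The gating idea behind the threat–expert association matrix can be sketched as top-k expert activation; the threat categories, expert names, and matrix values below are invented for illustration.

```python
# Illustrative only: dynamic top-k expert activation from a threat–expert
# association matrix, as a generic stand-in for the MoE routing described in
# the abstract. Threat categories, expert models, and weights are invented.
import numpy as np

THREATS = ["gps_spoofing", "link_jamming", "intrusion", "collision_risk"]
EXPERTS = ["nav_expert", "comm_expert", "security_expert", "planning_expert"]

# Rows: detected threat categories; columns: experts. Higher = more relevant.
ASSOC = np.array([[0.9, 0.1, 0.3, 0.4],
                  [0.2, 0.9, 0.4, 0.2],
                  [0.1, 0.3, 0.9, 0.2],
                  [0.5, 0.1, 0.2, 0.9]])

def route(threat_scores, top_k=2):
    """threat_scores: confidence per threat category from the perception layer."""
    relevance = np.asarray(threat_scores) @ ASSOC          # aggregate per expert
    active = np.argsort(relevance)[-top_k:][::-1]          # activate the top-k experts
    weights = relevance[active] / relevance[active].sum()  # normalized gating weights
    return [(EXPERTS[i], round(float(w), 2)) for i, w in zip(active, weights)]

print(route([0.1, 0.8, 0.6, 0.0]))   # jamming + intrusion -> comm/security experts
```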

44 pages, 1049 KB  
Review
Toward Intelligent AIoT: A Comprehensive Survey on Digital Twin and Multimodal Generative AI Integration
by Xiaoyi Luo, Aiwen Wang, Xinling Zhang, Kunda Huang, Songyu Wang, Lixin Chen and Yejia Cui
Mathematics 2025, 13(21), 3382; https://doi.org/10.3390/math13213382 - 23 Oct 2025
Cited by 1 | Viewed by 2239
Abstract
The Artificial Intelligence of Things (AIoT) is rapidly evolving from basic connectivity to intelligent perception, reasoning, and decision making across domains such as healthcare, manufacturing, transportation, and smart cities. Multimodal generative AI (GAI) and digital twins (DTs) provide complementary solutions: DTs deliver high-fidelity virtual replicas for real-time monitoring, simulation, and optimization, while GAI enhances cognition, cross-modal understanding, and the generation of synthetic data. This survey presents a comprehensive overview of DT–GAI integration in the AIoT. We review the foundations of DTs and multimodal GAI and highlight their complementary roles. We further introduce the Sense–Map–Generate–Act (SMGA) framework, illustrating their interaction through the SMGA loop. We discuss key enabling technologies, including multimodal data fusion, dynamic DT evolution, and cloud–edge–end collaboration. Representative application scenarios, including smart manufacturing, smart cities, autonomous driving, and healthcare, are examined to demonstrate their practical impact. Finally, we outline open challenges, including efficiency, reliability, privacy, and standardization, and we provide directions for future research toward sustainable, trustworthy, and intelligent AIoT systems. Full article

28 pages, 1459 KB  
Article
Research on Computing Power Resources-Based Clustering Methods for Edge Computing Terminals
by Jian Wang, Jiali Li, Xianzhi Cao, Chang Lv and Liusong Yang
Appl. Sci. 2025, 15(20), 11285; https://doi.org/10.3390/app152011285 - 21 Oct 2025
Viewed by 627
Abstract
In the “cloud–edge–end” three-tier architecture of edge computing, the cloud, edge layer, and end-device layer collaborate to enable efficient data processing and task allocation. Certain computation-intensive tasks are decomposed into subtasks at the edge layer and assigned to terminal devices for execution. However, existing research has primarily focused on resource scheduling, paying insufficient attention to the specific requirements of tasks for computing and storage resources, as well as to constructing terminal clusters tailored to the needs of different subtasks. This study proposes a multi-objective optimization-based cluster construction method to address this gap, aiming to form matched clusters for each subtask. First, this study integrates the computing and storage resources of nodes into a unified concept termed the computing power resources of terminal nodes. A computing power metric model is then designed to quantitatively evaluate the heterogeneous resources of terminals, deriving a comprehensive computing power value for each node to assess its capability. Building upon this model, this study introduces an improved NSGA-III (Non-dominated Sorting Genetic Algorithm III) clustering algorithm. This algorithm incorporates simulated annealing and adaptive genetic operations to generate the initial population and employs a differential mutation strategy in place of traditional methods, thereby enhancing optimization efficiency and solution diversity. The experimental results demonstrate that the proposed algorithm consistently outperformed the best baseline algorithm across most scenarios, achieving average improvements of 18.07%, 7.82%, 15.25%, and 10% across the four optimization objectives, respectively. A comprehensive comparative analysis against multiple benchmark algorithms further confirms the marked competitiveness of the method in multi-objective optimization. This approach enables more efficient construction of terminal clusters adapted to subtask requirements, thereby validating its efficacy and superior performance. Full article
(This article belongs to the Section Computing and Artificial Intelligence)
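
A composite computing power value of the kind described might be computed as a weighted sum of normalized resource ratios; the weights and reference values below are assumptions, and the paper's metric model may differ.

```python
# Illustrative only: a composite "computing power" score for heterogeneous
# terminal nodes, normalizing CPU, cores, memory, and storage against reference
# values and combining them with fixed weights. The weights and reference values
# are assumptions; the paper's metric model may differ.

REFERENCE = {"cpu_ghz": 2.0, "cores": 4, "mem_gb": 4.0, "storage_gb": 64.0}
WEIGHTS   = {"cpu_ghz": 0.4, "cores": 0.2, "mem_gb": 0.25, "storage_gb": 0.15}

def computing_power(node):
    """Weighted sum of per-resource ratios relative to a reference terminal."""
    return sum(WEIGHTS[k] * node[k] / REFERENCE[k] for k in WEIGHTS)

nodes = {
    "camera_gateway": {"cpu_ghz": 1.2, "cores": 2, "mem_gb": 1.0, "storage_gb": 16},
    "industrial_pc":  {"cpu_ghz": 2.8, "cores": 8, "mem_gb": 16.0, "storage_gb": 256},
}
scores = {name: round(computing_power(n), 2) for name, n in nodes.items()}
print(scores)   # higher score -> more capable node for subtask clustering
```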

29 pages, 9032 KB  
Article
Multi-Agent Deep Reinforcement Learning for Joint Task Offloading and Resource Allocation in IIoT with Dynamic Priorities
by Yongze Ma, Yanqing Zhao, Yi Hu, Xingyu He and Sifang Feng
Sensors 2025, 25(19), 6160; https://doi.org/10.3390/s25196160 - 4 Oct 2025
Cited by 1 | Viewed by 2521
Abstract
The rapid growth of Industrial Internet of Things (IIoT) terminals has resulted in tasks exhibiting increased concurrency, heterogeneous resource demands, and dynamic priorities, significantly increasing the complexity of task scheduling in edge computing. Cloud–edge–end collaborative computing leverages cross-layer task offloading to alleviate edge node resource contention and improve task scheduling efficiency. However, existing methods generally neglect the joint optimization of task offloading, resource allocation, and priority adaptation, making it difficult to balance task execution and resource utilization under resource-constrained and competitive conditions. To address this, this paper proposes a two-stage dynamic-priority-aware joint task offloading and resource allocation method (DPTORA). In the first stage, an improved Multi-Agent Proximal Policy Optimization (MAPPO) algorithm integrated with a Priority-Gated Attention Module (PGAM) enhances the robustness and accuracy of offloading strategies under dynamic priorities; in the second stage, the resource allocation problem is formulated as a single-objective convex optimization task and solved globally using the Lagrangian dual method. Simulation results show that DPTORA significantly outperforms existing multi-agent reinforcement learning baselines in terms of task latency, energy consumption, and the task completion rate. Full article
(This article belongs to the Section Internet of Things)
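
The flavor of the second-stage convex allocation can be shown with a textbook Lagrangian-dual result: minimizing total compute latency subject to a capacity budget yields a square-root allocation rule. The sketch below uses that simplified model, not the paper's full formulation with priorities and energy terms.

```python
# Illustrative only: a closed-form resource split of the kind a Lagrangian-dual
# argument yields. Minimizing sum(c_i / f_i) subject to sum(f_i) = F gives
# f_i proportional to sqrt(c_i). A textbook stand-in, not the paper's model.
import math

def allocate_cycles(task_cycles, total_capacity):
    """task_cycles: CPU cycles each offloaded task needs; total_capacity: edge CPU Hz."""
    roots = [math.sqrt(c) for c in task_cycles]
    scale = total_capacity / sum(roots)
    return [scale * r for r in roots]          # f_i = F * sqrt(c_i) / sum_j sqrt(c_j)

tasks = [2e9, 8e9, 0.5e9]                      # cycles per offloaded task
alloc = allocate_cycles(tasks, total_capacity=10e9)
latencies = [c / f for c, f in zip(tasks, alloc)]
print([round(f / 1e9, 2) for f in alloc], [round(t, 2) for t in latencies])
```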

22 pages, 1572 KB  
Article
Collaborative Optimization of Cloud–Edge–Terminal Distribution Networks Combined with Intelligent Integration Under the New Energy Situation
by Fei Zhou, Chunpeng Wu, Yue Wang, Qinghe Ye, Zhenying Tai, Haoyi Zhou and Qingyun Sun
Mathematics 2025, 13(18), 2924; https://doi.org/10.3390/math13182924 - 10 Sep 2025
Viewed by 923
Abstract
The complex electricity consumption situation on the customer side and large-scale wind and solar power generation have gradually shifted the traditional “source-follow-load” model in the power system towards the “source-load interaction” model. Existing voltage regulation methods require excessive computing resources to accurately predict the fluctuating load under the new energy structure. However, with the development of artificial intelligence and cloud computing, more methods for processing big data have emerged. This paper proposes a new method for electricity consumption analysis that combines traditional mathematical statistics with machine learning to overcome the limitations of non-intrusive load detection methods and to develop a distributed optimization of cloud–edge–device distribution networks based on electricity consumption. To address problems such as overfitting and the demand for accurate short-term renewable power generation prediction, the long short-term memory (LSTM) method is used to process time-series data, and an improved algorithm is developed in combination with error feedback correction. The R2 value of the coupling algorithm reaches 0.991, while the values of RMSE, MAPE and MAE are 1347.2, 5.36 and 199.4, respectively. Because power prediction cannot completely eliminate errors, a consensus (consistency) algorithm is combined with it to construct the regulation strategy; under this strategy, stability is achieved after 25 iterations and the optimal regulation is obtained. Finally, a cloud–edge–device distributed co-evolution model of the power grid is obtained, achieving economical voltage control of the power grid. Full article
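
The consistency (consensus) step can be illustrated with a plain average-consensus iteration over a small ring of nodes; the topology and mixing weights are invented, and the sketch is not the paper's regulation strategy.

```python
# Illustrative only: a plain average-consensus iteration of the kind the abstract's
# "consistency algorithm" refers to. Nodes repeatedly mix their values with their
# neighbors' values and converge to the network average; the ring topology and
# mixing weight below are invented for the sketch.
import numpy as np

def consensus(values, iterations=25, mix=0.3):
    x = np.asarray(values, dtype=float)
    for _ in range(iterations):
        left, right = np.roll(x, 1), np.roll(x, -1)           # ring-topology neighbors
        x = (1 - 2 * mix) * x + mix * left + mix * right      # doubly-stochastic update
    return x

voltages = [0.98, 1.03, 1.01, 0.95, 1.05]      # per-feeder regulation targets (p.u.)
print(np.round(consensus(voltages), 4), "average:", round(sum(voltages) / 5, 4))
```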

22 pages, 3203 KB  
Article
Task Offloading Strategy of Multi-Objective Optimization Algorithm Based on Particle Swarm Optimization in Edge Computing
by Liping Yang, Shengyu Wang, Wei Zhang, Bin Jing, Xiaoru Yu, Ziqi Tang and Wei Wang
Appl. Sci. 2025, 15(17), 9784; https://doi.org/10.3390/app15179784 - 5 Sep 2025
Cited by 1 | Viewed by 2540
Abstract
With the rapid development of edge computing and deep learning, the efficient deployment of deep neural networks (DNNs) on resource-constrained terminal devices faces multiple challenges, such as execution delay, high energy consumption, and resource allocation costs. This study proposes an improved Multi-Objective Particle Swarm Optimization (MOPSO) algorithm. Unlike conventional PSO, our approach integrates a historical optimal solution detection mechanism and a dynamic temperature regulation strategy to overcome its limitations in this application scenario. First, an end–edge–cloud collaborative computing framework is constructed. Within this framework, a multi-objective optimization model is established, aiming to minimize time delay, energy consumption, and cloud configuration cost. To solve this model, an optimization method is designed that integrates a historical optimal solution detection mechanism and a dynamic temperature regulation strategy into the MOPSO algorithm. Experiments on six types of DNNs, including the Visual Geometry Group (VGG) series, have shown that this algorithm reduces execution time by an average of 58.6%, average energy consumption by 61.8%, and cloud configuration costs by 36.1% compared to traditional offloading strategies. Its Global Search Capability Index (GSCI) reaches 92.3%, which is 42.6% higher than the standard PSO algorithm. This method provides an efficient, secure, and stable cooperative computing solution for multi-constraint task offloading in an edge computing environment. Full article
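
A minimal PSO over per-layer placement choices, scored by a weighted delay/energy/cost objective, illustrates the encoding such an offloading search uses; the cost tables are invented and the paper's history-detection and temperature-regulation mechanisms are not reproduced.

```python
# Illustrative only: a minimal PSO over per-layer placement decisions, showing how
# continuous particle positions can encode discrete end/edge/cloud choices and be
# scored by a weighted delay/energy/cost objective. Cost tables are invented.
import random

LAYER_COST = [  # (delay_ms, terminal_energy_mJ, cloud_cost) per tier: end, edge, cloud
    [(30, 20, 0), (12, 2, 0), (5, 1, 3)],
    [(80, 60, 0), (30, 4, 0), (10, 2, 8)],
    [(50, 35, 0), (20, 3, 0), (8, 1, 5)],
]

def objective(position, w=(0.5, 0.3, 0.2)):
    tiers = [min(2, max(0, int(p))) for p in position]       # decode to 0/1/2
    totals = [sum(LAYER_COST[i][t][k] for i, t in enumerate(tiers)) for k in range(3)]
    return sum(wk * tk for wk, tk in zip(w, totals))

rng = random.Random(0)
n_particles, dim = 20, len(LAYER_COST)
pos = [[rng.uniform(0, 3) for _ in range(dim)] for _ in range(n_particles)]
vel = [[0.0] * dim for _ in range(n_particles)]
pbest = [p[:] for p in pos]
gbest = min(pos, key=objective)[:]

for _ in range(50):
    for i in range(n_particles):
        for d in range(dim):
            vel[i][d] = (0.7 * vel[i][d]
                         + 1.5 * rng.random() * (pbest[i][d] - pos[i][d])
                         + 1.5 * rng.random() * (gbest[d] - pos[i][d]))
            pos[i][d] = min(2.99, max(0.0, pos[i][d] + vel[i][d]))
        if objective(pos[i]) < objective(pbest[i]):
            pbest[i] = pos[i][:]
            if objective(pbest[i]) < objective(gbest):
                gbest = pbest[i][:]

print("best placement:", [int(p) for p in gbest], "score:", round(objective(gbest), 1))
```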

23 pages, 4093 KB  
Article
Multi-Objective Optimization with Server Load Sensing in Smart Transportation
by Youjian Yu, Zhaowei Song and Qinghua Zhang
Appl. Sci. 2025, 15(17), 9717; https://doi.org/10.3390/app15179717 - 4 Sep 2025
Viewed by 745
Abstract
The rapid development of telematics technology has greatly supported high-computing applications like autonomous driving and real-time road condition prediction. However, the limited computational resources and dynamic topology of in-vehicle terminals pose challenges such as delay, load imbalance, and bandwidth consumption. To address these, a three-layer vehicular network architecture based on cloud–edge–end collaboration was proposed, with V2X technology used for multi-hop transmission. Models for delay, energy consumption, and edge caching were designed to meet the requirements for low delay, energy efficiency, and effective caching. Additionally, a dynamic pricing model for edge resources, based on load-awareness, was proposed to balance service quality and cost-effectiveness. The enhanced NSGA-III algorithm (ADP-NSGA-III) was applied to optimize system delay, energy consumption, and system resource pricing. The experimental results (mean of 30 independent runs) indicate that, compared with the NSGA-II, NSGA-III, MOEA-D, and SPEA2 optimization schemes, the proposed scheme reduced system delay by 21.63%, 5.96%, 17.84%, and 8.30%, respectively, in a system with 55 tasks. The energy consumption was reduced by 11.87%, 7.58%, 15.59%, and 9.94%, respectively. Full article
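
The load-aware pricing idea can be sketched with a convex price curve that rises as an edge server approaches saturation; the base price and exponent below are assumptions rather than the paper's pricing model.

```python
# Illustrative only: a load-aware price curve for edge compute, where the unit
# price rises convexly with utilization so heavily loaded servers become less
# attractive during offloading. Base price and exponent are assumptions.

def unit_price(utilization, base=1.0, alpha=2.0):
    """Price per unit of edge compute; grows steeply as the server nears saturation."""
    u = min(max(utilization, 0.0), 0.95)       # cap to avoid division by zero
    return base / (1.0 - u) ** alpha

def cheapest_server(loads, demand):
    """Pick the edge server whose post-admission price is lowest for this request."""
    return min(range(len(loads)), key=lambda i: unit_price(loads[i] + demand))

loads = [0.20, 0.55, 0.85]                     # current utilization of three edge servers
print([round(unit_price(u), 2) for u in loads], "->", cheapest_server(loads, demand=0.10))
```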

29 pages, 5213 KB  
Article
Design and Implementation of a Novel Intelligent Remote Calibration System Based on Edge Intelligence
by Quan Wang, Jiliang Fu, Xia Han, Xiaodong Yin, Jun Zhang, Xin Qi and Xuerui Zhang
Symmetry 2025, 17(9), 1434; https://doi.org/10.3390/sym17091434 - 3 Sep 2025
Viewed by 1001
Abstract
Calibration of power equipment has become an essential task in modern power systems. This paper proposes a distributed remote calibration prototype based on a cloud–edge–end architecture by integrating intelligent sensing, Internet of Things (IoT) communication, and edge computing technologies. The prototype employs a high-precision frequency-to-voltage conversion module leveraging satellite signals to address traceability and value transmission challenges in remote calibration, thereby ensuring reliability and stability throughout the process. Additionally, an environmental monitoring module tracks parameters such as temperature, humidity, and electromagnetic interference. Combined with video surveillance and optical character recognition (OCR), this enables intelligent, end-to-end recording and automated data extraction during calibration. Furthermore, a cloud-edge task scheduling algorithm is implemented to offload computational tasks to edge nodes, maximizing resource utilization within the cloud–edge collaborative system and enhancing service quality. The proposed prototype extends existing cloud–edge collaboration frameworks by incorporating calibration instruments and sensing devices into the network, thereby improving the intelligence and accuracy of remote calibration across multiple layers. Furthermore, this approach facilitates synchronized communication and calibration operations across symmetrically deployed remote facilities and reference devices, providing solid technical support to ensure that measurement equipment meets the required precision and performance criteria. Full article
(This article belongs to the Section Computer)
