Search Results (41)

Search Parameters:
Keywords = cloud migration complexity

27 pages, 4986 KB  
Article
DI-WOA: Symmetry-Aware Dual-Improved Whale Optimization for Monetized Cloud Compute Scheduling with Dual-Rollback Constraint Handling
by Yuanzhe Kuang, Zhen Zhang and Hanshen Li
Symmetry 2026, 18(2), 303; https://doi.org/10.3390/sym18020303 - 6 Feb 2026
Viewed by 137
Abstract
With the continuous growth in the scale of engineering simulation and intelligent manufacturing workflows, more and more problem-solving tasks are migrating to cloud computing platforms to obtain elastic computing power. However, a core operational challenge for cloud platforms lies in stably obtaining high-quality scheduling solutions that are both efficient and free of symmetric redundancy, owing to the coupling of multiple constraints, partial resource interchangeability, inconsistent multi-objective evaluation scales, and heterogeneous resource fluctuations. To address this, this paper proposes a Dual-Improved Whale Optimization Algorithm (DI-WOA) together with a modeling framework that combines discrete–continuous divide-and-conquer modeling, a unified monetization of the objective function, and the separation of soft and hard constraints; the algorithm’s iterations follow an augmented Lagrangian dual-rollback mechanism and are built on a three-layer “discrete gene–real-valued encoding–decoder” structure. Scalability experiments show that as the number of tasks J increases, DI-WOA ranks first or second at most scale points, indicating its effectiveness in reducing unified billing costs even under intensified task coupling and resource contention. Ablation results demonstrate that the complete DI-WOA achieves final objective values (OBJ) 8.33%, 5.45%, and 13.31% lower than the baseline, the variant without the dual update (w/o dual), and the variant without perturbation (w/o perturb), respectively, significantly enhancing convergence and final solution quality on this scheduling model. In robustness experiments, DI-WOA exhibits the lowest or second-lowest OBJ and soft-constraint violation, indicating higher controllability under perturbations. In multi-workload generalization experiments, DI-WOA achieves the best or second-best mean OBJ across all scenarios with H = 3/4, leading the runner-up algorithm by up to 13.85%, demonstrating good adaptability to workload variations. Overall, the experimental results show that DI-WOA is of practical value for stably producing high-quality scheduling solutions that are efficient and free of symmetric redundancy in complex and diverse environments. Full article
(This article belongs to the Section Computer)
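
The dual-rollback idea lends itself to a compact sketch: a whale-style move accepts improving candidates under an augmented Lagrangian, the multiplier is raised by dual ascent, and the search rolls back to the last feasible snapshot when the violation grows too large. Everything below (the objective f, the constraint g, and all parameters) is an illustrative assumption, not the paper's scheduling model:

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x):                 # hypothetical monetized objective to minimize
    return float(np.sum(x ** 2))

def g(x):                 # hypothetical soft constraint, feasible when g(x) <= 0
    return float(np.sum(x) - 5.0)

def lagr(x, lam, rho):    # augmented Lagrangian of f subject to g(x) <= 0
    v = max(0.0, g(x))
    return f(x) + lam * v + 0.5 * rho * v * v

x, lam, rho = rng.uniform(0.0, 2.0, size=4), 0.0, 10.0
snapshot = (x.copy(), lam)                     # state to roll back to
for it in range(300):
    a = 2.0 * (1.0 - it / 300.0)               # WOA-style shrinking coefficient
    cand = x - a * (2.0 * rng.random(4) - 1.0) * np.abs(x)
    if lagr(cand, lam, rho) < lagr(x, lam, rho):
        x = cand                               # accept improving whale move
    lam = max(0.0, lam + rho * g(x))           # dual ascent on the multiplier
    if g(x) <= 0.0:
        snapshot = (x.copy(), lam)             # feasible: record rollback point
    elif g(x) > 1.0:                           # badly infeasible: roll back
        x, lam = snapshot[0].copy(), snapshot[1]

print("x:", x, "f:", f(x), "violation:", max(0.0, g(x)))
```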

26 pages, 2431 KB  
Article
Multi-Objective Deep Reinforcement Learning for Dynamic Task Scheduling Under Time-of-Use Electricity Price in Cloud Data Centers
by Xiao Liao, Yiqian Li, Luyao Liu, Lihao Deng, Jinlong Hu and Xiaofei Wu
Electronics 2026, 15(1), 232; https://doi.org/10.3390/electronics15010232 - 4 Jan 2026
Viewed by 414
Abstract
The high energy consumption and substantial electricity costs of cloud data centers pose significant challenges related to carbon emissions and operational expenses for service providers. The temporal variability of electricity pricing in real-world scenarios adds complexity to this problem while simultaneously offering novel opportunities for mitigation. This study addresses the task scheduling optimization problem under time-of-use pricing conditions in cloud computing environments by proposing an innovative task scheduling approach. To balance the three competing objectives of electricity cost, energy consumption, and task delay, we formulate a price-aware, multi-objective task scheduling optimization problem and establish a Markov decision process model. By integrating prioritized experience replay with a multi-objective preference vector selection mechanism, we design a dynamic, multi-objective deep reinforcement learning algorithm named TEPTS. The simulation results demonstrate that TEPTS achieves superior convergence and diversity compared to three other multi-objective optimization methods while exhibiting excellent scalability across varying test durations and system workload intensities. Specifically, under the TOU pricing scenario, the task migration rate during peak periods exceeds 33.90%, achieving a 13.89% to 36.89% reduction in energy consumption and a 14.09% to 45.33% reduction in electricity costs. Full article
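
The preference-vector mechanism can be illustrated with a minimal scalarization sketch: a weight vector drawn from the simplex collapses the three objectives into a single reward for the agent. The weights, signs, and sample values are assumptions for illustration only:

```python
import numpy as np

def scalarized_reward(cost, energy, delay, w):
    """Negative weighted sum: the agent maximizes reward, so costs enter negated."""
    objs = np.array([cost, energy, delay], dtype=float)
    return float(-np.dot(w, objs))

# sample a preference vector from the simplex, as preference-conditioned
# multi-objective RL methods commonly do
rng = np.random.default_rng(1)
w = rng.dirichlet(alpha=np.ones(3))
print(w, scalarized_reward(cost=3.2, energy=1.8, delay=0.4, w=w))
```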

25 pages, 33156 KB  
Article
Combining Ground Penetrating Radar and a Terrestrial Laser Scanner to Constrain EM Velocity: A Novel Approach for Masonry Wall Characterization in Cultural Heritage Applications
by Giorgio Alaia, Maurizio Ercoli, Raffaella Brigante, Laura Marconi, Nicola Cavalagli and Fabio Radicioni
Remote Sens. 2026, 18(1), 15; https://doi.org/10.3390/rs18010015 - 20 Dec 2025
Viewed by 592
Abstract
In this paper, the combined use of Ground Penetrating Radar (GPR) and a Terrestrial Laser Scanner (TLS) is illustrated to highlight multiple advantages arising from the integration of these two distinct Non-Destructive Testing (NDT) techniques in the investigation of a historical wall. In particular, thanks to the TLS point cloud, a precise evaluation of the medium’s thickness, as well as its irregularities, was carried out. Based on this accurate geometrical constraint, a first-order velocity model, to be used for a time-to-depth conversion and for a post-stack GPR data migration, was computed. Moreover, a joint visualization of both datasets (GPR and TLS) was achieved in a novel tridimensional workspace. This solution provided a more straightforward and efficient way of testing the reliability of the combined results, proving the efficiency of the proposed method in the estimation of a velocity model, especially in comparison to conventional GPR methods. This demonstrates how the integration of different remote sensing methodologies can yield a more solid interpretation, taking into account the uncertainties related to the geometrical irregularities of the external wall’s surface and the inner structure generating complex GPR signatures. Full article
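
The geometric constraint at the heart of the method reduces to a short calculation: with the wall thickness known from the TLS point cloud and the two-way traveltime of the back-wall reflection picked on the radargram, the EM velocity and the implied relative permittivity follow directly. The numbers below are illustrative, not the paper's measurements:

```python
C = 0.2998            # speed of light in m/ns

d_tls = 0.62          # wall thickness from the TLS point cloud (m), assumed
t_twt = 9.8           # picked two-way traveltime of the back-wall reflection (ns), assumed

v = 2.0 * d_tls / t_twt          # EM velocity in m/ns
eps_r = (C / v) ** 2             # relative permittivity implied by v

print(f"v = {v:.4f} m/ns, eps_r = {eps_r:.1f}")
```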

44 pages, 1716 KB  
Article
Creating Automated Microsoft Bicep Application Infrastructure from GitHub in the Azure Cloud
by Vladislav Manolov, Daniela Gotseva and Nikolay Hinov
Future Internet 2025, 17(8), 359; https://doi.org/10.3390/fi17080359 - 7 Aug 2025
Viewed by 2004
Abstract
Infrastructure as code (IaC) is essential for modern cloud development, enabling teams to define, deploy, and manage infrastructure in a consistent and repeatable manner. As organizations migrate to Azure, selecting the right approach is crucial for managing complexity, minimizing errors, and supporting DevOps practices. This paper examines the use of Azure Bicep with GitHub Actions to automate infrastructure deployment for an application in the Azure cloud. It explains how Bicep improves readability, modularity, and integration compared to traditional ARM templates and other automation tools. The solution utilizes a modular Bicep design to deploy resources, including virtual networks, managed identities, container apps, databases, and AI services, with environment-specific parameters for development, QA, and production. GitHub Actions workflows automate the building, deployment, and tearing down of infrastructure, ensuring consistent deployments across environments. Security considerations include managed identities, private networking, and secret management in continuous integration (CI) and continuous delivery (CD) pipelines. This paper provides a detailed architectural overview, workflow analysis, and implementation guidance to help teams adopt a robust, automated approach to Azure infrastructure deployment. By leveraging automation tooling and modern DevOps practices, organizations can streamline delivery and maintain secure, maintainable cloud environments. Full article
(This article belongs to the Special Issue IoT, Edge, and Cloud Computing in Smart Cities, 2nd Edition)
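
The deployment step such a pipeline automates can be sketched as a thin wrapper around the Azure CLI; the resource-group name, template path, and parameter files below are hypothetical stand-ins for whatever the repository actually defines:

```python
import subprocess

def deploy(environment: str) -> None:
    # az deployment group create deploys a Bicep template into a resource group;
    # all names and paths here are assumptions for illustration
    subprocess.run(
        [
            "az", "deployment", "group", "create",
            "--resource-group", f"rg-app-{environment}",          # hypothetical RG
            "--template-file", "infra/main.bicep",                # hypothetical path
            "--parameters", f"@infra/params.{environment}.json",  # hypothetical file
        ],
        check=True,  # fail the pipeline step on a non-zero exit code
    )

if __name__ == "__main__":
    for env in ("dev", "qa", "prod"):
        deploy(env)
```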

19 pages, 650 KB  
Article
LEMAD: LLM-Empowered Multi-Agent System for Anomaly Detection in Power Grid Services
by Xin Ji, Le Zhang, Wenya Zhang, Fang Peng, Yifan Mao, Xingchuang Liao and Kui Zhang
Electronics 2025, 14(15), 3008; https://doi.org/10.3390/electronics14153008 - 28 Jul 2025
Cited by 5 | Viewed by 4840
Abstract
With the accelerated digital transformation of the power industry, critical infrastructures such as power grids are increasingly migrating to cloud-native architectures, leading to unprecedented growth in service scale and complexity. Traditional operation and maintenance (O&M) methods struggle to meet the demands for real-time monitoring, accuracy, and scalability in such environments. This paper proposes a novel service performance anomaly detection system based on large language models (LLMs) and multi-agent systems (MAS). By integrating the semantic understanding capabilities of LLMs with the distributed collaboration advantages of MAS, we construct a high-precision and robust anomaly detection framework. The system adopts a hierarchical architecture in which lower-layer agents handle tasks such as log parsing and metric monitoring, while an upper-layer coordinating agent performs multimodal feature fusion and global anomaly decision-making. The LLM additionally strengthens semantic analysis and causal reasoning over logs. Experiments conducted on real-world data from the State Grid Corporation of China (SGCC), covering 1289 service combinations, demonstrate that the system significantly outperforms traditional methods in terms of the F1-score across four platforms, including customer services and grid resources (up to a 10.3% improvement). It achieves a maximum F1-score of 88.78%, with a precision of 92.16% and a recall of 85.63%, outperforming five baseline methods. Notably, the system excels in composite anomaly detection and root cause analysis. This study provides an industrial-grade, scalable, and interpretable solution for intelligent power grid O&M, offering a valuable reference for the practical implementation of AIOps in critical infrastructures. Full article
(This article belongs to the Special Issue Advanced Techniques for Multi-Agent Systems)
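
The hierarchical decision the abstract describes can be sketched as weighted score fusion: lower-layer agents emit per-modality anomaly scores and a coordinating agent combines them into a global verdict. The agent names, weights, and threshold are assumptions, not the paper's configuration:

```python
from typing import Dict

def coordinate(agent_scores: Dict[str, float],
               weights: Dict[str, float],
               threshold: float = 0.5) -> bool:
    """Weighted fusion of per-agent anomaly scores; True flags a service anomaly."""
    total_w = sum(weights[name] for name in agent_scores)
    fused = sum(weights[name] * score for name, score in agent_scores.items())
    return fused / total_w >= threshold

scores = {"log_agent": 0.82, "metric_agent": 0.35, "trace_agent": 0.61}
weights = {"log_agent": 0.5, "metric_agent": 0.3, "trace_agent": 0.2}
print(coordinate(scores, weights))   # -> True for this example
```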

18 pages, 3600 KB  
Article
Long-Term Snow Cover Change in the Qilian Mountains (1986–2024): A High-Resolution Landsat-Based Analysis
by Enwei Huang, Guofeng Zhu, Yuhao Wang, Rui Li, Yuxin Miao, Xiaoyu Qi, Qingyang Wang, Yinying Jiao, Qinqin Wang and Ling Zhao
Remote Sens. 2025, 17(14), 2497; https://doi.org/10.3390/rs17142497 - 18 Jul 2025
Cited by 2 | Viewed by 1406
Abstract
Snow cover, as a critical component of the cryosphere, serves as a vital water resource for arid regions in Northwest China. The Qilian Mountains (QLM), situated on the northeastern margin of the Tibetan Plateau, function as an important ecological barrier and water conservation area in western China. This study presents the first high-resolution historical snow cover product developed specifically for the QLM, utilizing a multi-level snow classification algorithm tailored to the complex topography of the region. By employing Landsat satellite data from 1986–2024, we constructed a comprehensive 39-year snow cover dataset at a resolution of 30 m. A dual adaptive cloud masking strategy and spatial interpolation techniques were employed to effectively address cloud contamination and data gaps prevalent in mountainous regions. The spatiotemporal characteristics and driving mechanisms of snow cover changes in the QLM were systematically analyzed using Sen–Theil trend analysis and Mann–Kendall tests. The results reveal the following: (1) The mean annual snow cover extent in the QLM was 15.73% during 1986–2024, exhibiting a slight declining trend (−0.046% yr⁻¹), though statistically insignificant (p = 0.215); (2) The snowline showed significant upward migration, with mean elevation and minimum elevation rising at rates of 3.98 m yr⁻¹ and 2.81 m yr⁻¹, respectively; (3) Elevation-dependent variations were observed, with significant snow cover decline in high-altitude (>5000 m) and low-altitude (2000–3500 m) regions, while mid-altitude areas remained relatively stable; (4) Comparison with MODIS data demonstrated good correlation (r = 0.828) but revealed systematic differences (RMSE = 12.88%), with MODIS showing underestimation in mountainous environments (Bias: −8.06%). This study elucidates the complex response mechanisms of the QLM snow system under global warming, providing scientific evidence for regional water resource management and climate change adaptation strategies. Full article
(This article belongs to the Special Issue Application of Remote Sensing in Snow and Ice Monitoring)
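
The two trend tests named above are standard and easy to reproduce on a toy series: SciPy supplies the Sen–Theil (Theil–Sen) slope, and the Mann–Kendall S statistic with its normal approximation can be computed directly. The synthetic data below merely mimic the reported magnitudes:

```python
import numpy as np
from scipy.stats import theilslopes, norm

rng = np.random.default_rng(42)
years = np.arange(1986, 2025)
sca = 15.7 - 0.046 * (years - 1986) + rng.normal(0, 1.5, years.size)

slope, intercept, lo, hi = theilslopes(sca, years)   # Sen-Theil trend (% per yr)

# Mann-Kendall test: S statistic, normal approximation for the p-value (no ties)
s = sum(np.sign(sca[j] - sca[i])
        for i in range(len(sca)) for j in range(i + 1, len(sca)))
n = len(sca)
var_s = n * (n - 1) * (2 * n + 5) / 18.0
z = (s - np.sign(s)) / np.sqrt(var_s)
p = 2.0 * (1.0 - norm.cdf(abs(z)))

print(f"slope = {slope:.4f} %/yr, MK p-value = {p:.3f}")
```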

30 pages, 34212 KB  
Article
Spatiotemporal Mapping and Driving Mechanism of Crop Planting Patterns on the Jianghan Plain Based on Multisource Remote Sensing Fusion and Sample Migration
by Pengnan Xiao, Yong Zhou, Jianping Qian, Yujie Liu and Xigui Li
Remote Sens. 2025, 17(14), 2417; https://doi.org/10.3390/rs17142417 - 12 Jul 2025
Cited by 1 | Viewed by 1134
Abstract
The accurate mapping of crop planting patterns is vital for sustainable agriculture and food security, particularly in regions with complex cropping systems and limited cloud-free observations. This research focuses on the Jianghan Plain in southern China, where diverse planting structures and persistent cloud cover make consistent monitoring challenging. We integrated multi-temporal Sentinel-2 and Landsat-8 imagery from 2017 to 2021 on the Google Earth Engine platform and applied a sample migration strategy to construct multi-year training data. A random forest classifier was used to identify nine major planting patterns at a 10 m resolution. The classification achieved an average overall accuracy of 88.3%, with annual Kappa coefficients ranging from 0.81 to 0.88. A spatial analysis revealed that single rice was the dominant pattern, covering more than 60% of the area. Temporal variations in cropping patterns were categorized into four frequency levels (0, 1, 2, and 3 changes), with more dynamic transitions concentrated in the central-western and northern subregions. A multiscale geographically weighted regression (MGWR) model revealed that economic and production-related factors had strong positive associations with crop planting patterns, while natural factors showed relatively weaker explanatory power. This research presents a scalable method for mapping fine-resolution crop patterns in complex agroecosystems, providing quantitative support for regional land-use optimization and the development of agricultural policies. Full article
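
The classification step can be approximated with a compact random-forest sketch reporting the same two metrics the study uses (overall accuracy and the Kappa coefficient); synthetic features stand in for the Sentinel-2/Landsat-8 temporal composites:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, cohen_kappa_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)
X = rng.normal(size=(3000, 24))          # 24 temporal-spectral features, assumed
y = rng.integers(0, 9, size=3000)        # nine planting patterns, as in the paper

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
pred = clf.predict(X_te)

print("OA:", accuracy_score(y_te, pred), "Kappa:", cohen_kappa_score(y_te, pred))
```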

23 pages, 6982 KB  
Article
An Efficient and Low-Delay SFC Recovery Method in the Space–Air–Ground Integrated Aviation Information Network with Integrated UAVs
by Yong Yang, Buhong Wang, Jiwei Tian, Xiaofan Lyu and Siqi Li
Drones 2025, 9(6), 440; https://doi.org/10.3390/drones9060440 - 16 Jun 2025
Cited by 1 | Viewed by 1052
Abstract
Unmanned aerial vehicles (UAVs), owing to their flexible coverage expansion and dynamic adjustment capabilities, hold significant application potential across various fields. With the emergence of urban low-altitude air traffic dominated by UAVs, the integrated aviation information network combining UAVs and manned aircraft has evolved into a complex space–air–ground integrated Internet of Things (IoT) system. The application of 5G/6G network technologies, such as cloud computing, network function virtualization (NFV), and edge computing, has enhanced the flexibility of air traffic services based on service function chains (SFCs), while simultaneously expanding the network attack surface. Compared to traditional networks, the aviation information network integrating UAVs exhibits greater heterogeneity and demands higher service reliability. To address the failure issues of SFCs under attack, this study proposes an efficient SFC recovery method for recovery rate optimization (ERRRO) based on virtual network functions (VNFs) migration technology. The method first determines the recovery order of failed SFCs according to their recovery costs, prioritizing the restoration of SFCs with the lowest costs. Next, the migration priorities of the failed VNFs are ranked based on their neighborhood certainty, with the VNFs exhibiting the highest neighborhood certainty being migrated first. Finally, the destination nodes for migrating the failed VNFs are determined by comprehensively considering attributes such as the instantiated SFC paths, delay of physical platforms, and residual resources. Experiments demonstrate that the ERRRO performs well under networks with varying resource redundancy and different types of attacks. Compared to methods reported in the literature, the ERRRO achieves superior performance in terms of the SFC recovery rate and delay. Full article
(This article belongs to the Special Issue Space–Air–Ground Integrated Networks for 6G)
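
The two ordering rules stated above (cheapest SFC first; highest neighborhood certainty first within an SFC) reduce to two nested sorts. The data classes and scores below are illustrative assumptions, not the paper's definitions:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class FailedVNF:
    name: str
    neighborhood_certainty: float   # e.g., fraction of neighbors still instantiated

@dataclass
class FailedSFC:
    name: str
    recovery_cost: float
    vnfs: List[FailedVNF] = field(default_factory=list)

def recovery_plan(sfcs: List[FailedSFC]) -> List[str]:
    plan = []
    for sfc in sorted(sfcs, key=lambda s: s.recovery_cost):        # cheapest first
        for vnf in sorted(sfc.vnfs,
                          key=lambda v: v.neighborhood_certainty,
                          reverse=True):                           # most certain first
            plan.append(f"{sfc.name}/{vnf.name}")
    return plan

sfcs = [FailedSFC("sfc-b", 7.0, [FailedVNF("fw", 0.4), FailedVNF("nat", 0.9)]),
        FailedSFC("sfc-a", 3.0, [FailedVNF("dpi", 0.6)])]
print(recovery_plan(sfcs))   # ['sfc-a/dpi', 'sfc-b/nat', 'sfc-b/fw']
```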

30 pages, 1687 KB  
Article
Network-, Cost-, and Renewable-Aware Ant Colony Optimization for Energy-Efficient Virtual Machine Placement in Cloud Datacenters
by Ali Mohammad Baydoun and Ahmed Sherif Zekri
Future Internet 2025, 17(6), 261; https://doi.org/10.3390/fi17060261 - 14 Jun 2025
Cited by 4 | Viewed by 1336
Abstract
Virtual machine (VM) placement in cloud datacenters is a complex multi-objective challenge involving trade-offs among energy efficiency, carbon emissions, and network performance. This paper proposes NCRA-DP-ACO (Network-, Cost-, and Renewable-Aware Ant Colony Optimization with Dynamic Power Usage Effectiveness (PUE)), a bio-inspired metaheuristic that optimizes VM placement across geographically distributed datacenters. The approach integrates real-time solar energy availability, dynamic PUE modeling, and multi-criteria decision-making to enable environmentally and cost-efficient resource allocation. The experimental results show that NCRA-DP-ACO reduces power consumption by 13.7%, carbon emissions by 6.9%, and live VM migrations by 48.2% compared to state-of-the-art methods while maintaining Service Level Agreement (SLA) compliance. These results indicate the algorithm’s potential to support more environmentally and cost-efficient cloud management across dynamic infrastructure scenarios. Full article
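
The ant decision rule in such a placement scheme typically weights pheromone against a heuristic; the sketch below assumes a heuristic that favors hosts with low power draw, low dynamic PUE, and high current solar availability. All numbers and the heuristic's exact form are assumptions, not the paper's model:

```python
import numpy as np

rng = np.random.default_rng(3)
pheromone = np.ones(4)                          # one entry per candidate host
power = np.array([180.0, 220.0, 160.0, 200.0])  # W per placed VM, assumed
pue = np.array([1.6, 1.3, 1.5, 1.2])            # dynamic PUE per datacenter
solar = np.array([0.1, 0.7, 0.3, 0.9])          # solar fraction available now

alpha, beta, rho_evap = 1.0, 2.0, 0.1
heuristic = 1.0 / (power * pue * (1.0 - 0.5 * solar))   # cheaper-greener is better

for ant in range(50):
    weights = pheromone ** alpha * heuristic ** beta
    host = rng.choice(4, p=weights / weights.sum())     # probabilistic host pick
    quality = heuristic[host]                           # stand-in for tour quality
    pheromone *= (1.0 - rho_evap)                       # evaporation
    pheromone[host] += quality                          # reinforcement

print("pheromone:", np.round(pheromone, 4), "preferred host:", int(pheromone.argmax()))
```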

26 pages, 2942 KB  
Article
An Improved Whale Migration Algorithm for Global Optimization of Collaborative Symmetric Balanced Learning and Cloud Task Scheduling
by Honggan Lu, Shenghao Cheng and Xinsheng Zhang
Symmetry 2025, 17(6), 841; https://doi.org/10.3390/sym17060841 - 27 May 2025
Cited by 2 | Viewed by 1059
Abstract
In today’s complex and ever-changing fields of science and engineering, intelligent optimization algorithms have become a key tool, yet the complexity of the problems themselves often poses a severe challenge to algorithm performance. The whale migration algorithm stands out among optimization algorithms for its simple, efficient implementation and has received extensive attention. However, when confronted with complex problems such as global optimization and task scheduling, it still exhibits several deficiencies: low initial population symmetry (i.e., poor distribution uniformity) and an insufficient balance between exploration and exploitation during iteration; a relatively weak exploitation ability, which makes effective search and optimization in complex problem spaces difficult; and an insufficiently optimized task scheduling strategy, which limits its use in practical scheduling scenarios. To overcome these challenges, this paper proposes an improved whale migration algorithm that, while inheriting the original algorithm’s advantages, addresses the above problems by introducing new mechanisms. The effectiveness of each proposed strategy was verified through point-by-point ablation experiments on the CEC2021 test function set, and the algorithm’s effectiveness and robustness on global optimization problems were comprehensively verified on the CEC2022 test problem set. Furthermore, the proposed algorithm was tested on cloud task scheduling problems of different scales. The experimental results show that it can reduce the total scheduling cost by about 9% or more. Full article
(This article belongs to the Special Issue Symmetry in Optimization Algorithms and Applications)
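
One generic way to raise initial-population uniformity, sketched below, combines Latin-hypercube stratification with opposition-based candidates and keeps the better half; this is a common remedy for the deficiency described, not necessarily the paper's mechanism:

```python
import numpy as np

def lhs_opposition_init(n, dim, lb, ub, fitness, rng):
    # Latin hypercube: exactly one sample per stratum in every dimension
    u = (rng.permuted(np.tile(np.arange(n), (dim, 1)), axis=1).T
         + rng.random((n, dim))) / n
    pop = lb + u * (ub - lb)
    opp = lb + ub - pop                           # opposition-based counterparts
    both = np.vstack([pop, opp])
    fit = np.apply_along_axis(fitness, 1, both)
    return both[np.argsort(fit)[:n]]              # keep the n best of 2n

rng = np.random.default_rng(5)
sphere = lambda x: float(np.sum(x ** 2))          # toy benchmark objective
init = lhs_opposition_init(20, 5, -5.0, 5.0, sphere, rng)
print(init.shape, sphere(init[0]))
```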

18 pages, 777 KB  
Article
Optimized Kuhn–Munkres with Dynamic Strategy Selection for Virtual Network Function Hot Backup Migration
by Yibo Wang and Junbin Liang
Electronics 2025, 14(7), 1328; https://doi.org/10.3390/electronics14071328 - 27 Mar 2025
Viewed by 738
Abstract
In Follow-Me Mobile Edge Cloud (FMEC) environments, Virtual Network Function (VNF) instances dynamically move in tandem with user mobility. For latency-sensitive applications, hot backups aim to reduce service downtime during primary VNF instance failures. However, as user mobility shifts the distance between VNF instances and their hot backups, recovery latency can come to exceed user expectations, leading to certain backups being deemed unavailable. To maintain VNF reliability, it becomes essential either to deploy additional hot backups closer to the VNF instances or to migrate the unavailable backups into proximity, restoring their usability. Effectively leveraging both the VNF and its failed hot backups to ensure VNF reliability, meet users’ recovery latency demands, and minimize the overall cost of hot backup migration and redeployment is a challenging problem. We first formulate the problem as an integer linear programming model and prove it Non-deterministic Polynomial-time hard (NP-hard). To address the computational complexity, we then propose a hybrid approach combining an optimized Kuhn–Munkres algorithm with dynamic strategy selection: the Kuhn–Munkres algorithm accelerates backup migration through network preprocessing and multi-constraint candidate filtering, while real-time cost analysis adaptively chooses between migration and redeployment. Extensive experiments show that the hybrid algorithm satisfies user demands as well as traditional methods do while reducing backup VNF (BVNF) migration costs by 15%, ensuring reliable latency-sensitive service in mobile edge environments. Full article
(This article belongs to the Topic Cloud and Edge Computing for Smart Devices)
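
The Kuhn–Munkres core is available off the shelf: SciPy's Hungarian solver assigns each failed backup VNF to a candidate node at minimum total migration cost, with a large sentinel cost masking candidates rejected by the multi-constraint filter. The cost matrix below is illustrative:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

BIG = 1e9                                   # sentinel for filtered-out candidates
cost = np.array([[4.0, 2.0, 7.0],           # rows: failed backup VNFs
                 [3.0, BIG, 1.0],           # BIG = candidate fails a constraint
                 [6.0, 5.0, 2.5]])          # cols: candidate destination nodes

rows, cols = linear_sum_assignment(cost)    # Kuhn-Munkres / Hungarian algorithm
for r, c in zip(rows, cols):
    print(f"backup {r} -> node {c} (cost {cost[r, c]})")
print("total migration cost:", cost[rows, cols].sum())
```

The dynamic strategy selection could then compare, per backup, this migration cost against the cost of redeploying a fresh backup and take the cheaper option.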

28 pages, 6007 KB  
Article
Improving the CRCC-DHR Reliability: An Entropy-Based Mimic-Defense-Resource Scheduling Algorithm
by Xinghua Wu, Mingzhe Wang, Yun Cai, Xiaolin Chang and Yong Liu
Entropy 2025, 27(2), 208; https://doi.org/10.3390/e27020208 - 16 Feb 2025
Cited by 2 | Viewed by 1077
Abstract
With more China railway business information systems migrating to the China Railway Cloud Center (CRCC), the attack surface is expanding and the CRCC faces increasing security threats. Cyber Mimic Defense (CMD) technology, as an active defense strategy, can counter these threats by constructing a Dynamic Heterogeneous Redundancy (DHR) architecture. However, DHR deployment faces at least two challenges, namely, the limited number of available schedulable heterogeneous resources and memorization-based attacks. This paper aims to address these two challenges to improve CRCC-DHR reliability and thereby facilitate DHR deployment. By reliability, we mean that the CRCC-DHR, with a limited number of available heterogeneous resources, can effectively resist memorization-based attacks. We first propose three metrics for assessing the reliability of the CRCC-DHR architecture. We then propose an incomplete-information game model to capture the relationship between attackers and defenders. Finally, based on the proposed metrics and the captured relationship, we propose a redundant-heterogeneous-resource scheduling algorithm, called the Entropy Weight Scheduling Algorithm (REWS). We evaluate REWS against three existing algorithms through simulations. The results show that REWS achieves better reliability than the other algorithms while demonstrating lower time complexity. Full article
(This article belongs to the Special Issue Entropy Method for Decision Making with Uncertainty)
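
The entropy-weight calculation the algorithm's name points to is short: attributes whose values are more dispersed across candidate resources receive larger weights. The 3×4 decision matrix below (resources × attributes) is synthetic:

```python
import numpy as np

X = np.array([[0.8, 0.3, 0.5, 0.9],     # rows: candidate heterogeneous resources
              [0.4, 0.9, 0.6, 0.2],     # cols: assessment attributes
              [0.6, 0.5, 0.9, 0.7]])

P = X / X.sum(axis=0)                            # column-normalized proportions
n = X.shape[0]
with np.errstate(divide="ignore", invalid="ignore"):
    E = -np.nansum(P * np.log(P), axis=0) / np.log(n)   # per-attribute entropy
w = (1.0 - E) / (1.0 - E).sum()                  # entropy weights

scores = X @ w                                   # weighted score per resource
print("weights:", np.round(w, 3), "schedule first:", int(scores.argmax()))
```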

21 pages, 3624 KB  
Article
From Virtual to Reality: A Deep Reinforcement Learning Solution to Implement Autonomous Driving with 3D-LiDAR
by Yuhan Chen, Chan Tong Lam, Giovanni Pau and Wei Ke
Appl. Sci. 2025, 15(3), 1423; https://doi.org/10.3390/app15031423 - 30 Jan 2025
Cited by 1 | Viewed by 3043
Abstract
Autonomous driving technology faces significant challenges in processing complex environmental data and making real-time decisions. Traditional supervised learning approaches heavily rely on extensive data labeling, which incurs substantial costs. This study presents a complete implementation framework combining Deep Deterministic Policy Gradient (DDPG) reinforcement learning with 3D-LiDAR perception techniques for practical application in autonomous driving. DDPG meets the continuous action space requirements of driving, and the point cloud processing module uses a traditional algorithm combined with attention mechanisms to provide high awareness of the environment. The solution is first validated in a simulation environment and then successfully migrated to a real environment based on a 1/10-scale F1tenth experimental vehicle. The experimental results show that the method proposed in this study is able to complete the autonomous driving task in the real environment, providing a feasible technical path for the engineering application of advanced sensor technology combined with complex learning algorithms in the field of autonomous driving. Full article
(This article belongs to the Special Issue Applications of Artificial Intelligence in Transportation Engineering)
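
Two DDPG ingredients the pipeline relies on fit in a few lines: a deterministic policy emitting continuous controls (e.g., steering and throttle) and the Polyak soft update of the target network. The network sizes, feature dimension, and tau are illustrative, not the paper's training code:

```python
import torch
import torch.nn as nn

actor = nn.Sequential(nn.Linear(64, 128), nn.ReLU(),
                      nn.Linear(128, 2), nn.Tanh())   # [steer, throttle] in [-1, 1]
target_actor = nn.Sequential(nn.Linear(64, 128), nn.ReLU(),
                             nn.Linear(128, 2), nn.Tanh())
target_actor.load_state_dict(actor.state_dict())

def soft_update(target: nn.Module, source: nn.Module, tau: float = 0.005):
    """Polyak averaging: target <- tau * source + (1 - tau) * target."""
    with torch.no_grad():
        for t, s in zip(target.parameters(), source.parameters()):
            t.mul_(1.0 - tau).add_(tau * s)

lidar_features = torch.randn(1, 64)   # stand-in for the processed point cloud
action = actor(lidar_features)        # continuous control output
soft_update(target_actor, actor)
print(action)
```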

20 pages, 2042 KB  
Article
Computing Unit and Data Migration Strategy under Limited Resources: Taking Train Operation Control System as an Example
by Jianjun Yuan, Laiping Sun, Pengzi Chu and Yi Yu
Electronics 2024, 13(21), 4328; https://doi.org/10.3390/electronics13214328 - 4 Nov 2024
Viewed by 1378
Abstract
There are conflicts between increasingly complex operational requirements and the slow rate of system platform upgrading, especially in the railway transit-signaling industry. We address this problem by establishing a model for migrating computing units and data under resource-constrained conditions. By decomposing and reallocating application functions and optimizing the use of CPU, memory, and network bandwidth, a hierarchical structure of computing units is proposed; the architecture divides the system into layers and components to facilitate resource management. A migration strategy is then proposed that mainly moves components and data from less critical paths to critical paths, ultimately optimizing the utilization of computing resources (see the sketch below). The test results suggest that the method can reduce overall CPU utilization by 27%, memory usage by 6.8%, and network bandwidth occupation by 35%. The practical value of this study lies in providing a theoretical model and implementation method for optimizing resource allocation where resources fall short of computing requirements in fixed-resource service architectures. The strategy is compatible with distributed computing architectures as well as cloud and cloud–edge computing architectures. Full article
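
The migration rule invites a greedy sketch: move components off less-critical paths onto a critical-path computing unit whenever its remaining CPU, memory, and bandwidth budget allows. The component figures and capacities are made up for illustration:

```python
from typing import List, Tuple

Component = Tuple[str, float, float, float]   # (name, cpu, mem, bw) demands

def migrate(components: List[Component], budget: Tuple[float, float, float]):
    cpu, mem, bw = budget
    moved = []
    # smallest footprint first, so the critical unit absorbs as many as possible
    for name, c, m, b in sorted(components, key=lambda x: x[1] + x[2] + x[3]):
        if c <= cpu and m <= mem and b <= bw:
            cpu, mem, bw = cpu - c, mem - m, bw - b
            moved.append(name)
    return moved, (cpu, mem, bw)

noncritical = [("logger", 5, 2, 1), ("diag", 12, 8, 4), ("hmi-cache", 9, 6, 10)]
print(migrate(noncritical, budget=(20.0, 12.0, 8.0)))
```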

21 pages, 724 KB  
Article
Network Traffic Prediction in an Edge–Cloud Continuum Network for Multiple Network Service Providers
by Ying Hu, Ben Liu, Jianyong Li, Liang Zhu, Jihui Han, Zengyu Cai and Jie Zhang
Electronics 2024, 13(17), 3515; https://doi.org/10.3390/electronics13173515 - 4 Sep 2024
Viewed by 1883
Abstract
Network function virtualization (NFV) allows the dynamic configuration of virtualized network functions, adapting services to complex, real-time network environments to improve network performance. The dynamic nature of physical networks creates significant challenges for virtual network function (VNF) migration and energy consumption, especially in edge–cloud continuum networks. These challenges can be addressed by predicting network traffic and proactively migrating VNFs based on the predicted values. However, historical network traffic data are held by network service providers, who are reluctant to share them due to privacy concerns; as a result, the network resource providers that own the underlying networks cannot effectively predict network traffic. To address this, we apply a federated learning (FL) framework that enables network resource providers to predict network traffic effectively without access to historical traffic data. Further, so that the predictions lead to better migration outcomes, such as fewer migrations, lower energy consumption, and a higher request acceptance rate, we apply the predicted traffic values to the network environment and feed the environment's migration results on these factors back to the neural network model. To obtain these migration results, we analyzed and developed mathematical models for edge–cloud continuum networks with multiple network service providers. The effectiveness of our algorithm is evaluated through extensive simulations; the results show a significant reduction in the number of migrated nodes and in energy consumption, as well as an increase in the acceptance rate of service function chains (SFCs), compared with the commonly used scheme whose loss function uses only the difference between predicted and actual traffic. Full article
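
A bare-bones FedAvg round matches the setup: each service provider takes a local training step on its private traffic history, and the resource provider aggregates only model weights, never raw data. The linear model and synthetic data below are placeholders for the paper's neural predictor:

```python
import numpy as np

def local_step(w, X, y, lr=0.01):
    """One gradient step of linear regression on a provider's private data."""
    grad = 2.0 * X.T @ (X @ w - y) / len(y)
    return w - lr * grad

rng = np.random.default_rng(9)
providers = [(rng.normal(size=(50, 4)), rng.normal(size=50)) for _ in range(3)]
w_global = np.zeros(4)

for round_ in range(20):
    local_ws = [local_step(w_global, X, y) for X, y in providers]
    sizes = np.array([len(y) for _, y in providers], dtype=float)
    w_global = np.average(local_ws, axis=0, weights=sizes)   # FedAvg aggregation

print("global weights:", np.round(w_global, 3))
```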
