Search Results (166)

Search Parameters:
Keywords = collaborative topology

23 pages, 8450 KiB  
Article
Spatio-Temporal Collaborative Perception-Enabled Fault Feature Graph Construction and Topology Mining for Variable Operating Conditions Diagnosis
by Jiaxin Zhao, Xing Wu, Chang Liu and Feifei He
Sensors 2025, 25(15), 4664; https://doi.org/10.3390/s25154664 - 28 Jul 2025
Viewed by 163
Abstract
Industrial equipment fault diagnosis faces dual challenges: significant data distribution discrepancies caused by diverse operating conditions impair generalization capabilities, while underutilized spatio-temporal information from multi-source data hinders feature extraction. To address this, we propose a spatio-temporal collaborative perception-driven feature graph construction and topology mining methodology for variable-condition diagnosis. First, leveraging the operational condition invariance and cross-condition consistency of fault features, we construct fault feature graphs using single-source data and similarity clustering, validating topological similarity and representational consistency under varying conditions. Second, we reveal spatio-temporal correlations within multi-source feature topologies. By embedding multi-source spatio-temporal information into fault feature graphs via spatio-temporal collaborative perception, we establish high-dimensional spatio-temporal feature topology graphs based on spectral similarity, extending generalized feature representations into the spatio-temporal domain. Finally, we develop a graph residual convolutional network to mine topological information from multi-source spatio-temporal features under complex operating conditions. Experiments on variable/multi-condition datasets demonstrate the following: feature graphs seamlessly integrate multi-source information with operational variations; the methodology precisely captures spatio-temporal delays induced by vibrational direction/path discrepancies; and the proposed model maintains both high diagnostic accuracy and strong generalization capacity under complex operating conditions, delivering a highly reliable framework for rotating machinery fault diagnosis. Full article
(This article belongs to the Section Fault Diagnosis & Sensors)
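
As a rough illustration of the spectral-similarity graph construction described in this abstract, the sketch below builds a k-nearest-neighbour fault feature graph from single-channel spectra; the cosine similarity measure, the value of k, and the toy data are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def build_feature_graph(features: np.ndarray, k: int = 5) -> np.ndarray:
    """Connect each sample (row) to its k most spectrally similar samples.

    `features` is an (n_samples, n_bins) array of, e.g., FFT magnitudes from
    one vibration channel; the paper's multi-source spatio-temporal embedding
    is not reproduced here.
    """
    # Cosine similarity between all pairs of feature vectors.
    norm = features / (np.linalg.norm(features, axis=1, keepdims=True) + 1e-12)
    sim = norm @ norm.T
    np.fill_diagonal(sim, -np.inf)          # no self-loops
    adj = np.zeros_like(sim)
    for i in range(sim.shape[0]):
        nbrs = np.argsort(sim[i])[-k:]      # k most similar nodes
        adj[i, nbrs] = sim[i, nbrs]
    return np.maximum(adj, adj.T)           # symmetrise the graph

# Toy usage: 8 random non-negative "spectra" of 64 bins each.
graph = build_feature_graph(np.abs(np.random.randn(8, 64)), k=3)
print(graph.shape)
```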

22 pages, 4670 KiB  
Article
Integrated Carbon Flow Tracing and Topology Reconfiguration for Low-Carbon Optimal Dispatch in DG-Embedded Distribution Networks
by Rao Fu, Guofeng Xia, Sining Hu, Yuhao Zhang, Handaoyuan Li and Jiachuan Shi
Mathematics 2025, 13(15), 2395; https://doi.org/10.3390/math13152395 - 25 Jul 2025
Viewed by 213
Abstract
Addressing the imperative for energy transition amid depleting fossil fuels, distributed generation (DG) is increasingly integrated into distribution networks (DNs). This integration necessitates low-carbon dispatching solutions that reconcile economic and environmental objectives. To bridge the gap between conventional “electricity perspective” optimization and emerging “carbon perspective” requirements, this research integrated Carbon Emission Flow (CEF) theory to analyze spatiotemporal carbon flow characteristics within DN. Recognizing the limitations of the single-objective approach in balancing multifaceted demands, a multi-objective optimization model was formulated. This model could capture the spatiotemporal dynamics of nodal carbon intensity for low-carbon dispatching while comprehensively incorporating diverse operational economic costs to achieve collaborative low-carbon and economic dispatch in DG-embedded DN. To efficiently solve this complex constrained model, a novel Q-learning enhanced Moth Flame Optimization (QMFO) algorithm was proposed. QMFO synergized the global search capability of the Moth Flame Optimization (MFO) algorithm with the adaptive decision-making of Q-learning, embedding an adaptive exploration strategy to significantly enhance solution efficiency and accuracy for multi-objective problems. Validated on a 16-node three-feeder system, the method co-optimizes switch configurations and DG outputs, achieving dual objectives of loss reduction and carbon emission mitigation while preserving radial topology feasibility. Full article
(This article belongs to the Special Issue Mathematical and Computational Methods for Mechanics and Engineering)
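
The carbon emission flow (CEF) analysis referenced above rests on proportional sharing of carbon intensity at each bus. A minimal sketch of that nodal calculation, with illustrative numbers rather than the paper's 16-node system:

```python
import numpy as np

def nodal_carbon_intensity(p_in, e_in) -> float:
    """Carbon intensity (kgCO2/kWh) of a bus as the flow-weighted average of
    the intensities of all injections feeding it, the basic proportional-
    sharing rule of carbon emission flow (CEF) analysis.

    p_in : active power of each incoming branch / local generator (kW)
    e_in : carbon intensity carried by each of those injections (kgCO2/kWh)
    """
    p_in = np.asarray(p_in, dtype=float)
    e_in = np.asarray(e_in, dtype=float)
    return float(p_in @ e_in / p_in.sum())

# A bus fed 300 kW from the grid (0.8 kgCO2/kWh) and 200 kW from local PV (0.0):
print(nodal_carbon_intensity([300, 200], [0.8, 0.0]))  # ≈ 0.48
```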

28 pages, 4562 KiB  
Article
A Capacity-Constrained Weighted Clustering Algorithm for UAV Self-Organizing Networks Under Interference
by Siqi Li, Peng Gong, Weidong Wang, Jinyue Liu, Zhixuan Feng and Xiang Gao
Drones 2025, 9(8), 527; https://doi.org/10.3390/drones9080527 - 25 Jul 2025
Viewed by 163
Abstract
Compared to traditional ad hoc networks, self-organizing networks of unmanned aerial vehicle (UAV) are characterized by high node mobility, vulnerability to interference, wide distribution range, and large network scale, which make network management and routing protocol operation more challenging. Cluster structures can be used to optimize network management and mitigate the impact of local topology changes on the entire network during collaborative task execution. To address the issue of cluster structure instability caused by the high mobility and vulnerability to interference in UAV networks, we propose a capacity-constrained weighted clustering algorithm for UAV self-organizing networks under interference. Specifically, a capacity-constrained partitioning algorithm based on K-means++ is developed to establish the initial node partitions. Then, a weighted cluster head (CH) and backup cluster head (BCH) selection algorithm is proposed, incorporating interference factors into the selection process. Additionally, a dynamic maintenance mechanism for the clustering network is introduced to enhance the stability and robustness of the network. Simulation results show that the algorithm achieves efficient node clustering under interference conditions, improving cluster load balancing, average cluster head maintenance time, and cluster head failure reconstruction time. Furthermore, the method demonstrates fast recovery capabilities in the event of node failures, making it more suitable for deployment in complex emergency rescue environments. Full article
(This article belongs to the Special Issue Unmanned Aerial Vehicles for Enhanced Emergency Response)
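
A toy version of the weighted cluster-head/backup selection idea from this abstract, scoring UAVs by residual energy, connectivity, mobility, and sensed interference; the weights and normalisation are assumptions, not the paper's calibrated values.

```python
import numpy as np

def ch_scores(energy, degree, mobility, interference,
              w=(0.4, 0.3, 0.2, 0.1)):
    """Composite fitness for cluster-head election: higher residual energy and
    connectivity help, higher relative mobility and interference hurt.
    Weights are illustrative placeholders."""
    def norm(x):
        x = np.asarray(x, dtype=float)
        rng = x.max() - x.min()
        return (x - x.min()) / rng if rng > 0 else np.zeros_like(x)
    return (w[0] * norm(energy) + w[1] * norm(degree)
            - w[2] * norm(mobility) - w[3] * norm(interference))

# Elect a CH and a backup CH inside one cluster of five UAVs.
scores = ch_scores(energy=[0.9, 0.7, 0.8, 0.5, 0.6],
                   degree=[4, 5, 3, 2, 4],
                   mobility=[2.0, 8.0, 3.0, 6.0, 1.0],
                   interference=[0.1, 0.6, 0.2, 0.4, 0.3])
ch, bch = np.argsort(scores)[::-1][:2]
print(f"CH = node {ch}, BCH = node {bch}")
```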

23 pages, 1885 KiB  
Article
Applying Machine Learning to DEEC Protocol: Improved Cluster Formation in Wireless Sensor Networks
by Abdulla Juwaied and Lidia Jackowska-Strumillo
Network 2025, 5(3), 26; https://doi.org/10.3390/network5030026 - 24 Jul 2025
Viewed by 151
Abstract
Wireless Sensor Networks (WSNs) are specialised ad hoc networks composed of small, low-power, and often battery-operated sensor nodes with various sensors and wireless communication capabilities. These nodes collaborate to monitor and collect data from the physical environment, transmitting it to a central location or sink node for further processing and analysis. This study proposes two machine learning-based enhancements to the DEEC protocol for Wireless Sensor Networks (WSNs) by integrating the K-Nearest Neighbours (K-NN) and K-Means (K-M) machine learning (ML) algorithms. The Distributed Energy-Efficient Clustering with K-NN (DEEC-KNN) and with K-Means (DEEC-KM) approaches dynamically optimize cluster head selection to improve energy efficiency and network lifetime. These methods are validated through extensive simulations, demonstrating up to 110% improvement in packet delivery and significant gains in network stability compared with the original DEEC protocol. The adaptive clustering enabled by K-NN and K-Means is particularly effective for large-scale and dynamic WSN deployments where node failures and topology changes are frequent. These findings suggest that integrating ML with clustering protocols is a promising direction for future WSN design. Full article
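
To illustrate the K-Means side of the clustering enhancement described above, a short sketch that groups sensor nodes by position and favours the highest-energy node in each cluster as cluster head; the field size, cluster count, and the energy-based CH rule are illustrative assumptions, not the DEEC-KM specification.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
positions = rng.uniform(0, 100, size=(100, 2))   # 100 nodes in a 100 m field
energy = rng.uniform(0.5, 1.0, size=100)         # residual energy per node (J)

# Group nodes by position with K-Means.
labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(positions)

# Within each cluster, favour the node with the most residual energy as CH.
for c in range(5):
    members = np.flatnonzero(labels == c)
    ch = members[np.argmax(energy[members])]
    print(f"cluster {c}: {members.size} nodes, CH = node {ch}")
```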

15 pages, 2538 KiB  
Article
Parallel Eclipse-Aware Routing on FPGA for SpaceWire-Based OBC in LEO Satellite Networks
by Jin Hyung Park, Heoncheol Lee and Myonghun Han
J. Sens. Actuator Netw. 2025, 14(4), 73; https://doi.org/10.3390/jsan14040073 - 15 Jul 2025
Viewed by 317
Abstract
Low Earth orbit (LEO) satellite networks deliver superior real-time performance and responsiveness compared to conventional satellite networks, despite technical and economic challenges such as high deployment costs and operational complexity. Nevertheless, rapid topology changes and severe energy constraints of LEO satellites make real-time routing a persistent challenge. In this paper, we employ field-programmable gate arrays (FPGAs) to overcome the resource limitations of on-board computers (OBCs) and to manage energy consumption effectively using the Eclipse-Aware Routing (EAR) algorithm, and we implement the K-Shortest Paths (KSP) algorithm directly on the FPGA. Our method first generates multiple routes from the source to the destination using KSP, then selects the optimal path based on energy consumption rate, eclipse duration, and estimated transmission load as evaluated by EAR. In large-scale LEO networks, the computational burden of KSP grows substantially as connectivity data become more voluminous and complex. To enhance performance, we accelerate complex computations in the programmable logic (PL) via pipelining and design a collaborative architecture between the processing system (PS) and PL, achieving approximately a 3.83× speedup compared to a PS-only implementation. We validate the feasibility of the proposed approach by successfully performing remote routing-table updates on the SpaceWire-based SpaceWire Brick MK4 network system. Full article
(This article belongs to the Section Communications and Networking)
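
A simplified sketch of the two-step route selection described above: enumerate K candidate paths (Yen-style, via networkx) and score them with an eclipse-aware cost; the node attributes and weighting below are hypothetical stand-ins for the paper's EAR metric, and nothing here reflects the FPGA pipelining.

```python
from itertools import islice
import networkx as nx

def k_shortest_paths(G, src, dst, k=3, weight="delay"):
    """First k loop-free paths in order of increasing total `weight`."""
    return list(islice(nx.shortest_simple_paths(G, src, dst, weight=weight), k))

def ear_cost(G, path, w=(0.5, 0.3, 0.2)):
    """Illustrative eclipse-aware score: penalise per-node energy drain rate,
    time spent in eclipse, and queued traffic along the path (node attributes
    'energy_rate', 'eclipse', 'load' are assumed here)."""
    return sum(w[0] * G.nodes[n]["energy_rate"] +
               w[1] * G.nodes[n]["eclipse"] +
               w[2] * G.nodes[n]["load"] for n in path)

G = nx.Graph()
G.add_weighted_edges_from([("A", "B", 1), ("B", "D", 1), ("A", "C", 2),
                           ("C", "D", 1), ("B", "C", 1)], weight="delay")
for n in G:
    G.nodes[n].update(energy_rate=0.2, eclipse=0.0 if n in "AB" else 1.0, load=0.1)

candidates = k_shortest_paths(G, "A", "D", k=3)
best = min(candidates, key=lambda p: ear_cost(G, p))
print(candidates, "->", best)
```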

28 pages, 4054 KiB  
Article
A Core Ontology for Whole Life Costing in Construction Projects
by Adam Yousfi, Érik Andrew Poirier and Daniel Forgues
Buildings 2025, 15(14), 2381; https://doi.org/10.3390/buildings15142381 - 8 Jul 2025
Viewed by 369
Abstract
Construction projects still face persistent barriers to adopting whole life costing (WLC), such as fragmented data, a lack of standardization, and inadequate tools. This study addresses these limitations by proposing a core ontology for WLC, developed using an ontology design science research methodology. The ontology formalizes WLC knowledge based on ISO 15686-5 and incorporates professional insights from surveys and expert focus groups. Implemented in web ontology language (OWL), it models cost categories, temporal aspects, and discounting logic in a machine-interpretable format. The ontology’s interoperability and extensibility are validated through its integration with the building topology ontology (BOT). Results show that the ontology effectively supports cost breakdown, time-based projections, and calculation of discounted values, offering a reusable structure for different project contexts. Practical validation was conducted using SQWRL queries and Python scripts for cost computation. The solution enables structured data integration and can support decision-making throughout the building life cycle. This work lays the foundation for future semantic web applications such as knowledge graphs, bridging the current technological gap and facilitating more informed and collaborative use of WLC in construction. Full article
(This article belongs to the Special Issue Emerging Technologies and Workflows for BIM and Digital Construction)
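
The discounting logic the ontology encodes reduces to summing present values of yearly cash flows; a minimal Python sketch with an assumed 3% discount rate and illustrative cost figures (not the paper's validation data):

```python
def discounted_wlc(costs_by_year, rate=0.03):
    """Whole-life cost as the sum of present values of yearly cash flows,
    PV = C_t / (1 + r)^t, the discounting logic formalised by the ontology.
    `costs_by_year` maps year offsets to costs (construction at t = 0,
    operation, maintenance, end-of-life, ...). The 3% rate is illustrative."""
    return sum(c / (1 + rate) ** t for t, c in costs_by_year.items())

cash_flows = {0: 1_000_000}              # construction
for t in range(1, 31):
    cash_flows[t] = 25_000               # yearly operation & maintenance
cash_flows[30] += 80_000                 # end-of-life / disposal in year 30

print(f"WLC ≈ {discounted_wlc(cash_flows):,.0f}")
```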

15 pages, 1529 KiB  
Article
Peak Age of Information Optimization in Cell-Free Massive Random Access Networks
by Zhiru Zhao, Yuankang Huang and Wen Zhan
Electronics 2025, 14(13), 2714; https://doi.org/10.3390/electronics14132714 - 4 Jul 2025
Viewed by 287
Abstract
With the vigorous development of Internet of Things technologies, Cell-Free Radio Access Network (CF-RAN), leveraging its distributed coverage and single/multi-antenna Access Point (AP) coordination advantages, has become a key technology for supporting massive Machine-Type Communication (mMTC). However, under the grant-free random access mechanism, this network architecture faces the problem of information freshness degradation due to channel congestion. To address this issue, a joint decoding model based on logical grouping architecture is introduced to analyze the correlation between the successful packet transmission probability and the Peak Age of Information (PAoI) in both single-AP and multi-AP scenarios. On this basis, a global Particle Swarm Optimization (PSO) algorithm is designed to dynamically adjust the channel access probability to minimize the average PAoI across the network. To reduce signaling overhead, a PSO algorithm based on local topology information is further proposed to achieve collaborative optimization among neighboring APs. Simulation results demonstrate that the global PSO algorithm can achieve performance closely approximating the optimum, while the local PSO algorithm maintains similar performance without the need for global information. It is especially suitable for large-scale access scenarios with wide area coverage, providing an efficient solution for optimizing information freshness in CF-RAN. Full article
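
A compact sketch of the access-probability tuning idea: a plain particle swarm minimising a crude PAoI surrogate for slotted random access. The surrogate objective (expected slots between successful deliveries) and the PSO hyperparameters are assumptions, not the paper's joint-decoding model.

```python
import numpy as np

N = 50  # contending devices per AP

def surrogate_paoi(p):
    """Crude PAoI stand-in: with access probability p, a device succeeds in a
    slot w.p. q = p(1-p)^(N-1); 1/q approximates the slots between fresh
    deliveries. The paper's multi-AP joint-decoding model differs."""
    q = p * (1 - p) ** (N - 1)
    return 1.0 / q if q > 0 else np.inf

def pso_minimise(f, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(0.001, 0.999, n_particles)   # candidate access probabilities
    v = np.zeros(n_particles)
    pbest, pbest_f = x.copy(), np.array([f(xi) for xi in x])
    gbest = pbest[np.argmin(pbest_f)]
    for _ in range(iters):
        r1, r2 = rng.random(n_particles), rng.random(n_particles)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, 0.001, 0.999)
        fx = np.array([f(xi) for xi in x])
        better = fx < pbest_f
        pbest[better], pbest_f[better] = x[better], fx[better]
        gbest = pbest[np.argmin(pbest_f)]
    return gbest, f(gbest)

p_star, paoi = pso_minimise(surrogate_paoi)
print(f"p* ≈ {p_star:.3f} (analytic optimum 1/N = {1/N:.3f}), PAoI ≈ {paoi:.1f}")
```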

15 pages, 2136 KiB  
Article
POSA-GO: Fusion of Hierarchical Gene Ontology and Protein Language Models for Protein Function Prediction
by Yubao Liu, Benrui Wang, Bocheng Yan, Haiyue Jiang and Yinfei Dai
Int. J. Mol. Sci. 2025, 26(13), 6362; https://doi.org/10.3390/ijms26136362 - 1 Jul 2025
Viewed by 307
Abstract
Protein function prediction plays a crucial role in uncovering the molecular mechanisms underlying life processes in the post-genomic era. However, with the widespread adoption of high-throughput sequencing technologies, the pace of protein function annotation significantly lags behind that of sequence discovery, highlighting the urgent need for more efficient and reliable predictive methods. To address the problem of existing methods ignoring the hierarchical structure of gene ontology terms and making it challenging to dynamically associate protein features with functional contexts, we propose a novel protein function prediction framework, termed Partial Order-Based Self-Attention for Gene Ontology (POSA-GO). This cross-modal collaborative modelling approach fuses GO terms with protein sequences. The model leverages the pre-trained language model ESM-2 to extract deep semantic features from protein sequences. Meanwhile, it transforms the partial order relationships among Gene Ontology (GO) terms into topological embeddings to capture their biological hierarchical dependencies. Furthermore, a multi-head self-attention mechanism is employed to dynamically model the association weights between proteins and GO terms, thereby enabling context-aware functional annotation. Comparative experiments on the CAFA3 and SwissProt datasets demonstrate that POSA-GO outperforms existing state-of-the-art methods in terms of Fmax and AUPR metrics, offering a promising solution for protein functional studies. Full article
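
A toy illustration of the attention step described above, with GO-term embeddings attending over protein sequence features; the dimensions, the cross-attention formulation, and the random inputs are assumptions rather than the POSA-GO architecture or its partial-order embedding.

```python
import torch
import torch.nn as nn

class ProteinGOAttention(nn.Module):
    """Toy cross-modal step: GO-term embeddings (queries) attend over
    per-residue protein features (keys/values), then a linear head scores
    each term. Sizes are illustrative, not the paper's construction."""
    def __init__(self, dim=128, n_terms=500, n_heads=4):
        super().__init__()
        self.term_emb = nn.Embedding(n_terms, dim)
        self.attn = nn.MultiheadAttention(dim, n_heads, batch_first=True)
        self.score = nn.Linear(dim, 1)

    def forward(self, protein_feats, term_ids):
        # protein_feats: (B, L, dim) residue features, e.g. from ESM-2.
        q = self.term_emb(term_ids)                   # (B, T, dim)
        ctx, _ = self.attn(q, protein_feats, protein_feats)
        return self.score(ctx).squeeze(-1)            # (B, T) per-term logits

model = ProteinGOAttention()
term_ids = torch.arange(500).repeat(2, 1)             # two proteins, 500 GO terms
logits = model(torch.randn(2, 200, 128), term_ids)
print(logits.shape)  # torch.Size([2, 500])
```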

19 pages, 3888 KiB  
Article
Swin-GAT Fusion Dual-Stream Hybrid Network for High-Resolution Remote Sensing Road Extraction
by Hongkai Zhang, Hongxuan Yuan, Minghao Shao, Junxin Wang and Suhong Liu
Remote Sens. 2025, 17(13), 2238; https://doi.org/10.3390/rs17132238 - 29 Jun 2025
Viewed by 457
Abstract
This paper introduces a novel dual-stream collaborative architecture for remote sensing road segmentation, designed to overcome multi-scale feature conflicts, limited dynamic adaptability, and compromised topological integrity. Our network employs a parallel “local–global” encoding scheme: the local stream uses depth-wise separable convolutions to capture fine-grained details, while the global stream integrates a Swin-Transformer with a graph-attention module (Swin-GAT) to model long-range contextual and topological relationships. By decoupling detailed feature extraction from global context modeling, the proposed framework more faithfully represents complex road structures. Comprehensive experiments on multiple aerial datasets demonstrate that our approach outperforms conventional baselines—especially under shadow occlusion and for thin-road delineation—while achieving real-time inference at 31 FPS. Ablation studies further confirm the critical roles of the Swin Transformer and GAT components in preserving topological continuity. Overall, this dual-stream dynamic-fusion network sets a new benchmark for remote sensing road extraction and holds promise for real-world, real-time applications. Full article
(This article belongs to the Section AI Remote Sensing)
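
To make the dual-stream idea concrete, a minimal sketch of a local depth-wise separable convolution branch fused with a coarse global-context branch; the global branch here is a plain convolutional stand-in for the Swin-GAT stream, and all channel sizes are illustrative.

```python
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """Local-stream building block: a depth-wise conv followed by a 1x1
    point-wise conv, the cheap fine-detail extractor the local branch uses."""
    def __init__(self, c_in, c_out, k=3):
        super().__init__()
        self.depthwise = nn.Conv2d(c_in, c_in, k, padding=k // 2, groups=c_in)
        self.pointwise = nn.Conv2d(c_in, c_out, 1)

    def forward(self, x):
        return torch.relu(self.pointwise(self.depthwise(x)))

# Two-stream fusion skeleton: local detail features and a (stand-in) global
# context branch are concatenated before the segmentation head.
local = DepthwiseSeparableConv(3, 32)
global_branch = nn.Sequential(nn.Conv2d(3, 32, 7, stride=4, padding=3), nn.ReLU(),
                              nn.Upsample(scale_factor=4, mode="bilinear"))
head = nn.Conv2d(64, 1, 1)   # road / not-road logits

x = torch.randn(1, 3, 256, 256)
fused = torch.cat([local(x), global_branch(x)], dim=1)
print(head(fused).shape)     # torch.Size([1, 1, 256, 256])
```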

20 pages, 2579 KiB  
Article
ERA-MADDPG: An Elastic Routing Algorithm Based on Multi-Agent Deep Deterministic Policy Gradient in SDN
by Wanwei Huang, Hongchang Liu, Yingying Li and Linlin Ma
Future Internet 2025, 17(7), 291; https://doi.org/10.3390/fi17070291 - 29 Jun 2025
Viewed by 325
Abstract
To address the fact that changes in network topology can have an impact on the performance of routing, this paper proposes an Elastic Routing Algorithm based on Multi-Agent Deep Deterministic Policy Gradient (ERA-MADDPG), which is implemented within the framework of Multi-Agent Deep Deterministic Policy Gradient (MADDPG) in deep reinforcement learning. The algorithm first builds a three-layer architecture based on Software-Defined Networking (SDN). The top-down layers are the multi-agent layer, the controller layer, and the data layer. The architecture’s processing flow, including real-time data layer information collection and dynamic policy generation, enables the ERA-MADDPG algorithm to exhibit strong elasticity by quickly adjusting routing decisions in response to topology changes. The actor-critic framework combined with Convolutional Neural Networks (CNN) to implement the ERA-MADDPG routing algorithm effectively improves training efficiency, enhances learning stability, facilitates collaboration, and improves algorithm generalization and applicability. Finally, simulation experiments demonstrate that the convergence speed of the ERA-MADDPG routing algorithm outperforms that of the Multi-Agent Deep Q-Network (MADQN) algorithm and the Smart Routing based on Deep Reinforcement Learning (SR-DRL) algorithm, and the training speed in the initial phase is improved by approximately 20.9% and 39.1% compared to the MADQN algorithm and SR-DRL algorithm, respectively. The elasticity performance of ERA-MADDPG is quantified by re-convergence speed: under 5–15% topology node/link changes, its re-convergence speed is over 25% faster than that of MADQN and SR-DRL, demonstrating superior capability to maintain routing efficiency in dynamic environments. Full article
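
A skeletal illustration of the MADDPG pattern underlying ERA-MADDPG: decentralised per-agent actors with a centralised critic over the joint observation-action. Network sizes and the routing-specific state/action encoding are assumptions; the paper's CNN feature extractor and SDN layers are not reproduced.

```python
import torch
import torch.nn as nn

class Actor(nn.Module):
    """Per-agent policy: local observation -> continuous routing preference."""
    def __init__(self, obs_dim, act_dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(),
                                 nn.Linear(64, act_dim), nn.Tanh())
    def forward(self, obs):
        return self.net(obs)

class CentralCritic(nn.Module):
    """MADDPG-style centralised critic: scores the joint observation-action of
    all agents, giving decentralised policies a global view during training."""
    def __init__(self, n_agents, obs_dim, act_dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(n_agents * (obs_dim + act_dim), 128),
                                 nn.ReLU(), nn.Linear(128, 1))
    def forward(self, all_obs, all_act):
        return self.net(torch.cat([all_obs, all_act], dim=-1))

n_agents, obs_dim, act_dim = 3, 16, 4
actors = [Actor(obs_dim, act_dim) for _ in range(n_agents)]
critic = CentralCritic(n_agents, obs_dim, act_dim)
obs = torch.randn(n_agents, obs_dim)
acts = torch.stack([a(o) for a, o in zip(actors, obs)])
q = critic(obs.flatten().unsqueeze(0), acts.flatten().unsqueeze(0))
print(q.shape)  # torch.Size([1, 1])
```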

20 pages, 690 KiB  
Article
Using Graph-Enhanced Deep Reinforcement Learning for Distribution Network Fault Recovery
by Yueran Liu, Peng Liao and Yang Wang
Machines 2025, 13(7), 543; https://doi.org/10.3390/machines13070543 - 23 Jun 2025
Viewed by 408
Abstract
Fault recovery in distribution networks is a complex, high-dimensional decision-making task characterized by partial observability, dynamic topology, and strong interdependencies among components. To address these challenges, this paper proposes a graph-based multi-agent deep reinforcement learning (DRL) framework for intelligent fault restoration in power distribution networks. The restoration problem is modeled as a partially observable Markov decision process (POMDP), where each agent employs graph neural networks to extract topological features and enhance environmental perception. To address the high-dimensionality of the action space, an action decomposition strategy is introduced, treating each switch operation as an independent binary classification task, which improves convergence and decision efficiency. Furthermore, a collaborative reward mechanism is designed to promote coordination among agents and optimize global restoration performance. Experiments on the PG&E 69-bus system demonstrate that the proposed method significantly outperforms existing DRL baselines. Specifically, it achieves up to 2.6% higher load recovery, up to 0.0 p.u. lower recovery cost, and full restoration in the midday scenario, with statistically significant improvements (p<0.05 or p<0.01). These results highlight the effectiveness of graph-based learning and cooperative rewards in improving the resilience, efficiency, and adaptability of distribution network operations under varying conditions. Full article
(This article belongs to the Section Machines Testing and Maintenance)
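
A minimal sketch of the action-decomposition idea from this abstract: a shared graph encoder feeding one independent open/close (binary) head per switch, instead of one head over the exponential joint action space. The single mean-aggregation layer stands in for the paper's graph neural network, and all sizes are illustrative.

```python
import torch
import torch.nn as nn

class SwitchPolicy(nn.Module):
    """Shared graph encoder + per-switch binary heads (action decomposition)."""
    def __init__(self, feat_dim, n_switches, hidden=64):
        super().__init__()
        self.encode = nn.Linear(feat_dim, hidden)
        self.heads = nn.ModuleList([nn.Linear(hidden, 2) for _ in range(n_switches)])

    def forward(self, node_feats, adj):
        # One round of neighbourhood averaging, then a per-switch readout.
        deg = adj.sum(-1, keepdim=True).clamp(min=1)
        h = torch.relu(self.encode(adj @ node_feats / deg))
        g = h.mean(dim=0)                      # graph-level summary
        return torch.stack([head(g) for head in self.heads])  # (n_switches, 2)

n_nodes, n_switches = 69, 5                    # e.g. the PG&E 69-bus feeder
policy = SwitchPolicy(feat_dim=4, n_switches=n_switches)
adj = (torch.rand(n_nodes, n_nodes) > 0.9).float()
logits = policy(torch.randn(n_nodes, 4), adj)
actions = logits.argmax(dim=-1)                # open/close decision per switch
print(actions)
```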

34 pages, 9431 KiB  
Article
Gait Recognition via Enhanced Visual–Audio Ensemble Learning with Decision Support Methods
by Ruixiang Kan, Mei Wang, Tian Luo and Hongbing Qiu
Sensors 2025, 25(12), 3794; https://doi.org/10.3390/s25123794 - 18 Jun 2025
Viewed by 430
Abstract
Gait is considered a valuable biometric feature, and it is essential for uncovering the latent information embedded within gait patterns. Gait recognition methods are expected to serve as significant components in numerous applications. However, existing gait recognition methods exhibit limitations in complex scenarios. To address these, we construct a dual-Kinect V2 system that focuses more on gait skeleton joint data and related acoustic signals. This setup lays a solid foundation for subsequent methods and updating strategies. The core framework consists of enhanced ensemble learning methods and Dempster–Shafer Evidence Theory (D-SET). Our recognition methods serve as the foundation, and the decision support mechanism is used to evaluate the compatibility of various modules within our system. On this basis, our main contributions are as follows: (1) an improved gait skeleton joint AdaBoost recognition method based on Circle Chaotic Mapping and Gramian Angular Field (GAF) representations; (2) a data-adaptive gait-related acoustic signal AdaBoost recognition method based on GAF and a Parallel Convolutional Neural Network (PCNN); and (3) an amalgamation of the Triangulation Topology Aggregation Optimizer (TTAO) and D-SET, providing a robust and innovative decision support mechanism. These collaborations improve the overall recognition accuracy and demonstrate their considerable application values. Full article
(This article belongs to the Section Intelligent Sensors)
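
The Gramian Angular Field (GAF) representation mentioned above maps a 1-D signal to an image that convolutional or AdaBoost classifiers can consume. A sketch of the standard summation-field formula (the signal here is synthetic, not gait or acoustic data):

```python
import numpy as np

def gramian_angular_field(x: np.ndarray) -> np.ndarray:
    """Gramian Angular (Summation) Field of a 1-D signal: rescale to [-1, 1],
    map samples to angles phi = arccos(x), and form cos(phi_i + phi_j)."""
    x = np.asarray(x, dtype=float)
    x = 2 * (x - x.min()) / (x.max() - x.min()) - 1          # rescale to [-1, 1]
    phi = np.arccos(np.clip(x, -1, 1))
    return np.cos(phi[:, None] + phi[None, :])                # (T, T) image

signal = np.sin(np.linspace(0, 4 * np.pi, 128))               # toy periodic cycle
gaf = gramian_angular_field(signal)
print(gaf.shape, float(gaf.min()), float(gaf.max()))
```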

23 pages, 3558 KiB  
Article
Research on High-Reliability Energy-Aware Scheduling Strategy for Heterogeneous Distributed Systems
by Ziyu Chen, Jing Wu, Lin Cheng and Tao Tao
Big Data Cogn. Comput. 2025, 9(6), 160; https://doi.org/10.3390/bdcc9060160 - 17 Jun 2025
Viewed by 504
Abstract
With the demand for workflow processing driven by edge computing in the Internet of Things (IoT) and cloud computing growing at an exponential rate, task scheduling in heterogeneous distributed systems has become a key challenge to meet real-time constraints in resource-constrained environments. Existing studies now attempt to achieve the best balance in terms of time constraints, energy efficiency, and system reliability in Dynamic Voltage and Frequency Scaling environments. This study proposes a two-stage collaborative optimization strategy. With the help of an innovative algorithm design and theoretical analysis, the multi-objective optimization challenges mentioned above are systematically solved. First, based on a reliability-constrained model, we propose a topology-aware dynamic priority scheduling algorithm (EAWRS). This algorithm constructs a node priority function by incorporating in-degree/out-degree weighting factors and critical path analysis to enable multi-objective optimization. Second, to address the time-varying reliability characteristics introduced by DVFS, we propose a Fibonacci search-based dynamic frequency scaling algorithm (SEFFA). This algorithm effectively reduces energy consumption while ensuring task reliability, achieving sub-optimal processor energy adjustment. The collaborative mechanism of EAWRS and SEFFA has well solved the dynamic scheduling challenge based on DAG in heterogeneous multi-core processor systems in the Internet of Things environment. Experimental evaluations conducted at various scales show that, compared with the three most advanced scheduling algorithms, the proposed strategy reduces energy consumption by an average of 14.56% (up to 58.44% under high-reliability constraints) and shortens the makespan by 2.58–56.44% while strictly meeting reliability requirements. Full article
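
A rough illustration of a topology-aware task priority in the spirit of EAWRS, combining the classic critical-path (upward-rank) recursion with an in/out-degree weighting. The degree factor, alpha, and the toy DAG are assumptions, not the paper's exact priority function.

```python
import networkx as nx

def task_priorities(dag: nx.DiGraph, alpha=0.3):
    """Upward rank (critical-path length to exit) scaled by a degree factor so
    highly connected tasks are scheduled earlier. Node attribute 'cost' is the
    mean execution time, edge attribute 'comm' is the transfer time."""
    rank = {}
    for v in reversed(list(nx.topological_sort(dag))):
        succ = [dag.edges[v, s]["comm"] + rank[s] for s in dag.successors(v)]
        rank[v] = dag.nodes[v]["cost"] + (max(succ) if succ else 0.0)
    return {v: rank[v] * (1 + alpha * (dag.in_degree(v) + dag.out_degree(v)))
            for v in dag}

dag = nx.DiGraph()
dag.add_nodes_from([(t, {"cost": c}) for t, c in [("A", 2), ("B", 3), ("C", 1), ("D", 2)]])
dag.add_edges_from([("A", "B", {"comm": 1}), ("A", "C", {"comm": 2}),
                    ("B", "D", {"comm": 1}), ("C", "D", {"comm": 1})])
print(sorted(task_priorities(dag).items(), key=lambda kv: -kv[1]))
```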

28 pages, 1509 KiB  
Article
Adaptive Congestion Detection and Traffic Control in Software-Defined Networks via Data-Driven Multi-Agent Reinforcement Learning
by Kaoutar Boussaoud, Abdeslam En-Nouaary and Meryeme Ayache
Computers 2025, 14(6), 236; https://doi.org/10.3390/computers14060236 - 16 Jun 2025
Viewed by 519
Abstract
Efficient congestion management in Software-Defined Networks (SDNs) remains a significant challenge due to dynamic traffic patterns and complex topologies. Conventional congestion control techniques based on static or heuristic rules often fail to adapt effectively to real-time network variations. This paper proposes a data-driven framework based on Multi-Agent Reinforcement Learning (MARL) to enable intelligent, adaptive congestion control in SDNs. The framework integrates two collaborative agents: a Congestion Classification Agent that identifies congestion levels using metrics such as delay and packet loss, and a Decision-Making Agent based on Deep Q-Learning (DQN or its variants), which selects the optimal actions for routing and bandwidth management. The agents are trained offline using both synthetic and real network traces (e.g., the MAWI dataset), and deployed in a simulated SDN testbed using Mininet and the Ryu controller. Extensive experiments demonstrate the superiority of the proposed system across key performance metrics. Compared to baseline controllers, including standalone DQN and static heuristics, the MARL system achieves up to 3.0% higher throughput, maintains end-to-end delay below 10 ms, and reduces packet loss by over 10% in real traffic scenarios. Furthermore, the architecture exhibits stable cumulative reward progression and balanced action selection, reflecting effective learning and policy convergence. These results validate the benefit of agent specialization and modular learning in scalable and intelligent SDN traffic engineering. Full article
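
To sketch the two-agent split described above, a congestion-classification rule feeding a tabular Q-learning decision step (a simplified stand-in for the DQN agent); the thresholds, action set, and reward are illustrative assumptions, not the paper's trained policies.

```python
import numpy as np

rng = np.random.default_rng(0)
ACTIONS = ["keep_path", "reroute_least_loaded", "rate_limit_flow"]

def congestion_level(delay_ms: float, loss_pct: float) -> int:
    """Agent 1 (classification): map link metrics to a discrete congestion
    level 0/1/2. Thresholds are illustrative placeholders."""
    if delay_ms > 50 or loss_pct > 5:
        return 2
    if delay_ms > 20 or loss_pct > 1:
        return 1
    return 0

# Agent 2 (decision): a tiny tabular Q stand-in for the DQN in the abstract.
Q = np.zeros((3, len(ACTIONS)))

def choose_action(level: int, epsilon: float = 0.1) -> int:
    return rng.integers(len(ACTIONS)) if rng.random() < epsilon else int(Q[level].argmax())

def update(level, action, reward, next_level, lr=0.1, gamma=0.9):
    Q[level, action] += lr * (reward + gamma * Q[next_level].max() - Q[level, action])

# One interaction step with fake telemetry from the controller.
lvl = congestion_level(delay_ms=35, loss_pct=0.5)
act = choose_action(lvl)
update(lvl, act, reward=-1.0 if lvl else 0.0, next_level=0)
print(ACTIONS[act], Q[lvl])
```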

21 pages, 3373 KiB  
Article
Research on Intelligent Hierarchical Energy Management for Connected Automated Range-Extended Electric Vehicles Based on Speed Prediction
by Xixu Lai, Hanwu Liu, Yulong Lei, Wencai Sun, Song Wang, Jinmiao Xiang and Ziyu Wang
Energies 2025, 18(12), 3053; https://doi.org/10.3390/en18123053 - 9 Jun 2025
Viewed by 360
Abstract
To address energy management challenges for intelligent connected automated range-extended electric vehicles under vehicle-road cooperative environments, a hierarchical energy management strategy (EMS) based on speed prediction is proposed from the perspective of multi-objective optimization (MOO), with comprehensive system performance being significantly enhanced. Focusing on connected car-following scenarios, acceleration sequence prediction is performed based on Kalman filtering and preceding vehicle acceleration. A dual-layer optimization strategy is subsequently developed: in the upper layer, optimal speed curves are planned based on road network topology and preceding vehicle trajectories, while in the lower layer, coordinated multi-power source allocation is achieved through EMSMPC-P, a Bayesian-optimized model predictive EMS based on Pontryagin's minimum principle (PMP). A MOO model is ultimately formulated to enhance comprehensive system performance. Simulation and bench test results demonstrate that with SoC0 = 0.4, improvements of 7.69% and 5.13% in fuel economy are achieved by EMSMPC-P compared to the charge depleting-charge sustaining (CD-CS) method and the charge depleting-blend (CD-Blend) method. Travel time reductions of 62.2% and 58.7% are observed versus CD-CS and CD-Blend. Battery lifespan degradation is mitigated by 16.18% and 5.89% relative to CD-CS and CD-Blend, demonstrating the method's marked advantages in improving traffic efficiency, safety, battery life maintenance, and fuel economy. This study not only establishes a technical paradigm with theoretical depth and engineering applicability for EMS, but also quantitatively reveals intrinsic mechanisms underlying long-term prediction accuracy enhancement through data analysis, providing critical guidance for future vehicle–road–cloud collaborative system development. Full article
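
A minimal sketch of the Kalman-filter speed/acceleration prediction that feeds the upper planning layer, using a constant-acceleration model on noisy speed measurements; the noise covariances and the measurement sequence are illustrative, not the paper's calibration.

```python
import numpy as np

dt = 0.1                                  # s, update period
F = np.array([[1, dt], [0, 1]])           # state [speed, acceleration]
H = np.array([[1.0, 0.0]])                # only speed is measured (e.g. via V2V)
Q = np.diag([0.05, 0.2])                  # process noise (illustrative)
R = np.array([[0.5]])                     # measurement noise (illustrative)

x = np.array([[15.0], [0.0]])             # initial guess: 15 m/s, 0 m/s^2
P = np.eye(2)

def kf_step(x, P, z):
    """One Kalman predict/update cycle on the preceding vehicle's speed."""
    x, P = F @ x, F @ P @ F.T + Q                       # predict
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)        # Kalman gain
    x = x + K @ (np.array([[z]]) - H @ x)               # correct with measurement
    return x, (np.eye(2) - K @ H) @ P

for z in [15.2, 15.6, 16.1, 16.5, 17.0]:  # noisy measured speeds (m/s)
    x, P = kf_step(x, P, z)

horizon = [float((np.linalg.matrix_power(F, k) @ x)[0, 0]) for k in range(1, 11)]
print("predicted speed over the next second:", np.round(horizon, 2))
```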