Search Results (86)

Search Parameters:
Keywords = multi-community resource scheduling

20 pages, 3181 KB  
Article
Integrating Reinforcement Learning and LLM with Self-Optimization Network System
by Xing Xu, Jianbin Zhao, Yu Zhang and Rongpeng Li
Network 2025, 5(3), 39; https://doi.org/10.3390/network5030039 - 16 Sep 2025
Viewed by 871
Abstract
The rapid expansion of communication networks and increasingly complex service demands have presented significant challenges to the intelligent management of network resources. To address these challenges, we propose a network self-optimization framework integrating the predictive capabilities of the Large Language Model (LLM) with the decision-making capabilities of multi-agent Reinforcement Learning (RL). Specifically, historical network traffic data are converted into structured inputs to forecast future traffic patterns using a GPT-2-based prediction module. Concurrently, a Multi-Agent Deep Deterministic Policy Gradient (MADDPG) algorithm leverages real-time sensor data—including link delay and packet loss rates collected by embedded network sensors—to dynamically optimize bandwidth allocation. This sensor-driven mechanism ensures accurate monitoring and proactive resource scheduling. We evaluate our framework in a heterogeneous network simulated using Mininet under diverse traffic scenarios. Experimental results show that the proposed method significantly reduces network latency and packet loss and improves robustness and resource utilization, highlighting the effectiveness of integrating sensor-driven RL optimization with predictive insights from LLMs. Full article

19 pages, 3880 KB  
Article
Optimal Scheduling of a Multi-Energy Hub with Integrated Demand Response Programs
by Rana H. A. Zubo, Patrick S. Onen, Iqbal M. Mujtaba, Geev Mokryani and Raed Abd-Alhameed
Processes 2025, 13(9), 2879; https://doi.org/10.3390/pr13092879 - 9 Sep 2025
Viewed by 532
Abstract
This paper presents an optimal scheduling framework for a multi-energy hub (EH) that integrates electricity, natural gas, wind energy, energy storage systems, and demand response (DR) programs. The EH incorporates key system components including transformers, converters, boilers, combined heat and power (CHP) units, and both thermal and electrical energy storage. A novel aspect of this work is the joint coordination of multi-carrier energy flows with DR flexibility, enabling consumers to actively shift or reduce loads in response to pricing signals while leveraging storage and renewable resources. The optimisation problem is formulated as a mixed-integer linear programming (MILP) model and solved using the CPLEX solver in GAMS. To evaluate system performance, five case studies are investigated under varying natural gas price conditions and hub configurations, including scenarios with and without DR and CHP. Results demonstrate that DR participation significantly reduces total operating costs (up to 6%), enhances renewable utilisation, and decreases peak demand (by around 6%), leading to a flatter demand curve and improved system reliability. The findings highlight the potential of integrated EHs with DR as a cost-effective and flexible solution for future low-carbon energy systems. Furthermore, the study provides insights into practical deployment challenges, including storage efficiency, communication infrastructure, and real-time scheduling requirements, paving the way for hardware-in-the-loop and pilot-scale validations. Full article

28 pages, 2891 KB  
Article
Integrated Operations Scheduling and Resource Allocation at Heavy Haul Railway Port Stations: A Collaborative Dual-Agent Actor–Critic Reinforcement Learning Framework
by Yidi Wu, Shiwei He, Zeyu Long and Haozhou Tang
Systems 2025, 13(9), 762; https://doi.org/10.3390/systems13090762 - 1 Sep 2025
Viewed by 561
Abstract
To enhance the overall operational efficiency of heavy haul railway port stations, which serve as critical hubs in rail–water intermodal transportation systems, this study develops a novel scheduling optimization method that integrates operation plans and resource allocation. By analyzing the operational processes of heavy haul trains and shunting operation modes within a hybrid unloading system, we establish an integrated scheduling optimization model. To solve the model efficiently, a dual-agent advantage actor–critic with Pareto reward shaping (DAA2C-PRS) algorithm framework is proposed, which captures the matching relationship between operations and resources through joint actions taken by the train agent and the shunting agent to depict the scheduling decision process. Convolutional neural networks (CNNs) are employed to extract features from a multi-channel matrix containing real-time scheduling data. Considering the objective function and resource allocation with capacity, we design knowledge-based composite dispatching rules. Regarding the communication among agents, a shared experience replay buffer and Pareto reward shaping mechanism are implemented to enhance the level of strategic collaboration and learning efficiency. Based on this algorithm framework, we conduct experimental verification at H port station, and the results demonstrate that the proposed algorithm exhibits a superior solution quality and convergence performance compared with other methods for all tested instances. Full article
(This article belongs to the Special Issue Scheduling and Optimization in Production and Transportation Systems)

36 pages, 4298 KB  
Article
A Robust Collaborative Optimization of Multi-Microgrids and Shared Energy Storage in a Fraudulent Environment
by Haihong Bian and Kai Ji
Energies 2025, 18(17), 4635; https://doi.org/10.3390/en18174635 - 31 Aug 2025
Viewed by 563
Abstract
In the context of the coordinated operation of microgrids and community energy storage systems, achieving optimal resource allocation under complex and uncertain conditions has emerged as a prominent research focus. This study proposes a robust collaborative optimization model for microgrids and community energy storage systems under a game-theoretic environment where potential fraudulent behavior is considered. A multi-energy collaborative system model is first constructed, integrating multiple uncertainties in source-load pricing, and a max-min robust optimization strategy is employed to improve scheduling resilience. Secondly, a game-theoretic model is introduced to identify and suppress manipulative behaviors by dishonest microgrids in energy transactions, based on a Nash bargaining mechanism. Finally, a distributed collaborative solution framework is developed using the Alternating Direction Method of Multipliers and Column-and-Constraint Generation to enable efficient parallel computation. Simulation results indicate that the framework reduces the alliance’s total cost from CNY 66,319.37 to CNY 57,924.89, saving CNY 8394.48. Specifically, the operational costs of MG1, MG2, and MG3 were reduced by CNY 742.60, CNY 1069.92, and CNY 1451.40, respectively, while CES achieved an additional revenue of CNY 5130.56 through peak shaving and valley filling operations. Furthermore, this distributed algorithm converges within 6–15 iterations and demonstrates high computational efficiency and robustness across various uncertain scenarios. Full article
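The distributed solution described above rests on the Alternating Direction Method of Multipliers (ADMM). The sketch below is a minimal consensus-form ADMM in Python, with simple quadratic local costs standing in for the microgrids' actual operating-cost models; the targets `a`, penalty `rho`, and iteration count are illustrative assumptions, not values from the paper.

```python
# Consensus ADMM sketch: each microgrid i holds a local quadratic cost
# (x - a[i])^2 and all agree on a shared value z. Illustrative only; the
# paper couples this with column-and-constraint generation for robustness.

def admm_consensus(a, rho=1.0, iters=50):
    """Minimize sum_i (x_i - a[i])^2 subject to x_i = z for all i."""
    n = len(a)
    x = [0.0] * n          # local decisions
    u = [0.0] * n          # scaled dual variables
    z = 0.0                # consensus variable
    for _ in range(iters):
        # local x-update: argmin (x - a_i)^2 + (rho/2)(x - z + u_i)^2
        x = [(2 * a[i] + rho * (z - u[i])) / (2 + rho) for i in range(n)]
        z = sum(x[i] + u[i] for i in range(n)) / n   # consensus update
        u = [u[i] + x[i] - z for i in range(n)]      # dual ascent
    return z

# Three microgrids with differing local targets converge to their mean.
print(admm_consensus([2.0, 4.0, 9.0]))  # ≈ 5.0
```

Each x-update uses only local data, which is what allows the parallel, per-microgrid computation the abstract reports.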

28 pages, 3002 KB  
Article
Integrating Off-Site Modular Construction and BIM for Sustainable Multifamily Buildings: A Case Study in Rio de Janeiro
by Matheus Q. Vargas, Ana Briga-Sá, Dieter Boer, Mohammad K. Najjar and Assed N. Haddad
Sustainability 2025, 17(17), 7791; https://doi.org/10.3390/su17177791 - 29 Aug 2025
Viewed by 1121
Abstract
The construction industry faces persistent challenges, including low productivity, high waste generation, and resistance to technological innovation. Off-site modular construction, supported by Building Information Modeling (BIM), emerges as a promising strategy to address these issues and advance sustainability goals. This study aims to evaluate the practical impacts of industrialized off-site construction in the Brazilian context, focusing on cost, execution time, structural weight, and architectural–logistical constraints. The novelty lies in applying the methodology to a high-standard, mixed-use multifamily building, an atypical scenario for modular construction in Brazil, and employing a Multi-Criteria Decision Analysis (MCDA) to integrate results. A detailed case study is developed comparing conventional and off-site construction approaches using BIM-assisted analyses for weight reduction, cost estimates, and schedule optimization. The results show an 89% reduction in structural weight, a 6% decrease in overall costs, and a 40% reduction in project duration when adopting fully off-site solutions. The integration of results was performed through the Weighted Scoring Method (WSM), a form of MCDA chosen for its transparency and adaptability to case studies. While this study defined the weights and scores, the framework allows the future incorporation of stakeholder input. Challenges identified include the need for early design integration, transport limitations, and site-specific constraints. By quantifying benefits and limitations, this study contributes to expanding the understanding of the adaptability of off-site modular construction beyond low-cost housing, demonstrating its potential for diverse projects and advancing its implementation in emerging markets. Beyond technical and economic outcomes, the study also frames off-site modular construction within the three pillars of sustainability. Environmentally, it reduces structural weight, resource consumption, and on-site waste; economically, it improves cost efficiency and project delivery times; and socially, it offers potential benefits such as safer working conditions, reduced urban disruption, and faster provision of community-oriented buildings. These dimensions highlight its broader contribution to sustainable development in Brazil. Full article
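The Weighted Scoring Method the abstract mentions reduces to a weighted sum of normalized criterion scores. A minimal sketch, assuming illustrative criteria, weights, and 0–1 scores rather than the study's actual values:

```python
# Weighted Scoring Method (WSM) sketch for comparing construction approaches.
# Criteria names, weights, and scores are hypothetical, not the study's data.

def wsm_score(scores, weights):
    """Weighted sum of normalized criterion scores (higher is better)."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(weights[c] * scores[c] for c in weights)

weights = {"cost": 0.3, "time": 0.3, "weight": 0.2, "logistics": 0.2}

# Scores on a 0-1 scale (1 = best) -- hypothetical values.
conventional = {"cost": 0.5, "time": 0.4, "weight": 0.3, "logistics": 0.8}
off_site     = {"cost": 0.6, "time": 0.9, "weight": 0.9, "logistics": 0.5}

print(wsm_score(conventional, weights))  # ≈ 0.49
print(wsm_score(off_site, weights))      # ≈ 0.73
```

The transparency the study cites comes from this linearity: each criterion's contribution to the final ranking is visible directly, and stakeholder input can later replace the assumed weights.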

29 pages, 5025 KB  
Article
A Two-Stage T-Norm–Choquet–OWA Resource Aggregator for Multi-UAV Cooperation: Theoretical Proof and Validation
by Linchao Zhang, Jun Peng, Lei Hang and Zhongyang Cheng
Drones 2025, 9(9), 597; https://doi.org/10.3390/drones9090597 - 25 Aug 2025
Viewed by 586
Abstract
Multi-UAV cooperative missions demand millisecond-level coordination across three key resource dimensions—battery energy, wireless bandwidth, and onboard computing power—where traditional Min or linearly weighted schedulers struggle to balance safety with efficiency. We propose a prediction-enhanced two-stage T-norm–Choquet–OWA resource aggregator. First, an LSTM-EMA model forecasts resource trajectories 3 s ahead; next, a first-stage T-norm (min) pinpoints the bottleneck resource, and a second-stage Choquet–OWA, driven by an adaptive interaction measure ϕ, elastically compensates according to instantaneous power usage, achieving a “bottleneck-first, efficiency-recovery” coordination strategy. Theoretical analysis establishes monotonicity, tight bounds, bottleneck prioritization, and Lyapunov stability, with node-level complexity of only O(1). In joint simulations involving 360 UAVs, the method holds the average round-trip time (RTT) at 55 ms, cutting latency by 5%, 10%, 15%, and 20% relative to Min, DRL-PPO, single-layer OWA, and WSM, respectively. Jitter remains within 11 ms, the packet-loss rate stays below 0.03%, and residual battery increases by about 12% over the best heuristic baseline. These results confirm the low-latency, high-stability benefits of the prediction-based peak-shaving plus two-stage fuzzy aggregation approach for large-scale UAV swarms. Full article
(This article belongs to the Section Drone Communications)
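The "bottleneck-first, efficiency-recovery" idea can be illustrated with a simplified two-stage aggregator: a min T-norm isolates the scarcest resource, and an ordered weighted average (OWA) partially compensates. The blend factor `phi` and the OWA weights below are plain stand-ins for the paper's adaptive Choquet–OWA interaction measure; all values are illustrative.

```python
# Two-stage "bottleneck-first" aggregation sketch over normalized resource
# levels (battery, bandwidth, compute). Weights and phi are assumptions.

def owa(values, weights):
    """Ordered weighted average: weights apply to values sorted descending."""
    ordered = sorted(values, reverse=True)
    return sum(w * v for w, v in zip(weights, ordered))

def two_stage_score(resources, phi, owa_weights=(0.2, 0.3, 0.5)):
    """Blend the pure bottleneck (min) with an OWA mean by phi in [0, 1]."""
    bottleneck = min(resources.values())            # stage 1: T-norm (min)
    balanced = owa(resources.values(), owa_weights)  # stage 2: OWA compensation
    return (1 - phi) * bottleneck + phi * balanced

uav = {"battery": 0.2, "bandwidth": 0.8, "compute": 0.5}
print(two_stage_score(uav, phi=0.0))  # 0.2 -- pure bottleneck
print(two_stage_score(uav, phi=0.5))  # ≈ 0.305 -- partially compensated
```

Putting more OWA weight on the smaller sorted values keeps the aggregate cautious, so raising `phi` recovers efficiency without hiding a depleted battery behind healthy bandwidth.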

14 pages, 1769 KB  
Article
Queue Stability-Constrained Deep Reinforcement Learning Algorithms for Adaptive Transmission Control in Multi-Access Edge Computing Systems
by Longzhe Han, Tian Zeng, Jia Zhao, Xuecai Bao, Guangming Liu and Yan Liu
Algorithms 2025, 18(8), 498; https://doi.org/10.3390/a18080498 - 11 Aug 2025
Viewed by 693
Abstract
To meet the escalating demands of massive data transmission, the next generation of wireless networks will leverage the multi-access edge computing (MEC) architecture coupled with multi-access transmission technologies to enhance communication resource utilization. This paper presents queue stability-constrained reinforcement learning algorithms designed to optimize the transmission control mechanism in MEC systems to improve both throughput and reliability. We propose an analytical framework to model queue stability. To increase transmission performance while maintaining queue stability, a queueing delay model is designed to analyze the packet scheduling process using the M/M/c queueing model and to estimate the expected packet queueing delay. To handle the time-varying network environment, we introduce a queue stability constraint into the reinforcement learning reward function to jointly optimize latency and queue stability. The reinforcement learning algorithm is deployed at the MEC server to reduce the workload of central cloud servers. Simulation results validate that the proposed algorithm effectively controls queueing delay and average queue length while improving packet transmission success rates in dynamic MEC environments. Full article
(This article belongs to the Special Issue AI Algorithms for 6G Mobile Edge Computing and Network Security)
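The expected queueing delay in an M/M/c system follows the standard Erlang C formula, W_q = C(c, a) / (c·mu − lambda) with offered load a = lambda/mu. A small sketch with illustrative arrival and service rates (the paper's actual parameters are not given in the abstract):

```python
from math import factorial

# Expected queueing delay in an M/M/c system via the Erlang C formula.
# lam/mu values below are illustrative, not the paper's settings.

def erlang_c(c, a):
    """Probability an arriving packet must wait, for offered load a = lam/mu."""
    num = a**c / factorial(c) * (c / (c - a))
    den = sum(a**k / factorial(k) for k in range(c)) + num
    return num / den

def expected_wait(lam, mu, c):
    """Mean time in queue W_q = C(c, a) / (c*mu - lam); requires lam < c*mu."""
    a = lam / mu
    assert a < c, "system must be stable (lam < c * mu)"
    return erlang_c(c, a) / (c * mu - lam)

# e.g. 8 packets/ms arriving at 3 servers, each serving 4 packets/ms
print(expected_wait(lam=8.0, mu=4.0, c=3))  # ≈ 0.111 (same time unit as 1/mu)
```

The stability assertion mirrors the queue stability constraint the abstract folds into the reward function: once lam approaches c·mu, W_q blows up, so the learned policy is steered away from that regime.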

41 pages, 800 KB  
Review
Bridging Classic Operations Research and Artificial Intelligence for Network Optimization in the 6G Era: A Review
by Pablo Adasme, Ali Dehghan Firoozabadi and Enrique San Juan
Symmetry 2025, 17(8), 1279; https://doi.org/10.3390/sym17081279 - 9 Aug 2025
Viewed by 1633
Abstract
This paper comprehensively reviews how operations research and optimization procedures are applied to address challenges in wireless network communications. Key challenges such as network topology design, dynamic task scheduling, and multi-objective resource allocation are examined and systematically categorized. The review focuses on literature published between 2023 and 2025 and covers topics such as flow optimization and routing, resource allocation and scheduling, mobile and wireless network management, network resilience and robustness, and energy efficiency. The works are selected using a methodological approach ranging from exact optimization methods, such as mixed-integer programming, to heuristic/metaheuristic strategies and machine-learning-based techniques. A comparative analysis is reported in terms of computational efficiency, scalability, and practical applicability. The main contribution is to highlight current research gaps and open challenges, with particular emphasis on the integration of operations research and artificial intelligence, especially in problems modeled using graphs and network structures. Full article

17 pages, 3393 KB  
Article
Research on Distributed Collaborative Task Planning and Countermeasure Strategies for Satellites Based on Game Theory Driven Approach
by Huayu Gao, Junqi Wang, Xusheng Xu, Qiufan Yuan, Pei Wang and Daming Zhou
Remote Sens. 2025, 17(15), 2640; https://doi.org/10.3390/rs17152640 - 30 Jul 2025
Viewed by 563
Abstract
With the rapid advancement of space technology, satellites are playing an increasingly vital role in fields such as Earth observation, communication and navigation, space exploration, and military applications. Efficiently deploying satellite missions under multi-objective, multi-constraint, and dynamic environments has become a critical challenge in the current aerospace domain. This paper integrates the concepts of game theory and proposes a distributed collaborative task model suitable for on-orbit satellite mission planning. A two-player impulsive maneuver game model is constructed using differential game theory. Based on the ideas of Nash equilibrium and distributed collaboration, multi-agent technology is applied to the distributed collaborative task planning, achieving collaborative allocation and countermeasure strategies for multi-objective and multi-satellite scenarios. Experimental results demonstrate that the method proposed in this paper exhibits good adaptability and robustness in multiple impulse scheduling, maneuver strategy iteration, and heterogeneous resource utilization, providing a feasible technical approach for mission planning and game confrontation in satellite clusters. Full article
(This article belongs to the Section Satellite Missions for Earth and Planetary Exploration)

25 pages, 760 KB  
Article
Scheduling the Exchange of Context Information for Time-Triggered Adaptive Systems
by Daniel Onwuchekwa, Omar Hekal and Roman Obermaisser
Algorithms 2025, 18(8), 456; https://doi.org/10.3390/a18080456 - 22 Jul 2025
Viewed by 439
Abstract
This paper presents a novel metascheduling algorithm to enhance communication efficiency in off-chip time-triggered multi-processor system-on-chip (MPSoC) platforms, particularly for safety-critical applications in aerospace and automotive domains. Time-triggered communication standards such as time-sensitive networking (TSN) and TTEthernet effectively enable deterministic and reliable communication across distributed systems, including MPSoC-based platforms connected via Ethernet. However, their dependence on static resource allocation limits adaptability under dynamic operating conditions. To address this challenge, we propose an offline metascheduling framework that generates multiple precomputed schedules corresponding to different context events. The proposed algorithm introduces a selective communication strategy that synchronizes context information exchange with key decision points, thereby minimizing unnecessary communication while maintaining global consistency and system determinism. By leveraging knowledge of context event patterns, our method facilitates coordinated schedule transitions and significantly reduces communication overhead. Experimental results show that our approach outperforms conventional scheduling techniques, achieving a communication overhead reduction ranging from 9.89 to 32.98 times compared to a two-time-unit periodic sampling strategy. This work provides a practical and certifiable solution for introducing adaptability into Ethernet-based time-triggered MPSoC systems without compromising the predictability essential for safety certification. Full article
(This article belongs to the Special Issue Bio-Inspired Algorithms: 2nd Edition)

19 pages, 1942 KB  
Article
Adaptive Multi-Agent Reinforcement Learning with Graph Neural Networks for Dynamic Optimization in Sports Buildings
by Sen Chen, Xiaolong Chen, Qian Bao, Hongfeng Zhang and Cora Un In Wong
Buildings 2025, 15(14), 2554; https://doi.org/10.3390/buildings15142554 - 20 Jul 2025
Cited by 3 | Viewed by 1285
Abstract
The dynamic scheduling optimization of sports facilities faces challenges posed by real-time demand fluctuations and complex interdependencies between facilities. To address the adaptability limitations of traditional centralized approaches, this study proposes a decentralized multi-agent reinforcement learning framework based on graph neural networks (GNNs). Experimental results demonstrate that in a simulated environment comprising 12 heterogeneous sports facilities, the proposed method achieves an operational efficiency of 0.89 ± 0.02, representing a 13% improvement over Centralized PPO, while user satisfaction reaches 0.85 ± 0.03, a 9% enhancement. When confronted with a sudden 30% surge in demand, the system recovers in just 90 steps, 33% faster than centralized methods. The GNN attention mechanism successfully captures critical dependencies between facilities, such as the connection weight of 0.32 ± 0.04 between swimming pools and locker rooms. Computational efficiency tests show that the system maintains real-time decision-making capability within 800 ms even when scaled to 50 facilities. These results verify that the method effectively balances decentralized decision-making with global coordination while maintaining low communication overhead (0.09 ± 0.01), offering a scalable and practical solution for resource management in complex built environments. Full article
(This article belongs to the Section Architectural Design, Urban Science, and Real Estate)

32 pages, 2917 KB  
Article
Self-Adapting CPU Scheduling for Mixed Database Workloads via Hierarchical Deep Reinforcement Learning
by Suchuan Xing, Yihan Wang and Wenhe Liu
Symmetry 2025, 17(7), 1109; https://doi.org/10.3390/sym17071109 - 10 Jul 2025
Cited by 3 | Viewed by 1107
Abstract
Modern database systems require autonomous CPU scheduling frameworks that dynamically optimize resource allocation across heterogeneous workloads while maintaining strict performance guarantees. We present a novel hierarchical deep reinforcement learning framework augmented with graph neural networks to address CPU scheduling challenges in mixed database environments comprising Online Transaction Processing (OLTP), Online Analytical Processing (OLAP), vector processing, and background maintenance workloads. Our approach introduces three key innovations: first, a symmetric two-tier control architecture where a meta-controller allocates CPU budgets across workload categories using policy gradient methods while specialized sub-controllers optimize process-level resource allocation through continuous action spaces; second, graph neural network-based dependency modeling that captures complex inter-process relationships and communication patterns while preserving inherent symmetries in database architectures; and third, meta-learning integration with curiosity-driven exploration enabling rapid adaptation to previously unseen workload patterns without extensive retraining. The framework incorporates a multi-objective reward function balancing Service Level Objective (SLO) adherence, resource efficiency, symmetric fairness metrics, and system stability. Experimental evaluation through high-fidelity digital twin simulation and production deployment demonstrates substantial performance improvements: 43.5% reduction in p99 latency violations for OLTP workloads and 27.6% improvement in overall CPU utilization, with successful scaling to 10,000 concurrent processes maintaining sub-3% scheduling overhead. This work represents a significant advancement toward truly autonomous database resource management, establishing a foundation for next-generation self-optimizing database systems with implications extending to broader orchestration challenges in cloud-native architectures. Full article
(This article belongs to the Section Computer)

24 pages, 14028 KB  
Article
Heuristic-Based Scheduling of BESS for Multi-Community Large-Scale Active Distribution Network
by Ejikeme A. Amako, Ali Arzani and Satish M. Mahajan
Electricity 2025, 6(3), 36; https://doi.org/10.3390/electricity6030036 - 1 Jul 2025
Cited by 3 | Viewed by 733
Abstract
The integration of battery energy storage systems (BESSs) within active distribution networks (ADNs) entails optimized day-ahead charge/discharge scheduling to achieve effective peak shaving. The primary objective is to reduce peak demand and mitigate power deviations caused by intermittent photovoltaic (PV) output. Quasi-static time-series (QSTS) co-simulations for determining optimal heuristic solutions at each time interval are computationally intensive, particularly for large-scale systems. To address this, a two-stage intelligent BESS scheduling approach implemented in a MATLAB–OpenDSS environment with parallel processing is proposed in this paper. In the first stage, a rule-based decision tree generates initial charge/discharge setpoints for community BESS units. These setpoints are refined in the second stage using an optimization algorithm aimed at minimizing community net load power deviations and reducing peak demand. By assigning each ADN community to a dedicated CPU core, the proposed approach utilizes parallel processing to significantly reduce the execution time. Performance evaluations on an IEEE 8500-node test feeder demonstrate that the approach enhances peak shaving while reducing QSTS co-simulation execution time, utility peak demand, distribution network losses, and point of interconnection (POI) nodal voltage deviations. In addition, the use of smart inverter functions improves BESS operations by mitigating voltage violations and active power curtailment, thereby increasing the amount of energy shaved during peak demand periods. Full article
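The first-stage rule-based setpoint generation can be sketched as a threshold rule on community net load. The thresholds, power limit, and state-of-charge guards below are illustrative assumptions, not the paper's actual decision tree.

```python
# Rule-based BESS setpoint sketch: discharge above a peak threshold, charge
# below a valley threshold, idle otherwise. All limits are hypothetical.

def bess_setpoint(net_load_kw, soc, p_max_kw=250.0,
                  peak_kw=800.0, valley_kw=300.0):
    """Positive = discharge (peak shaving), negative = charge (valley filling)."""
    if net_load_kw > peak_kw and soc > 0.2:
        return min(p_max_kw, net_load_kw - peak_kw)      # shave the peak
    if net_load_kw < valley_kw and soc < 0.9:
        return -min(p_max_kw, valley_kw - net_load_kw)   # fill the valley
    return 0.0                                           # hold

print(bess_setpoint(950.0, soc=0.7))  # 150.0 -> discharge
print(bess_setpoint(100.0, soc=0.4))  # -200.0 -> charge
print(bess_setpoint(500.0, soc=0.5))  # 0.0 -> idle
```

Evaluating such a rule is cheap per time step, which is why the paper can use it to seed the second-stage optimizer and run each community on its own CPU core.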

21 pages, 11817 KB  
Article
The Proposal and Validation of a Distributed Real-Time Data Management Framework Based on Edge Computing with OPC Unified Architecture and Kafka
by Daixing Lu, Kun Wang, Yubo Wang and Ye Shen
Appl. Sci. 2025, 15(12), 6862; https://doi.org/10.3390/app15126862 - 18 Jun 2025
Viewed by 1326
Abstract
With the advent of Industry 4.0, the manufacturing industry is facing unprecedented data challenges. Sensors, PLCs, and various types of automation equipment in smart factories continue to generate massive amounts of heterogeneous data, but existing systems generally have bottlenecks in data collection standardization, real-time processing capabilities, and system scalability, which make it difficult to meet the needs of efficient collaboration and dynamic decision making. This study proposes a multi-level industrial data processing framework based on edge computing that aims to improve the response speed and processing ability of manufacturing sites to data and to realize real-time decision making and lean management of intelligent manufacturing. At the edge layer, the OPC UA (OPC Unified Architecture) protocol is used to realize the standardized collection of heterogeneous equipment data, and a lightweight edge-computing algorithm is designed to complete the analysis and processing of data so as to realize a visualization of the manufacturing process and the inventory in a production workshop. In the storage layer, Apache Kafka is used to implement efficient data stream processing and improve the throughput and scalability of the system. The test results show that compared with the traditional workshop, the framework has excellent performance in improving the system throughput capacity and real-time response speed, can effectively support production process judgment and status analysis on the edge side, and can realize the real-time monitoring and management of the entire manufacturing workshop. This research provides a practical solution for the industrial data management system, not only helping enterprises improve the transparency level of manufacturing sites and the efficiency of resource scheduling but also providing a practical basis for further research on industrial data processing under the “edge-cloud collaboration” architecture in the academic community. Full article
(This article belongs to the Section Applied Industrial Technologies)

44 pages, 4373 KB  
Review
Recent Advances in Multi-Agent Reinforcement Learning for Intelligent Automation and Control of Water Environment Systems
by Lei Jia and Yan Pei
Machines 2025, 13(6), 503; https://doi.org/10.3390/machines13060503 - 9 Jun 2025
Cited by 3 | Viewed by 6770
Abstract
Multi-agent reinforcement learning (MARL) has demonstrated significant application potential in addressing cooperative control, policy optimization, and task allocation problems in complex systems. This paper focuses on its applications and development in water environmental systems, providing a systematic review of the theoretical foundations of multi-agent systems and reinforcement learning and summarizing three representative categories of mainstream MARL algorithms. Typical control scenarios in water systems are also examined. From the perspective of cooperative control, this paper investigates the modeling mechanisms and policy coordination strategies of MARL in key tasks such as water supply scheduling, hydro-energy co-regulation, and autonomous monitoring. It further analyzes the challenges and solutions for improving global cooperative efficiency under practical constraints such as limited resources, system heterogeneity, and unstable communication. Additionally, recent progress in cross-domain generalization, integrated communication–perception frameworks, and system-level robustness enhancement is summarized. This work aims to provide a theoretical foundation and key insights for advancing research and practical applications of MARL-based intelligent control in water infrastructure systems. Full article
(This article belongs to the Special Issue Recent Developments in Machine Design, Automation and Robotics)
