Search Results (133)

Search Parameters:
Keywords = gate scheduling

25 pages, 1734 KiB  
Article
A Multimodal Affective Interaction Architecture Integrating BERT-Based Semantic Understanding and VITS-Based Emotional Speech Synthesis
by Yanhong Yuan, Shuangsheng Duo, Xuming Tong and Yapeng Wang
Algorithms 2025, 18(8), 513; https://doi.org/10.3390/a18080513 - 14 Aug 2025
Abstract
Addressing the issues of coarse emotional representation, low cross-modal alignment efficiency, and insufficient real-time response capabilities in current human–computer emotional language interaction, this paper proposes an affective interaction framework integrating BERT-based semantic understanding with VITS-based speech synthesis. The framework aims to enhance the naturalness, expressiveness, and response efficiency of human–computer emotional interaction. By introducing a modular layered design, a six-dimensional emotional space, a gated attention mechanism, and a dynamic model scheduling strategy, the system overcomes challenges such as limited emotional representation, modality misalignment, and high-latency responses. Experimental results demonstrate that the framework achieves superior performance in speech synthesis quality (MOS: 4.35), emotion recognition accuracy (91.6%), and response latency (<1.2 s), outperforming baseline models like Tacotron2 and FastSpeech2. Through model lightweighting, GPU parallel inference, and load balancing optimization, the system validates its robustness and generalizability across English and Chinese corpora in cross-linguistic tests. The modular architecture and dynamic scheduling ensure scalability and efficiency, enabling a more humanized and immersive interaction experience in typical application scenarios such as psychological companionship, intelligent education, and high-concurrency customer service. This study provides an effective technical pathway for developing the next generation of personalized and immersive affective intelligent interaction systems. Full article
(This article belongs to the Section Algorithms for Multidisciplinary Applications)
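The abstract above mentions a gated attention mechanism for fusing modalities but gives no formula. A minimal sketch of one common form — sigmoid-gated fusion of two modality embeddings — is shown below; the weights, dimensions, and variable names are illustrative, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gated_fusion(text_emb, audio_emb, W, b):
    """Fuse two modality embeddings with a learned sigmoid gate.

    The gate g decides, per dimension, how much of the text embedding
    to keep versus the audio embedding:
        fused = g * text + (1 - g) * audio
    so each fused component is a convex combination of the two inputs.
    """
    g = sigmoid(np.concatenate([text_emb, audio_emb]) @ W + b)
    return g * text_emb + (1.0 - g) * audio_emb

d = 8  # hypothetical embedding dimension
text_emb = rng.standard_normal(d)
audio_emb = rng.standard_normal(d)
W = rng.standard_normal((2 * d, d)) * 0.1  # random weights for the sketch
b = np.zeros(d)

fused = gated_fusion(text_emb, audio_emb, W, b)
```

Because the gate output lies in (0, 1), the fused vector is always bounded elementwise by the two inputs, which is what lets the gate softly suppress a noisy modality.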

32 pages, 2110 KiB  
Article
Self-Attention Mechanisms in HPC Job Scheduling: A Novel Framework Combining Gated Transformers and Enhanced PPO
by Xu Gao, Hang Dong, Lianji Zhang, Yibo Wang, Xianliang Yang and Zhenyu Li
Appl. Sci. 2025, 15(16), 8928; https://doi.org/10.3390/app15168928 - 13 Aug 2025
Viewed by 108
Abstract
In HPC systems, job scheduling plays a critical role in determining resource allocation and task execution order. With the continuous expansion of computing scale and increasing system complexity, modern HPC scheduling faces two major challenges: a massive decision space spanning tens of thousands of computing nodes and large job queues, and complex temporal dependencies between jobs under dynamically changing resource states. Traditional heuristic algorithms and basic reinforcement learning methods often struggle to address these challenges effectively in dynamic HPC environments. This study proposes a novel scheduling framework that combines GTrXL with PPO, achieving significant performance improvements through multiple technical innovations. The framework leverages the sequence modeling capabilities of the Transformer architecture and selectively filters relevant historical scheduling information through a dual-gate mechanism, improving long-sequence modeling efficiency compared to standard Transformers. The proposed SECT module further enhances resource awareness through dynamic feature recalibration, achieving improved system utilization compared to similar attention mechanisms. Experimental results on multiple datasets (ANL-Intrepid, Alibaba, SDSC-SP2) demonstrate that the proposed components achieve significant performance improvements over baseline PPO implementations. Comprehensive evaluations on synthetic workloads and real HPC trace data show improvements in resource utilization and waiting time, particularly under high-load conditions, while maintaining good robustness across various cluster configurations. Full article
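The gating the abstract attributes to GTrXL can be illustrated with one of the simpler gating variants used in gated Transformers: a sigmoid gate that blends the residual-stream input with the sublayer output instead of summing them. This is a sketch, not the paper's exact dual-gate design; all shapes and initializations are illustrative:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gated_residual(x, y, Wg, bg):
    """Gated residual connection in the GTrXL spirit: instead of the
    standard Transformer residual x + y, a sigmoid gate g blends the
    stream input x with the sublayer output y. Initializing bg strongly
    negative keeps the gate near the identity mapping early in training,
    which is the property credited with stabilizing RL training.
    """
    g = sigmoid(x @ Wg + bg)
    return (1.0 - g) * x + g * y

x = np.ones(4)          # residual stream input
y = np.full(4, 5.0)     # sublayer (attention / FFN) output
Wg = np.zeros((4, 4))   # illustrative gate weights

out = gated_residual(x, y, Wg, np.full(4, -10.0))  # gate ~ 0: near-identity
```

With the bias at -10 the gate passes almost none of the sublayer output, so the layer behaves like the identity; as training moves the bias, the gate learns how much history-dependent signal to admit.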

31 pages, 2529 KiB  
Article
Improving the Heat Transfer Efficiency of Economizers: A Comprehensive Strategy Based on Machine Learning and Quantile Ideas
by Nan Wang, Yuanhao Shi, Fangshu Cui, Jie Wen, Jianfang Jia and Bohui Wang
Energies 2025, 18(16), 4227; https://doi.org/10.3390/en18164227 - 8 Aug 2025
Viewed by 201
Abstract
Ash deposition on economizer heating surfaces degrades convective heat transfer efficiency and compromises boiler operational stability in coal-fired power plants. Conventional time-scheduled soot blowing strategies partially mitigate this issue but often cause excessive steam/energy consumption, conflicting with enterprise cost-saving and efficiency-enhancement goals. This study introduces an integrated framework combining real-time ash monitoring, dynamic process modeling, and predictive optimization to address these challenges. A modified soot blowing protocol was developed using combustion process parameters to quantify heating surface cleanliness via a cleanliness factor (CF) dataset. A comprehensive model of the attenuation of heat transfer efficiency was constructed by analyzing the full-cycle interaction between ash accumulation, blowing operations, and post-blowing refouling, incorporating steam consumption during blowing phases. An optimized subtraction-based mean value algorithm was applied to minimize the cumulative attenuation of heat transfer efficiency by determining optimal blowing initiation/cessation thresholds. Furthermore, a bidirectional gated recurrent unit network with quantile regression (BiGRU-QR) was implemented for probabilistic blowing time prediction, capturing data distribution characteristics and prediction uncertainties. Validation on a 300 MW supercritical boiler in Guizhou demonstrated a 3.96% energy efficiency improvement, providing a practical solution for sustainable coal-fired power generation operations. Full article
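The quantile-regression component (BiGRU-QR) trains with the pinball loss, which is standard for probabilistic prediction; a minimal self-contained version (the network itself is omitted) looks like this:

```python
import numpy as np

def pinball_loss(y_true, y_pred, q):
    """Quantile (pinball) loss for quantile level q in (0, 1).

    Under-prediction (y_true > y_pred) is penalized with weight q and
    over-prediction with weight (1 - q), so a model minimizing this loss
    converges to the q-th conditional quantile rather than the mean --
    which is how BiGRU-QR-style models express prediction uncertainty.
    """
    e = np.asarray(y_true, dtype=float) - np.asarray(y_pred, dtype=float)
    return float(np.mean(np.maximum(q * e, (q - 1.0) * e)))
```

For q = 0.9, missing low by 1 unit costs 0.9 while missing high by 1 unit costs only 0.1, pushing the prediction toward the upper tail of the blowing-time distribution.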

25 pages, 3159 KiB  
Article
CLIP-BCA-Gated: A Dynamic Multimodal Framework for Real-Time Humanitarian Crisis Classification with Bi-Cross-Attention and Adaptive Gating
by Shanshan Li, Qingjie Liu, Zhian Pan and Xucheng Wu
Appl. Sci. 2025, 15(15), 8758; https://doi.org/10.3390/app15158758 - 7 Aug 2025
Viewed by 251
Abstract
During humanitarian crises, social media generates over 30 million multimodal tweets daily, but 20% textual noise, 40% cross-modal misalignment, and severe class imbalance (4.1% rare classes) hinder effective classification. This study presents CLIP-BCA-Gated, a dynamic multimodal framework that integrates bidirectional cross-attention (Bi-Cross-Attention) and adaptive gating within the CLIP architecture to address these challenges. The Bi-Cross-Attention module enables fine-grained cross-modal semantic alignment, while the adaptive gating mechanism dynamically weights modalities to suppress noise. Hierarchical learning rate scheduling and multidimensional data augmentation further optimize feature fusion for real-time multiclass classification. On the CrisisMMD benchmark, CLIP-BCA-Gated achieves 91.77% classification accuracy (1.55% higher than baseline CLIP and 2.33% over state-of-the-art ALIGN), with exceptional recall for critical categories: infrastructure damage (93.42%) and rescue efforts (92.15%). The model processes tweets at 0.083 s per instance, meeting real-time deployment requirements for emergency response systems. Ablation studies show Bi-Cross-Attention contributes 2.54% accuracy improvement, and adaptive gating contributes 1.12%. This work demonstrates that dynamic multimodal fusion enhances resilience to noisy social media data, directly supporting SDG 11 through scalable real-time disaster information triage. The framework’s noise-robust design and sub-second inference make it a practical solution for humanitarian organizations requiring rapid crisis categorization. Full article
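Bidirectional cross-attention, as described above, runs scaled dot-product attention in both directions between the modalities. A stripped-down sketch (linear projections omitted; token counts and dimensions are arbitrary examples, not the paper's configuration):

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(queries, keys_values):
    """One direction of cross-attention: each query token attends over
    all tokens of the other modality and returns a context vector.
    Q/K/V projection matrices are omitted for brevity."""
    d = queries.shape[-1]
    scores = queries @ keys_values.T / np.sqrt(d)  # (n_q, n_kv)
    return softmax(scores, axis=-1) @ keys_values  # (n_q, d)

# Bi-cross-attention: text attends to image patches and vice versa.
rng = np.random.default_rng(1)
text = rng.standard_normal((5, 16))    # 5 text tokens, dim 16
image = rng.standard_normal((9, 16))   # 9 image patches, dim 16

text_ctx = cross_attention(text, image)   # image-informed text features
image_ctx = cross_attention(image, text)  # text-informed image features
```

The two context tensors are what an adaptive gate (as in the previous entry's sense) would then weight before classification.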

25 pages, 22731 KiB  
Article
Scalable and Efficient GCL Scheduling for Time-Aware Shaping in Autonomous and Cyber-Physical Systems
by Chengwei Zhang and Yun Wang
Future Internet 2025, 17(8), 321; https://doi.org/10.3390/fi17080321 - 22 Jul 2025
Viewed by 279
Abstract
The evolution of the internet towards supporting time-critical applications, such as industrial cyber-physical systems (CPSs) and autonomous systems, has created an urgent demand for networks capable of providing deterministic, low-latency communication. Autonomous vehicles represent a particularly challenging use case within this domain, requiring both reliability and determinism for massive data streams—a requirement that traditional Ethernet technologies cannot satisfy. This paper addresses this critical gap by proposing a comprehensive scheduling framework based on Time-Aware Shaping (TAS) within the Time-Sensitive Networking (TSN) standard. The framework features two key contributions: (1) a novel baseline scheduling algorithm that incorporates a sub-flow division mechanism to enhance schedulability for high-bandwidth streams, computing Gate Control Lists (GCLs) via an iterative SMT-based method; (2) a separate heuristic-based computation acceleration algorithm to enable fast, scalable GCL generation for large-scale networks. Through extensive simulations, the proposed baseline algorithm demonstrates a reduction in end-to-end latency of up to 59% compared to standard methods, with jitter controlled at the nanosecond level. The acceleration algorithm is shown to compute schedules for 200 data streams in approximately one second. The framework’s effectiveness is further validated on a real-world TSN hardware testbed, confirming its capability to achieve deterministic transmission with low latency and jitter in a physical environment. This work provides a practical and scalable solution for deploying deterministic communication in complex autonomous and cyber-physical systems. Full article
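A Gate Control List in IEEE 802.1Qbv Time-Aware Shaping is a cyclic sequence of (duration, gate-state bitmask) entries. The toy builder below reserves one exclusive window per cycle for a time-triggered queue; it is a hypothetical illustration of GCL structure, not the paper's SMT-based computation:

```python
from dataclasses import dataclass

@dataclass
class GclEntry:
    duration_ns: int
    gate_states: int  # bitmask: bit i set = queue i's gate is open

def build_gcl(cycle_ns, tt_offset_ns, tt_window_ns,
              tt_queue=7, be_mask=0b0111_1111):
    """Toy GCL: best-effort queues transmit until tt_offset_ns, then the
    time-triggered queue gets an exclusive window of tt_window_ns, then
    best-effort resumes for the remainder of the cycle."""
    entries = []
    if tt_offset_ns > 0:
        entries.append(GclEntry(tt_offset_ns, be_mask))
    entries.append(GclEntry(tt_window_ns, 1 << tt_queue))
    rest = cycle_ns - tt_offset_ns - tt_window_ns
    if rest > 0:
        entries.append(GclEntry(rest, be_mask))
    return entries

gcl = build_gcl(cycle_ns=1_000_000, tt_offset_ns=100_000, tt_window_ns=200_000)
```

Real schedulers (like the SMT-based one described) must pick offsets and windows jointly across all switches so streams meet end-to-end deadlines; this sketch only shows what the resulting per-port list looks like.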

5 pages, 1345 KiB  
Proceeding Paper
Improving Predictive Maintenance Performance Using Machine Learning and Vibration Analysis Algorithms
by Ibtissam Elharnaf, Khadija Achtaich and Samir Tetouani
Eng. Proc. 2025, 97(1), 45; https://doi.org/10.3390/engproc2025097045 - 2 Jul 2025
Viewed by 574
Abstract
This research examines advanced machine learning techniques for the predictive maintenance of industrial machinery. A hybrid model combining long short-term memory (LSTM) and gated recurrent unit (GRU) networks alongside a random forest classifier was created using vibration data collected from sensors for fault classification. The method includes feature extraction, time-series analysis, and classification, exploiting the strengths of these models to efficiently handle sequential data. The results show significant improvements in forecasting accuracy, reduced downtime, and better-aligned maintenance schedules. These advancements demonstrate the capability of integrating AI-driven solutions into industrial systems, consistent with Industry 4.0 principles, to improve operational capabilities. Full article

23 pages, 3292 KiB  
Article
Multi-Objective Optimal Scheduling of Water Transmission and Distribution Channel Gate Groups Based on Machine Learning
by Yiying Du, Chaoyue Zhang, Rong Wei, Li Cao, Tiantian Zhao, Wene Wang and Xiaotao Hu
Agriculture 2025, 15(13), 1344; https://doi.org/10.3390/agriculture15131344 - 23 Jun 2025
Viewed by 466
Abstract
This study develops a synergistic optimization method for multiple gates integrating hydrodynamic simulation and data-driven methods, with the goal of improving the accuracy of water distribution and regulation efficiency. This approach addresses the challenges of large prediction deviation of hydraulic response and unclear synergy mechanisms in the coupled regulation of multiple gates in irrigation areas. The NSGA-II multi-objective optimization algorithm is used, with minimization of the water distribution error and the water level deviation in front of the gate as the objective functions, to achieve global optimization of the regulation of the complex canal system. A one-dimensional hydrodynamic model based on the Saint-Venant equations is built to generate the feature dataset, which is then combined with the random forest algorithm to create a nonlinear prediction model. An example analysis demonstrates that the optimal feedforward time of the open-channel gate group is negatively correlated with the flow condition and that the method can keep the water distribution error within 13.97% and the water level error within 13%. In addition to revealing the matching mechanism between the feedforward time and the flow condition, the study offers a stable and accurate solution for the cooperative regulation of multiple gates in irrigation districts. This effectively supports the need for precise water distribution in small irrigation districts. Full article
(This article belongs to the Section Agricultural Water Management)
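NSGA-II, used above with two minimization objectives (water distribution error and water level deviation), rests on Pareto dominance. A minimal sketch of the dominance test and a brute-force first front (illustrative only; the full algorithm adds fast non-dominated sorting and crowding distance):

```python
def dominates(a, b):
    """True if solution a Pareto-dominates b (minimization): a is no
    worse in every objective and strictly better in at least one."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def pareto_front(points):
    """Solutions not dominated by any other solution (the first front)."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

# Hypothetical (distribution_error, level_deviation) pairs for 4 schemes:
schemes = [(1.0, 2.0), (2.0, 1.0), (3.0, 3.0), (1.0, 1.0)]
front = pareto_front(schemes)
```

Here (1.0, 1.0) dominates every other scheme, so it is the lone member of the first front; in realistic runs the front is a trade-off curve between the two errors.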

30 pages, 5003 KiB  
Article
A Novel Truck Appointment System for Container Terminals
by Fatima Bouyahia, Sara Belaqziz, Youssef Meliani, Saâd Lissane Elhaq and Jaouad Boukachour
Sustainability 2025, 17(13), 5740; https://doi.org/10.3390/su17135740 - 22 Jun 2025
Viewed by 582
Abstract
Due to increased container traffic, congestion at terminal gates generates serious air pollution and decreases terminal efficiency. To address this issue, many terminals are implementing a truck appointment system (TAS) based on several concepts. Our work addresses gate congestion at a container terminal. A conceptual model was developed to identify system components and interactions, analyzing container flow from both static and dynamic perspectives. A TAS was modeled to optimize waiting times using a non-stationary approach. Compared to existing methods, our TAS introduces a more adaptive scheduling mechanism that dynamically adjusts to fluctuating truck arrivals, reducing peak congestion and improving resource utilization. Unlike traditional static appointment systems, our approach helps reduce truckers' dissatisfaction caused by the deviation between the preferred time and the assigned one, leading to smoother operations. Various genetic algorithms were tested, with a hybrid genetic–tabu search approach yielding better results by improving solution stability and reducing computational time. The model was applied and adapted to the Port of Casablanca using real-world data. The results clearly highlight a significant potential to enhance sustainability, with an annual reduction of 785 tons of CO2 emissions from a total of 1281 tons. Regarding trucker dissatisfaction, measured by the percentage of trucks rescheduled from their preferred times, only 7.8% of arrivals were affected. This improvement, coupled with a 62% decrease in the maximum queue length, further promotes efficient and sustainable operations. Full article
(This article belongs to the Special Issue Innovations for Sustainable Multimodality Transportation)
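The dissatisfaction metric the abstract reports (share of trucks moved off their preferred slot) can be made concrete with a toy capacity-constrained assignment. This greedy sketch is purely illustrative, not the paper's genetic–tabu method:

```python
def assign_appointments(preferred, capacity):
    """Greedy toy TAS: give each truck the feasible slot closest to its
    preferred one; capacity[s] is the maximum number of trucks slot s
    can absorb. All inputs are hypothetical."""
    load = [0] * len(capacity)
    assigned = []
    for p in preferred:
        # try slots in order of increasing deviation from the preference
        for s in sorted(range(len(capacity)), key=lambda s: abs(s - p)):
            if load[s] < capacity[s]:
                load[s] += 1
                assigned.append(s)
                break
    return assigned

def dissatisfaction(preferred, assigned):
    """Fraction of trucks rescheduled away from their preferred slot --
    the metric reported as 7.8% in the study."""
    moved = sum(p != a for p, a in zip(preferred, assigned))
    return moved / len(preferred)

assigned = assign_appointments(preferred=[0, 0, 0], capacity=[2, 2])
```

With three trucks all preferring slot 0 and a per-slot capacity of 2, one truck is pushed to slot 1, giving a dissatisfaction of 1/3.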

23 pages, 1101 KiB  
Article
QELPS Algorithm: A Novel Dynamic Optimization Technology for Quantum Circuits Scheduling Engineering Problems
by Zuoqiang Du, Xingjie Li and Hui Li
Appl. Sci. 2025, 15(11), 6373; https://doi.org/10.3390/app15116373 - 5 Jun 2025
Viewed by 794
Abstract
In the noisy medium-scale quantum era, quantum computers are constrained by a limited number of qubits, restricted physical topological structures, and interference from environmental noise, making efficient and stable circuit scheduling a significant challenge. To improve the feasibility of quantum computing, it is essential to optimize the scheduling of quantum gates and the insertion of SWAP gates, reducing running time and enhancing computational efficiency. We propose a collaborative optimization framework that integrates the Quantum Exchange Lock Parallel Scheduler (QELPS) with the Full-level Joint Optimization SWAP Algorithm (FJOSA). In QELPS, SWAP conflict characteristics are used to adjust the layout of quantum gates across different levels while considering physical constraints and dynamically adapting to the circuit’s execution state. Quantum lock parallel technology enables the selective postponement of certain quantum gates, minimizing circuit depth and mitigating inefficiencies caused by excessive SWAP gate insertions. Meanwhile, FJOSA employs a cross-layer optimization strategy that combines heuristic algorithms with cost functions to improve gate scheduling at a global level. This approach effectively reduces quantum gate conflicts found in traditional methods and optimizes execution order, leading to better computational efficiency and circuit performance. Experimental results show that, compared to the traditional 2QAN algorithm, QELPS and FJOSA reduce additional gate insertions by 85.59% and 89.38%, respectively, while decreasing running time by 56.32% and 66.47%. These improvements confirm that the proposed method significantly enhances circuit scheduling efficiency and reduces resource consumption, making it a promising approach for optimizing quantum computation. Full article
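The SWAP-insertion problem the abstract optimizes can be illustrated with a generic router: when a two-qubit gate targets physically non-adjacent qubits, SWAPs are inserted along a shortest path in the device coupling graph until the operands are neighbors. This is the baseline technique, not QELPS itself:

```python
from collections import deque

def shortest_path(coupling, src, dst):
    """BFS shortest path over the device coupling graph, given as an
    adjacency list {qubit: [neighbors]}."""
    prev = {src: None}
    q = deque([src])
    while q:
        u = q.popleft()
        if u == dst:
            break
        for v in coupling[u]:
            if v not in prev:
                prev[v] = u
                q.append(v)
    path = [dst]
    while prev[path[-1]] is not None:
        path.append(prev[path[-1]])
    return path[::-1]

def swaps_for_cnot(coupling, q0, q1):
    """SWAPs that make q0 adjacent to q1: move q0 along the path until
    one hop remains. Each SWAP costs extra gates, which is why schedulers
    like QELPS work hard to minimize these insertions."""
    path = shortest_path(coupling, q0, q1)
    return [(path[i], path[i + 1]) for i in range(len(path) - 2)]

# Linear 4-qubit device: 0-1-2-3
linear = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
```

A CNOT between qubits 0 and 3 on this line needs two SWAPs; an already-adjacent pair needs none, which is the degenerate case a good scheduler tries to maximize.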

13 pages, 3247 KiB  
Article
Multiscale Water Cycle Mechanisms and Return Flow Utilization in Paddy Fields of Plain Irrigation Districts
by Jie Zhang, Yujiang Xiong, Peihua Jiang, Niannian Yuan and Fengli Liu
Agriculture 2025, 15(11), 1178; https://doi.org/10.3390/agriculture15111178 - 29 May 2025
Viewed by 367
Abstract
This study aimed to reveal the characteristics of returned water in paddy fields at different scales and the rules of its reuse in China’s Ganfu Plain Irrigation District through multiscale (field, lateral canal, main canal, small watershed) observations, thereby optimizing water resource management and improving water use efficiency. Subsequent investigations during the 2021–2022 double-cropping rice seasons revealed that the tillering stage emerged as a critical drainage period, with 49.5% and 52.2% of total drainage occurring during this phase in early and late rice, respectively. Multiscale drainage heterogeneity displayed distinct patterns, with early rice following a “decrease-increase” trend while late rice exhibited “decrease-peak-decline” dynamics. Smaller scales (field and lateral canal) produced 37.1% higher drainage than larger scales (main canal and small watershed) during the reviving stage. In contrast, post-jointing-booting stages showed 103.6% higher drainage at larger scales. Return flow utilization peaked at the field-lateral canal scales, while dynamic regulation of Fangxi Lake’s storage capacity achieved 60% reuse efficiency at the watershed scale. We propose an integrated optimization strategy combining tillering-stage irrigation/drainage control, multiscale hydraulic interception (control gates and pond weirs), and dynamic watershed storage scheduling. This framework provides theoretical and practical insights for enhancing water use efficiency and mitigating non-point source pollution in plain irrigation districts. Full article
(This article belongs to the Section Agricultural Water Management)

19 pages, 4428 KiB  
Article
Research on the Impact of Gate Engineering on Seawater Exchange Capacity
by Mingchang Li, Xinran Jiang and Aizhen Liu
J. Mar. Sci. Eng. 2025, 13(6), 1078; https://doi.org/10.3390/jmse13061078 - 29 May 2025
Viewed by 397
Abstract
Over the past two decades, extensive coastal development in China has led to numerous small-scale enclosed coastal water bodies. Owing to complex shoreline geometries, these areas suffer from disturbed hydrodynamic conditions and weak water exchange, which quickly lead to sediment accumulation and difficulty in maintaining ecological water levels, posing serious environmental threats. Enhancing seawater exchange capacity and achieving coordinated optimization of exchange efficiency and ecological water level are critical prerequisites for the environmental restoration of eutrophic enclosed coastal areas. This study takes the Ligao Block in Tianjin as a case study and proposes a real-time sluice gate regulation scheme. By incorporating hydrodynamic conditions, engineering layout, and current characteristics of the benthic substrate environment, the number, width, location, and operation modes of sluice gates are optimized to maximize water exchange efficiency while maintaining natural flow patterns. The results of the numerical simulation of hydrodynamic exchange and intelligent optimization analysis reveal that the optimal sluice gate operation strategy should be tailored to regional tidal flow characteristics and substrate conditions. Through intelligent scheduling of exchange sluice gates, systematic gate parameter optimization, and active control of gate opening, this approach achieves intelligent seawater exchange, optimized flow dynamics, active exchange, and sustained ecological water levels in enclosed coastal water bodies. Full article
(This article belongs to the Section Ocean Engineering)

20 pages, 1502 KiB  
Article
Quantum Firefly Algorithm: A Novel Approach for Quantum Circuit Scheduling Optimization
by Zuoqiang Du, Jiepeng Wang and Hui Li
Electronics 2025, 14(11), 2123; https://doi.org/10.3390/electronics14112123 - 23 May 2025
Viewed by 579
Abstract
In the noisy intermediate-scale quantum (NISQ) era, as the scale of existing quantum hardware continues to expand, the demand for effective methods to schedule quantum gates and minimize the number of operations has become increasingly urgent. To address this demand, the Quantum Firefly Algorithm (QFA) has been designed by incorporating quantum information into the traditional firefly algorithm. This integration enables fireflies to explore multiple positions simultaneously, thereby increasing search space coverage and utilizing quantum tunneling effects to escape local optima. Through wave function evolution and collapse mechanisms described by the Schrödinger equation, a balance between exploring new solutions and exploiting known solutions is achieved by the QFA. Additionally, random perturbation steps are incorporated into the algorithm to enhance search diversity and prevent the algorithm from being trapped in local optima. In quantum circuit scheduling problems, the QFA optimizes quantum gate operation sequences by evaluating the fitness of scheduling schemes, reducing circuit depth and movement operations, while improving parallelism. Experimental results demonstrate that, compared to traditional algorithms, the QFA reduces SWAP gates by an average of 44% and CNOT gates by an average of 16%. When compared to modern algorithms, it reduces SWAP gates by an average of 7% and CNOT gates by an average of 12%. Full article
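The classical firefly update that the QFA extends is compact enough to state directly: a firefly moves toward any brighter firefly with an attractiveness that decays with distance, plus a random perturbation. This sketch shows the classical step only; the quantum superposition and tunneling mechanisms described above are additions on top of it, and all coefficients here are illustrative defaults:

```python
import numpy as np

def firefly_move(xi, xj, beta0=1.0, gamma=0.1, alpha=0.05, rng=None):
    """Classical firefly step: xi moves toward the brighter firefly xj.

    Attractiveness beta = beta0 * exp(-gamma * r^2) decays with the
    squared distance r^2, and an alpha-scaled random perturbation keeps
    the search diverse (the role the abstract assigns to its random
    perturbation steps)."""
    rng = rng or np.random.default_rng(0)
    r2 = float(np.sum((xi - xj) ** 2))
    beta = beta0 * np.exp(-gamma * r2)
    return xi + beta * (xj - xi) + alpha * (rng.random(xi.shape) - 0.5)
```

With gamma = 0 and alpha = 0 the move lands exactly on the brighter firefly; nonzero gamma weakens the pull at long range, which is where the QFA's tunneling is meant to help escape local optima.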

14 pages, 753 KiB  
Article
A Hybrid Deep Learning-Based Load Forecasting Model for Logical Range
by Hao Chen and Zheng Dang
Appl. Sci. 2025, 15(10), 5628; https://doi.org/10.3390/app15105628 - 18 May 2025
Cited by 1 | Viewed by 392
Abstract
The Logical Range is a mission-oriented, reconfigurable environment that integrates testing, training, and simulation by virtually connecting distributed systems. In such environments, task-processing devices often experience highly dynamic workloads due to varying task demands, leading to scheduling inefficiencies and increased latency. To address this, we propose GCSG, a hybrid load forecasting model tailored for Logical Range operations. GCSG transforms time-series device load data into image representations using Gramian Angular Field (GAF) encoding, extracts spatial features via a Convolutional Neural Network (CNN) enhanced with a Squeeze-and-Excitation network (SENet), and captures temporal dependencies using a Gated Recurrent Unit (GRU). Through the integration of spatial–temporal features, GCSG enables accurate load forecasting, supporting more efficient resource scheduling. Experiments show that GCSG achieves an R2 of 0.86, MAE of 4.5, and MSE of 34, outperforming baseline models in terms of both accuracy and generalization. Full article
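The GAF encoding step in GCSG is a standard transform: rescale the series to [-1, 1], map each value to an angle, and build a matrix of pairwise angular sums (or differences) that a CNN can treat as an image. A minimal version, consistent with the usual GASF/GADF definitions:

```python
import numpy as np

def gramian_angular_field(series, summation=True):
    """Gramian Angular Field encoding of a 1-D series.

    Steps: min-max rescale to [-1, 1]; map each value to an angle
    phi = arccos(x); return the image cos(phi_i + phi_j) (GASF, the
    summation field) or sin(phi_i - phi_j) (GADF, the difference field).
    """
    x = np.asarray(series, dtype=float)
    x = 2.0 * (x - x.min()) / (x.max() - x.min()) - 1.0
    phi = np.arccos(np.clip(x, -1.0, 1.0))
    if summation:
        return np.cos(phi[:, None] + phi[None, :])
    return np.sin(phi[:, None] - phi[None, :])

G = gramian_angular_field([0.0, 1.0, 2.0, 3.0])
```

An n-point load series becomes an n by n symmetric image whose diagonal encodes the original values (cos(2 phi) = 2x^2 - 1), giving the CNN/SENet stage spatial structure to work with.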

31 pages, 4826 KiB  
Article
Hybrid CNN-GRU Forecasting and Improved Teaching–Learning-Based Optimization for Cost-Efficient Microgrid Energy Management
by Mishal Alharbi and Ali S. Alghamdi
Processes 2025, 13(5), 1452; https://doi.org/10.3390/pr13051452 - 9 May 2025
Viewed by 770
Abstract
In this paper, a two-stage framework is proposed for the energy management of microgrids, which combines a hybrid Convolutional Neural Network-Gated Recurrent Unit (CNN-GRU) forecast model and the Improved Teaching–Learning-Based Optimization (ITLBO) algorithm. The CNN-GRU model captures spatiotemporal patterns in historical data for effective quantification of renewable energy and load demand uncertainty, while the ITLBO algorithm improves generation scheduling performance through adaptive luminance coefficients, Latin Hypercube initialization, and hybrid genetic operations. The proposed framework is compared against standalone CNN and MLANN forecasting models and three popular optimization algorithms (PSO, TLBO, and CO) across four cases: baseline (perfect foresight), CNN-GRU forecast, CNN forecast, and MLANN forecast. The results show that the hybrid framework outperforms dedicated, in-domain models for forecasting and scheduling, with the CNN-GRU sliding-window model producing the best forecasting accuracy, which subsequently translates into near-optimal scheduling performance. Extensive experiments show that the ITLBO algorithm is robust and outperforms the classical optimization methods in convergence speed and solution quality while substantially mitigating the uncertainty introduced by forecast errors. The framework also incorporates demand response, which boosts operational efficiency by reducing peak grid usage without sacrificing affordability. According to the results, the hybrid framework exhibits significant cost-efficiency, reducing the RMSE of solar irradiance forecasting by 11.6% compared to standalone CNN and achieving a 69.7% reduction in operational costs under ITLBO optimization. The comparative analysis emphasizes the robustness and versatility of the framework, reinforcing its feasibility across a range of forecasting and optimization scenarios for real-world microgrid deployment. Full article
(This article belongs to the Section Energy Systems)
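The teacher phase of classical TLBO, which ITLBO extends, moves every learner toward the current best solution relative to the class mean. A minimal sketch for a minimization problem (the paper's improvements — adaptive coefficients, Latin Hypercube initialization, genetic operators — are not shown):

```python
import numpy as np

def tlbo_teacher_phase(pop, fitness, rng=None):
    """Classical TLBO teacher phase (minimization).

    The best individual acts as the teacher; each learner X moves via
        X_new = X + r * (Teacher - TF * Mean)
    where r is uniform in [0, 1) per dimension and the teaching factor
    TF is drawn as 1 or 2. In practice X_new replaces X only if it
    improves fitness (greedy selection, omitted here)."""
    pop = np.asarray(pop, dtype=float)
    teacher = pop[np.argmin(fitness)]
    mean = pop.mean(axis=0)
    rng = rng or np.random.default_rng(0)
    tf = rng.integers(1, 3)            # teaching factor: 1 or 2
    r = rng.random(pop.shape)
    return pop + r * (teacher - tf * mean)

pop = np.random.default_rng(42).standard_normal((5, 3))  # 5 schedules, 3 vars
fitness = np.sum(pop ** 2, axis=1)                       # toy objective
new_pop = tlbo_teacher_phase(pop, fitness)
```

TLBO's appeal for scheduling problems is that it has no algorithm-specific tuning parameters beyond population size and iteration count, which is the baseline ITLBO then improves on.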

19 pages, 494 KiB  
Article
Hardware-Accelerated Data Readout Platform Using Heterogeneous Computing for DNA Data Storage
by Xiaopeng Gou, Qi Ge, Quan Guo, Menghui Ren, Tingting Qi, Rui Qin and Weigang Chen
Appl. Sci. 2025, 15(9), 5050; https://doi.org/10.3390/app15095050 - 1 May 2025
Viewed by 510
Abstract
DNA data storage has emerged as a promising alternative to traditional storage media due to its high density and durability. However, large-scale DNA storage systems generate massive sequencing reads, posing substantial computational complexity and latency challenges for data readout. Here, we propose a novel heterogeneous computing architecture based on a field-programmable gate array (FPGA) to accelerate DNA data readout. The software component, running on a general computing platform, manages data distribution and schedules acceleration kernels. Meanwhile, the hardware acceleration kernel is deployed on an Alveo U200 data center accelerator card, executing multiple logical computing units within modules and utilizing task-level pipeline structures between modules to handle sequencing reads step by step. This heterogeneous computing acceleration system enables the efficient execution of the entire readout process for DNA data storage. We benchmark the proposed system against a CPU-based software implementation under various error rates and coverages. The results indicate that under high-error, low-coverage conditions (error rate of 1.5% and coverage of 15×), the accelerator achieves a peak speedup of up to 373.1 times, enabling the readout of 59.4 MB of stored data in just 12.40 s. Overall, the accelerator delivers a speedup of two orders of magnitude. Our proposed heterogeneous computing acceleration strategy provides an efficient solution for large-scale DNA data readout. Full article
