Search Results (162)

Search Parameters:
Keywords = gate scheduling

48 pages, 1031 KB  
Review
The Effectiveness of Transcranial Direct Current Stimulation (tDCS) in Improving Performance in Soccer Players—A Scoping Review
by James Chmiel and Donata Kurpas
J. Clin. Med. 2026, 15(3), 1281; https://doi.org/10.3390/jcm15031281 - 5 Feb 2026
Abstract
Background/Objectives: Transcranial direct current stimulation (tDCS) is increasingly used by athletes, yet sport-performance-enhancement findings are mixed and often small, with outcomes depending on stimulation target, timing, and task demands. Aim: This scoping review mapped and synthesized the soccer-specific trial evidence to identify (i) which tDCS targets and application schedules have been tested in soccer players, (ii) which soccer-relevant outcomes show the most consistent immediate (minutes–hours) or training-mediated benefits, and (iii) where evidence gaps persist. Methods: We conducted a scoping review of clinical trials in footballers, following review best-practice guidance (PRISMA-informed) and a preregistered protocol. Searches (August 2025) spanned PubMed/MEDLINE, ResearchGate, Google Scholar, and Cochrane, using combinations of “football/soccer” and “tDCS/transcranial direct current stimulation,” with inclusion restricted to trials from 2008–2025. Dual independent screening was applied. Of 47 records identified, 21 studies met the criteria. Across these, the total N was 593 (predominantly male adolescents/young adults; wide range of levels). Results: Prefrontal protocols—most commonly left-dominant dorsolateral prefrontal cortex (DLPFC) (+F3/−F4, ~2 mA, ~20 min)—most consistently improved post-match recovery status/well-being (e.g., fatigue, sleep quality, muscle soreness, stress, mood), and when repeated and/or paired with practice, shortened decision times and promoted more efficient visual search. Effects on classic executive tests were inconsistent, and bilateral anodal DLPFC under fatigue increased risk-tolerant choices. 
Motor-cortex targeting (C3/C4/Cz) rarely changed rapid force–power performance after a single session—e.g., multiple well-controlled trials found no immediate CMJ gains—but when paired with multi-week training (core/lumbar stability, plyometrics, HIIT, sling), it augmented strength, jump height, sprint/agility, aerobic capacity, and task-relevant EMG. Autonomic markers (exercise HR, early HR recovery) showed time-dependent normalization without specific tDCS effects in single-session, randomized designs. In contrast, a season-long applied program that added prefrontal stimulation to standard recovery reported significantly reduced creatine kinase. Across studies, protocols and masking were athlete-friendly and rigorous (~2 mA for ~20 min; robust sham/blinding), with only mild, transient sensations reported and no serious adverse events. Conclusions: In soccer players, tDCS shows a qualified pattern of benefits that follows a specificity model: prefrontal stimulation can support post-match recovery status/well-being and decision efficiency, while M1-centered stimulation is most effective when coupled with structured training to bias neuromuscular adaptation. Effects are generally modest and heterogeneous; practitioners should treat tDCS as an adjunct, not a stand-alone enhancer, and align montage × task × timing while monitoring individual responses. Full article
(This article belongs to the Section Clinical Rehabilitation)
18 pages, 2458 KB  
Article
An Interpretable CPU Scheduling Method Based on a Multiscale Frequency-Domain Convolutional Transformer and a Dendritic Network
by Xiuwei Peng, Honghua Wang, Guohui Zhou, Jun Jiang, Hao Fang, Zhengxing Wu and Xiaohui Li
Electronics 2026, 15(3), 693; https://doi.org/10.3390/electronics15030693 - 5 Feb 2026
Abstract
In modern operating systems, CPU scheduling policy selection and evaluation still rely mainly on heuristic methods, especially at the single-processor level or the abstract ready-queue level, and there is still a lack of systematic modeling and interpretable analysis for complex workload patterns. Traditional approaches are easy to implement and respond quickly in specific scenarios, but they often fail to remain stable under dynamic workloads and high-dimensional features, which can harm generalization. In this work, we build a simulation dataset that covers five typical scheduling policies, redesign a deep learning framework for scheduling policy identification, and propose the MCFCTransformer-DD model. The model extends the standard Transformer with multiscale convolution, frequency-domain augmentation, and cross-attention to capture both low-frequency and high-frequency signals, learn local and global patterns, and model multivariate dependencies. We also introduce a Dendrite Network, or DD, into scheduling policy identification and decision support for the first time, and its gated dendritic structure provides a more transparent nonlinear decision boundary that reduces the black-box nature of deep models and helps mitigate overfitting. Experiments show that MCFCTransformer-DD achieves 94.50% accuracy, a 94.65% F1 score, and an AUROC of 1.00, which indicates strong policy identification performance and strong potential for decision support. Full article
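The abstract above treats scheduling policies as classes to be identified from workload behavior. As a minimal, hypothetical illustration of why policies are behaviorally distinguishable at all, the sketch below compares mean waiting time under two classic policies, FCFS and SJF, on invented burst times (the paper's five policies and its feature set are not enumerated here):

```python
# Illustrative sketch (not from the paper): two classic CPU scheduling
# policies, FCFS and SJF, compared on mean waiting time.
# Burst times are hypothetical; all jobs arrive at t = 0.

def fcfs_waits(bursts):
    """Waiting time of each job when served in arrival order."""
    waits, elapsed = [], 0
    for b in bursts:
        waits.append(elapsed)
        elapsed += b
    return waits

def sjf_waits(bursts):
    """Waiting time of each job when served shortest-first."""
    order = sorted(range(len(bursts)), key=lambda i: bursts[i])
    waits, elapsed = [0] * len(bursts), 0
    for i in order:
        waits[i] = elapsed
        elapsed += bursts[i]
    return waits

bursts = [8, 4, 9, 5]                      # hypothetical CPU bursts
mean_fcfs = sum(fcfs_waits(bursts)) / len(bursts)   # -> 10.25
mean_sjf = sum(sjf_waits(bursts)) / len(bursts)     # -> 7.5
```

With simultaneous arrivals, SJF provably minimizes mean waiting time; divergent signatures like this are what a policy classifier can learn from.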

23 pages, 5636 KB  
Article
Research on Interpretable Tourism Demand Forecasting Based on VSN–xLSTM Model
by Hanpo Hou and Haiying Wang
Systems 2026, 14(2), 146; https://doi.org/10.3390/systems14020146 - 30 Jan 2026
Abstract
To address the limitations of traditional tourism demand forecasting models in leveraging multi-source data and their lack of interpretability, this study proposes an integrated multi-data-driven interpretable forecasting framework incorporating historical visitor volumes, social media activities, holiday schedules, weather conditions, and seasonal indicators. This study develops a system-oriented tourism demand forecasting framework that integrates a Variable Selection Network (VSN) and an enhanced long short-term memory (xLSTM) architecture to jointly model and interpret multi-source demand drivers. The VSN module employs a dynamic feature weighting mechanism to automatically discern distribution characteristics and relevance variations across heterogeneous data sources, thereby assigning adaptive weights to input variables. The xLSTM model incorporates innovative exponential gating and matrix memory structures, enabling rapid adaptation to sudden tourist flow fluctuations while effectively capturing long-term cyclical dependencies. By combining VSN-derived feature importance weights with SHAP-based prediction attribution analysis, this framework offers dual-level interpretability—in both input feature selection and output explanation. Experimental results demonstrate that social media data significantly reflect tourist attention and travel intention and reveal distinctive demand-driving mechanisms for various types of tourism destinations. The study provides theoretical insights and empirical support for advancing tourism demand forecasting and management strategies. Full article
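The VSN described above assigns adaptive weights to heterogeneous input sources. As a schematic only, the weighting step can be pictured as a softmax over relevance scores; the scores below are invented stand-ins for the five input sources the abstract names, not the paper's learned values:

```python
# Schematic of variable-selection weighting (softmax over relevance scores).
# The paper's VSN learns these scores with a network; the values here are
# hypothetical and purely illustrative.
import math

def softmax(scores):
    """Normalize a dict of scores into weights that sum to 1."""
    m = max(scores.values())                         # for numerical stability
    exp = {k: math.exp(v - m) for k, v in scores.items()}
    total = sum(exp.values())
    return {k: v / total for k, v in exp.items()}

scores = {"visitors": 2.0, "social": 1.5, "holiday": 1.0,
          "weather": 0.5, "season": 0.5}             # hypothetical logits
weights = softmax(scores)                            # adaptive input weights
```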

22 pages, 3757 KB  
Article
Electric Vehicle Cluster Charging Scheduling Optimization: A Forecast-Driven Multi-Objective Reinforcement Learning Method
by Yi Zhao, Xian Jia, Shuanbin Tan, Yan Liang, Pengtao Wang and Yi Wang
Energies 2026, 19(3), 647; https://doi.org/10.3390/en19030647 - 27 Jan 2026
Abstract
The widespread adoption of electric vehicles (EVs) has posed significant challenges to the security of distribution grid loads. To address issues such as increased grid load fluctuations, rising user charging costs, and rapid load surges around midnight caused by uncoordinated nighttime charging of household electric vehicles in communities, this paper first models electric vehicle charging behavior as a Markov Decision Process (MDP). By improving the state-space sampling mechanism, a continuous space mapping and a priority mechanism are designed to transform the charging scheduling problem into a continuous decision-making framework while optimizing the dynamic adjustment between state and action spaces. On this basis, to achieve synergistic load forecasting and charging scheduling decisions, a forecast-augmented deep reinforcement learning method integrating Gated Recurrent Unit and Twin Delayed Deep Deterministic Policy Gradient (GRU-TD3) is proposed. This method constructs a multi-objective reward function that comprehensively considers time-of-use electricity pricing, load stability, and user demands. The method also applies a single-objective pre-training phase and a model-specific importance-sampling strategy to improve learning efficiency and policy stability. Its effectiveness is verified through extensive comparative and ablation validation. The results show that our method outperforms several benchmarks. Specifically, compared to the Deep Deterministic Policy Gradient (DDPG) and Particle Swarm Optimization (PSO) algorithms, it reduces user costs by 11.7% and the load standard deviation by 12.9%. In contrast to uncoordinated charging strategies, it achieves a 42.5% reduction in user costs and a 20.3% decrease in load standard deviation. Moreover, relative to single-objective cost optimization approaches, the proposed algorithm effectively suppresses short-term load growth rates and mitigates the “midnight peak” phenomenon. Full article
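A toy illustration of the cost component of the multi-objective reward described above, under hypothetical time-of-use prices: an uncoordinated vehicle charges immediately at plug-in (feeding the "midnight peak"), while a coordinated one fills the cheapest hours. The prices, charging window, and energy need below are all invented:

```python
# Hypothetical sketch of time-of-use charging cost, not the paper's GRU-TD3
# policy: greedy cheapest-hours scheduling vs. charging right at plug-in.

def charging_cost(hours, price):
    """Total cost of charging during the given hours."""
    return sum(price[h] for h in hours)

# Invented tariff (currency units/hour) between plug-in (22:00) and departure.
price = {22: 1.0, 23: 1.0, 0: 0.4, 1: 0.4, 2: 0.3, 3: 0.3, 4: 0.3, 5: 0.4}
window = list(price)          # candidate charging hours, in clock order
need = 4                      # hours of charging required

uncoordinated = window[:need]                                # charge immediately
coordinated = sorted(window, key=lambda h: price[h])[:need]  # cheapest hours

cost_uncoord = charging_cost(uncoordinated, price)
cost_coord = charging_cost(coordinated, price)
```

Shifting demand off the expensive evening hours both cuts user cost and flattens the load spike, which is the trade-off the reward function balances.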

45 pages, 1326 KB  
Article
Cross-Domain Deep Reinforcement Learning for Real-Time Resource Allocation in Transportation Hubs: From Airport Gates to Seaport Berths
by Zihao Zhang, Qingwei Zhong, Weijun Pan, Yi Ai and Qian Wang
Aerospace 2026, 13(1), 108; https://doi.org/10.3390/aerospace13010108 - 22 Jan 2026
Abstract
Efficient resource allocation is critical for transportation hub operations, yet current scheduling systems require substantial domain-specific customization when deployed across different facilities. This paper presents a domain-adaptive deep reinforcement learning (DADRL) framework that learns transferable optimization policies for dynamic resource allocation across structurally similar transportation scheduling problems. The framework integrates dual-level heterogeneous graph attention networks for separating constraint topology from domain-specific features, hypergraph-based constraint modeling for capturing high-order dependencies, and hierarchical policy decomposition that reduces computational complexity from O(mnT) to O(m+n+T). Evaluated on realistic simulators modeling airport gate assignment (Singapore Changi: 50 gates, 300–400 daily flights) and seaport berth allocation (Singapore Port: 40 berths, 80–120 daily vessels), DADRL achieves 87.3% resource utilization in airport operations and 86.3% in port operations, outperforming commercial solvers under strict real-time constraints (Gurobi-MIP with 300 s time limit: 85.1%) while operating 270 times faster (1.1 s versus 298 s per instance). Given unlimited time, Gurobi achieves provably optimal solutions, but DADRL reaches 98.7% of this optimum in 1.1 s, making it suitable for time-critical operational scenarios where exact solvers are computationally infeasible. Critically, policies trained exclusively on airport scenarios retain 92.4% performance when applied to ports without retraining, requiring only 800 adaptation steps compared to 13,200 for domain-specific training. The framework maintains 86.2% performance under operational disruptions and scales to problems three times larger than training instances with only 7% degradation. 
These results demonstrate that learned optimization principles can generalize across transportation scheduling problems sharing common constraint structures, enabling rapid deployment of AI-based scheduling systems across multi-modal transportation networks with minimal customization and reduced implementation costs. Full article
(This article belongs to the Special Issue Emerging Trends in Air Traffic Flow and Airport Operations Control)
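Underlying the DADRL framework above is a classic allocation problem: gates (or berths) are reusable resources whose occupancy intervals must not overlap. A minimal greedy baseline, not the paper's method, shows that structure with hypothetical flight times:

```python
# Minimal sketch of the gate-assignment structure (not the paper's DADRL
# policy): assign flights to the first gate free at their arrival time.
# Flight times are hypothetical.

def assign_gates(flights, n_gates):
    """flights: list of (arrival, departure). Returns {flight: gate} or None."""
    free_at = [0] * n_gates               # time each gate next becomes free
    assignment = {}
    for i, (arr, dep) in sorted(enumerate(flights), key=lambda x: x[1][0]):
        gate = next((g for g in range(n_gates) if free_at[g] <= arr), None)
        if gate is None:
            return None                   # infeasible with this many gates
        assignment[i] = gate
        free_at[gate] = dep
    return assignment

flights = [(0, 3), (1, 4), (2, 5), (3, 6)]   # (arrival, departure)
plan = assign_gates(flights, 3)              # feasible with 3 gates
```

Real instances add airline, size, and adjacency constraints; the paper's hypergraph constraint modeling and learned policy address exactly what this greedy baseline ignores.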

47 pages, 17315 KB  
Article
RNN Architecture-Based Short-Term Forecasting Framework for Rooftop PV Surplus to Enable Smart Energy Scheduling in Micro-Residential Communities
by Abdo Abdullah Ahmed Gassar, Mohammad Nazififard and Erwin Franquet
Buildings 2026, 16(2), 390; https://doi.org/10.3390/buildings16020390 - 17 Jan 2026
Abstract
With growing community awareness of greenhouse gas emissions and their environmental consequences, distributed rooftop photovoltaic (PV) systems have emerged as a sustainable energy alternative in residential settings. However, the high penetration of these systems without effective operational strategies poses significant challenges for local distribution grids. Specifically, the estimation of surplus energy production from these systems, closely linked to complex outdoor weather conditions and seasonal fluctuations, often lacks an accurate forecasting approach to effectively capture the temporal dynamics of system output during peak periods. In response, this study proposes a recurrent neural network (RNN)-based forecasting framework to predict rooftop PV surplus in the context of micro-residential communities over time horizons not exceeding 48 h. The framework includes standard RNN, long short-term memory (LSTM), bidirectional LSTM (BiLSTM), and gated recurrent unit (GRU) networks. In this context, the study employed estimated surplus energy datasets from six single-family detached houses, along with weather-related variables and seasonal patterns, to evaluate the framework’s effectiveness. Results demonstrated the significant effectiveness of all framework models in forecasting surplus energy across seasonal scenarios, with low MAPE values of up to 3.02% and 3.59% over 24-h and 48-h horizons, respectively. Simultaneously, BiLSTM models consistently demonstrated a higher capacity to capture surplus energy fluctuations during peak periods than their counterparts. Overall, the developed data-driven framework demonstrates potential to enable short-term smart energy scheduling in micro-residential communities, supporting electric vehicle charging from single-family detached houses through efficient rooftop PV systems. It also provides decision-making insights for evaluating renewable energy contributions in the residential sector. Full article
(This article belongs to the Section Building Energy, Physics, Environment, and Systems)
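The headline accuracy metric above is MAPE. A small sketch with invented surplus values shows how it is computed over a forecast horizon; following common PV practice, zero-actual (nighttime) hours are excluded here, though the paper's exact convention is not stated in the abstract:

```python
# Sketch of MAPE over a short horizon; surplus values (kWh) are invented.
# Hours with zero actual surplus are skipped to avoid division by zero,
# an assumption on our part, not necessarily the paper's convention.

def mape(actual, forecast):
    """Mean absolute percentage error over nonzero actuals, in percent."""
    pairs = [(a, f) for a, f in zip(actual, forecast) if a != 0]
    return 100.0 * sum(abs(a - f) / abs(a) for a, f in pairs) / len(pairs)

actual = [0, 0, 1.0, 2.0, 4.0, 2.0, 0, 0]    # hypothetical surplus profile
forecast = [0, 0, 1.1, 1.9, 3.8, 2.1, 0, 0]
err = mape(actual, forecast)                 # percent error across daylight hours
```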

19 pages, 1098 KB  
Article
Simulation-Based Evaluation of AI-Orchestrated Port–City Logistics
by Nistor Andrei
Urban Sci. 2026, 10(1), 58; https://doi.org/10.3390/urbansci10010058 - 17 Jan 2026
Abstract
AI technologies are increasingly applied to optimize operations in both port and urban logistics systems, yet integration across the full maritime city chain remains limited. The objective of this study is to assess, using a simulation-based experiment, the impact of an AI-orchestrated control policy on the performance of port–city logistics relative to a baseline scheduler. The study proposes an AI-orchestrated approach that connects autonomous ships, smart ports, central warehouses, and multimodal urban networks via a shared cloud control layer. This approach is designed to enable real-time, cross-domain coordination using federated sensing and adaptive control policies. To evaluate its impact, a simulation-based experiment was conducted comparing a traditional scheduler with an AI-orchestrated policy across 20 paired runs under identical conditions. The orchestrator dynamically coordinated container dispatching, vehicle assignment, and gate operations based on capacity-aware logic. Results show that the AI policy substantially reduced the total completion time, lowered truck idle time and estimated emissions, and improved system throughput and predictability without modifying physical resources. These findings support the expectation that integrated, data-driven decision-making can significantly enhance logistics performance and sustainability in port–city contexts. The study provides a replicable pathway from conceptual architecture to quantifiable evidence and lays the groundwork for future extensions involving learning controllers, richer environmental modeling, and real-world deployment in digitally connected logistics corridors. Full article
(This article belongs to the Special Issue Advances in Urban Planning and the Digitalization of City Management)

21 pages, 6454 KB  
Article
Probabilistic Photovoltaic Power Forecasting with Reliable Uncertainty Quantification via Multi-Scale Temporal–Spatial Attention and Conformalized Quantile Regression
by Guanghu Wang, Yan Zhou, Yan Yan, Zhihan Zhou, Zikang Yang, Litao Dai and Junpeng Huang
Sustainability 2026, 18(2), 739; https://doi.org/10.3390/su18020739 - 11 Jan 2026
Abstract
Accurate probabilistic forecasting of photovoltaic (PV) power generation is crucial for grid scheduling and renewable energy integration. However, existing approaches often produce prediction intervals with limited calibration accuracy, and the interdependence among meteorological variables is frequently overlooked. This study proposes a probabilistic forecasting framework based on a Multi-scale Temporal–Spatial Attention Quantile Regression Network (MTSA-QRN) and an adaptive calibration mechanism to enhance uncertainty quantification and ensure statistically reliable prediction intervals. The framework employs a dual-pathway architecture: a temporal pathway combining Temporal Convolutional Networks (TCN) and multi-head self-attention to capture hierarchical temporal dependencies, and a spatial pathway based on Graph Attention Networks (GAT) to model nonlinear meteorological correlations. A learnable gated fusion mechanism adaptively integrates temporal–spatial representations, and weather-adaptive modules enhance robustness under diverse atmospheric conditions. Multi-quantile prediction intervals are calibrated using conformalized quantile regression to ensure reliable uncertainty coverage. Experiments on a real-world PV dataset (15 min resolution) demonstrate that the proposed method offers more accurate and sharper uncertainty estimates than competitive benchmarks, supporting risk-aware operational decision-making in power systems. Quantitative evaluation on a real-world 40 MW photovoltaic plant demonstrates that the proposed MTSA-QRN achieves a CRPS of 0.0400 before calibration, representing an improvement of over 55% compared with representative deep learning baselines such as Quantile-GRU, Quantile-LSTM, and Quantile-Transformer. After adaptive calibration, the proposed method attains a reliable empirical coverage close to the nominal level (PICP90 = 0.9053), indicating effective uncertainty calibration. 
Although the calibrated prediction intervals become wider, the model maintains a competitive CRPS value (0.0453), striking a favorable balance between reliability and probabilistic accuracy. These results demonstrate the effectiveness of the proposed framework for reliable probabilistic photovoltaic power forecasting. Full article
(This article belongs to the Topic Sustainable Energy Systems)
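The calibration step described above follows the conformalized quantile regression recipe. A generic split-conformal sketch (not the paper's MTSA-QRN, and with toy numbers) shows how raw quantile intervals are widened by a conformity-score quantile so that empirical coverage approaches the nominal level:

```python
# Generic split-conformal calibration of quantile intervals (CQR-style).
# lo/hi are raw lower/upper quantile predictions on a calibration set;
# all numbers are toy values, not the paper's data.
import math

def cqr_margin(lo, hi, y_cal, alpha=0.1):
    """Margin q to add on both sides of [lo, hi] for ~(1-alpha) coverage."""
    # Conformity score: how far each target falls outside its raw interval.
    scores = sorted(max(l - y, y - h) for l, h, y in zip(lo, hi, y_cal))
    k = math.ceil((len(scores) + 1) * (1 - alpha)) - 1   # conformal rank
    return scores[min(k, len(scores) - 1)]

lo = [1.0, 2.0, 3.0, 4.0, 5.0]       # raw lower-quantile predictions
hi = [2.0, 3.0, 4.0, 5.0, 6.0]       # raw upper-quantile predictions
y = [1.5, 3.2, 2.9, 5.1, 5.5]        # calibration targets
q = cqr_margin(lo, hi, y, alpha=0.2)  # widen intervals to [lo - q, hi + q]
```

Widening intervals by q trades sharpness for reliability, which is exactly the PICP-versus-width balance the abstract reports after calibration.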

28 pages, 3293 KB  
Article
Assessment of Potential Predictors of Aortic Stenosis Severity Using ECG-Gated Multidetector CT in Patients with Bicuspid and Tricuspid Aortic Valves Prior to TAVI
by Piotr Machowiec, Piotr Przybylski and Elżbieta Czekajska-Chehab
J. Clin. Med. 2026, 15(2), 551; https://doi.org/10.3390/jcm15020551 - 9 Jan 2026
Abstract
Background/Objectives: The aim of this study was to evaluate the usefulness of selected predictive parameters obtainable from cardiac multidetector computed tomography for assessing the severity of aortic valve stenosis in patients scheduled for transcatheter aortic valve implantation (TAVI). Methods: A detailed retrospective analysis was performed on 105 patients with a bicuspid aortic valve (BAV), selected from a cohort of 1000 patients with BAV confirmed on ECG-gated CT, and on 105 patients with a tricuspid aortic valve (TAV) matched for sex and age. All patients included in both groups had significant aortic stenosis confirmed on transthoracic echocardiography. Results: Across the entire cohort, a trend toward higher aortic valve calcium scores was observed in patients with bicuspid compared to tricuspid aortic valves (4194.8 ± 2748.7 vs. 3335.0 ± 1618.8), although this difference did not reach statistical significance (p = 0.080). However, sex-stratified analysis showed higher calcium scores in males with BAV than with TAV (5596.8 ± 2936.6 vs. 4061.4 ± 1659.8, p = 0.002), with no significant difference observed among females (p > 0.05). Univariate regression analysis showed that the aortic valve calcium score was the strongest statistically significant predictor of aortic stenosis severity in both groups, with R2 = 0.224 for BAV and R2 = 0.479 for TAV. In the multiple regression model without interaction terms, the explanatory power increased to R2 = 0.280 for BAV and R2 = 0.495 for TAV. Conclusions: In patients scheduled for TAVI, linear regression models assess the severity of aortic stenosis more accurately than any individual predictive parameter obtainable from ECG-CT, with the aortic valve Agatston score emerging as the most reliable single CT-derived predictor of stenosis severity in both TAV and BAV subgroups. Full article
(This article belongs to the Special Issue Advances in Cardiovascular Computed Tomography (CT))

26 pages, 7097 KB  
Article
Two-Phase Distributed Genetic-Based Algorithm for Time-Aware Shaper Scheduling in Industrial Sensor Networks
by Ray-I Chang, Ting-Wei Hsu and Yen-Ting Chen
Sensors 2026, 26(2), 377; https://doi.org/10.3390/s26020377 - 6 Jan 2026
Abstract
Time-Sensitive Networking (TSN), particularly the Time-Aware Shaper (TAS) specified by IEEE 802.1Qbv, is critical for real-time communication in Industrial Sensor Networks (ISNs). However, many TAS scheduling approaches rely on centralized computation and can face scalability bottlenecks in large networks. In addition, global-only schedulers often generate fragmented Gate Control Lists (GCLs) that exceed per-port entry limits on resource-constrained switches, reducing deployability. This paper proposes a two-phase distributed genetic-based algorithm, 2PDGA, for TAS scheduling. Phase I runs a network-level genetic algorithm (GA) to select routing paths and release offsets and construct a conflict-free baseline schedule. Phase II performs per-switch local refinement to merge windows and enforce device-specific GCL caps with lightweight coordination. We evaluate 2PDGA on 1512 configurations (three topologies, 8–20 switches, and guard bands δ_gb ∈ {0, 100, 200} ns). At δ_gb = 0 ns, 2PDGA achieves 92.9% and 99.8% compliance at CAP@8 and CAP@16, respectively, while maintaining a median latency of 42.1 μs. Phase II reduces the average max-per-port GCL entries by 7.7%. These results indicate improved hardware deployability under strict GCL caps, supporting practical deployment in real-world Industry 4.0 applications. Full article
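Phase II of the algorithm above merges gate-open windows so each port's Gate Control List fits its entry cap. A minimal sketch of that merging idea, with hypothetical window times and none of the paper's coordination logic:

```python
# Sketch of GCL window merging in spirit (the exact Phase II rules are the
# paper's): overlapping or touching gate-open windows on one port collapse
# into single entries. Window times (ns) are hypothetical.

def merge_windows(windows):
    """Merge [start, end) windows that touch or overlap; returns sorted tuples."""
    merged = []
    for start, end in sorted(windows):
        if merged and start <= merged[-1][1]:        # touches/overlaps previous
            merged[-1][1] = max(merged[-1][1], end)
        else:
            merged.append([start, end])
    return [tuple(w) for w in merged]

windows = [(0, 100), (100, 250), (400, 500), (450, 600), (800, 900)]
gcl = merge_windows(windows)          # 5 fragmented entries become 3
fits_cap = len(gcl) <= 8              # e.g. the CAP@8 per-port limit
```

Fewer, wider windows cost some scheduling precision but keep the list deployable on switches with small per-port GCL tables.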

23 pages, 4379 KB  
Article
Hybrid Parallel Temporal–Spatial CNN-LSTM (HPTS-CL) for Optimized Indoor Environment Modeling in Sports Halls
by Ping Wang, Xiaolong Chen, Hongfeng Zhang, Cora Un In Wong and Bin Long
Buildings 2026, 16(1), 113; https://doi.org/10.3390/buildings16010113 - 26 Dec 2025
Abstract
We propose a Hybrid Parallel Temporal–Spatial CNN-LSTM (HPTS-CL) architecture for optimized indoor environment modeling in sports halls, addressing the computational and scalability challenges of high-resolution spatiotemporal data processing. The sports hall is partitioned into distinct zones, each processed by dedicated CNN branches to extract localized spatial features, while hierarchical LSTMs capture both short-term zone-specific dynamics and long-term inter-zone dependencies. The system integrates model and data parallelism to distribute workloads across specialized hardware, dynamically balanced to minimize computational bottlenecks. A gated fusion mechanism combines spatial and temporal features adaptively, enabling robust predictions of environmental parameters such as temperature and humidity. The proposed method replaces monolithic CNN-LSTM pipelines with a distributed framework, significantly improving efficiency without sacrificing accuracy. Furthermore, the architecture interfaces seamlessly with existing sensor networks and control systems, prioritizing critical zones through a latency-aware scheduler. Implemented on NVIDIA Jetson AGX Orin edge devices and Google Cloud TPU v4 pods, HPTS-CL demonstrates superior performance in real-time scenarios, leveraging lightweight EfficientNetV2-S for CNNs and IndRNN cells for LSTMs to mitigate gradient vanishing. Experimental results validate the system’s ability to handle large-scale, high-frequency sensor data while maintaining low inference latency, making it a practical solution for intelligent indoor environment optimization. The novelty lies in the hybrid parallelism strategy and hierarchical temporal modeling, which collectively advance the state of the art in distributed spatiotemporal deep learning. Full article
(This article belongs to the Section Building Energy, Physics, Environment, and Systems)
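The gated fusion mentioned above can be pictured, in its generic form, as a sigmoid gate interpolating between spatial and temporal features: g = σ(w_s·s + w_t·t + b), fused = g·s + (1−g)·t. The weights and feature values below are invented, not the paper's parameters:

```python
# Tiny numeric sketch of generic gated feature fusion (not the paper's
# learned HPTS-CL module). All weights and features are hypothetical.
import math

def gated_fusion(spatial, temporal, w_s, w_t, b):
    """Elementwise sigmoid-gated blend of two feature vectors."""
    fused = []
    for s, t in zip(spatial, temporal):
        g = 1.0 / (1.0 + math.exp(-(w_s * s + w_t * t + b)))  # gate in (0, 1)
        fused.append(g * s + (1.0 - g) * t)
    return fused

spatial = [0.5, -1.0]       # hypothetical zone-level CNN features
temporal = [0.2, 0.8]       # hypothetical LSTM features
out = gated_fusion(spatial, temporal, w_s=0.0, w_t=0.0, b=0.0)
```

With zero weights the gate sits at 0.5 and the blend is a plain average; training moves the gate toward whichever pathway is more informative per feature.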

24 pages, 3158 KB  
Article
Ultra-Short-Term Multi-Step Photovoltaic Power Forecasting Based on Similarity-Based Daily Clustering
by Yongcheng Jin, Zhichao Sun, Dongliang Lv, Weicheng Gao, Fengze Liu and Qinghua Yu
Energies 2026, 19(1), 29; https://doi.org/10.3390/en19010029 - 20 Dec 2025
Abstract
Photovoltaic (PV) power generation is inherently intermittent and volatile, complicating power system operation and control. Accurate forecasting is crucial for proactive grid responses and optimal energy resource scheduling. This study proposes a novel hybrid forecasting model that achieves high-precision PV power forecasting by integrating similar-day clustering, generating extreme weather samples, and optimizing the Bidirectional Temporal Convolutional Network (BiTCN) and Bidirectional Gated Recurrent Unit (BiGRU) model via the Animated Oat Optimization (AOO) algorithm. The proposed method outperforms other models in the three evaluation metrics of mean absolute error (MAE), root mean square error (RMSE), and coefficient of determination (R2). The innovations lie in the integration of similar-day clustering with deep learning and the application of AOO for hyperparameter optimization, which significantly enhances forecasting accuracy and robustness. Full article
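The similar-day idea above can be pictured, in a drastically simplified form, as matching a target day's profile to the closest historical profile and forecasting from that group. The paper clusters days rather than matching single neighbors, and the hourly profiles below are invented:

```python
# Nearest-profile sketch of similar-day selection (a simplification of the
# paper's clustering). Hourly PV profiles are hypothetical.

def closest_day(history, today):
    """Return the label of the historical profile nearest to today's."""
    def dist(profile):
        return sum((a - b) ** 2 for a, b in zip(profile, today)) ** 0.5
    return min(history, key=lambda name: dist(history[name]))

history = {
    "sunny":  [0, 2, 5, 6, 5, 2],
    "cloudy": [0, 1, 2, 3, 2, 1],
    "rainy":  [0, 0, 1, 1, 1, 0],
}
label = closest_day(history, [0, 1, 3, 3, 2, 1])   # matches the cloudy regime
```

Training a separate forecaster per regime (here, per label) is what lets the hybrid model specialize on the distinct dynamics of each weather type.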

17 pages, 4543 KB  
Article
Research on Joint Regulation Strategy of Water Conservancy Project Group in the Multi-Branch Channels of the Ganjiang River Tail for Coping with Dry Events
by Yang Xia, Yue Liu, Zhichao Wang, Zhiwen Huang, Wensun You and Taotao Zhang
Water 2026, 18(1), 13; https://doi.org/10.3390/w18010013 - 19 Dec 2025
Abstract
Low water levels and uneven flow distribution in the multi-branch channels at the tail of the Ganjiang River (GJRT) during the dry season have long affected the local water supply, navigation, and aquatic ecological environment. In recent years, water conservancy projects have been built on each branch of the multi-branch channels at the GJRT, and how to operate this project group jointly so that the water-level and discharge requirements of each branch are met is an urgent problem. This paper analyzes the hydrodynamic process in the multi-branch channels without the water conservancy projects, and its impact on water supply, navigation, and ecology, through hydrological data analysis and numerical simulation. Based on numerical experiments on joint regulation of the water conservancy project group, a multi-objective regulation strategy is proposed to meet the water-level and discharge targets of each branch. The results indicate that the discharge at the GJRT decreases continuously from 1 September, whereas, owing to the backwater (jacking) effect of Poyang Lake, the water level does not plunge until 1 October, later than the decrease in discharge. This mismatch between water level and discharge makes it difficult to meet the regional water demand. The optimal time to initiate regulation is 1 October, and the target water level at Waizhou Station, located upstream of the Ganjiang River tail, is 15.5 m. When the water level in front of each branch gate is uniform and exceeds 15.5 m, the water level at Waizhou Station satisfies the requirement; however, the discharge of each branch does not meet the demand.
In contrast to a regulation strategy that maintains the same water level in front of each gate, a strategy with different pre-gate water levels can effectively adjust the diversion ratio and fulfill the discharge demand of each branch at the tail of the Ganjiang River. Full article
(This article belongs to the Special Issue Optimization–Simulation Modeling of Sustainable Water Resource)
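The idea of setting different pre-gate water levels to hit per-branch discharge targets can be illustrated with a toy calculation. This is not the paper's hydrodynamic model: the linear rating curve Q_i = c_i · (h_i − h0_i) and all coefficients and demands below are hypothetical.

```python
# Toy illustration only (not the paper's hydrodynamic model): suppose each
# branch's discharge follows a hypothetical linear rating curve
#   Q_i = c_i * (h_i - h0_i),
# where h_i is the pre-gate water level. Inverting the curve gives the
# distinct per-gate target level needed to meet each branch's demand.
def required_gate_level(q_demand, c, h0):
    return h0 + q_demand / c

# hypothetical branch coefficients (c in m^3/s per m, h0 in m) and demands (m^3/s)
branches = {"north": (120.0, 13.0), "middle": (150.0, 12.5), "south": (90.0, 13.2)}
demand = {"north": 300.0, "middle": 480.0, "south": 180.0}
levels = {name: required_gate_level(q, *branches[name]) for name, q in demand.items()}
```

Because the branches differ in geometry, the required pre-gate levels differ too, which is why a uniform-level strategy cannot satisfy every branch's discharge at once.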
31 pages, 1771 KB  
Article
Forecasting Energy Demand in Quicklime Manufacturing: A Data-Driven Approach
by Jersson X. Leon-Medina, John Erick Fonseca Gonzalez, Nataly Yohana Callejas Rodriguez, Mario Eduardo González Niño, Saúl Andrés Hernández Moreno, Wilman Alonso Pineda-Munoz, Claudia Patricia Siachoque Celys, Bernardo Umbarila Suarez and Francesc Pozo
Sensors 2025, 25(24), 7632; https://doi.org/10.3390/s25247632 - 16 Dec 2025
Abstract
This study presents a deep learning-based framework for forecasting energy demand in a quicklime production company, aiming to enhance operational efficiency and enable data-driven decision-making for industrial scalability. Using one year of real electricity consumption data, the methodology integrates temporal and operational variables—such as load profile, active power, shift indicators, and production-related proxies—to capture the dynamics of energy usage throughout the manufacturing process. Several neural network architectures, including Long Short-Term Memory (LSTM), Gated Recurrent Unit (GRU), and Conv1D models, were trained and compared to predict short-term power demand with 10-min resolution. Among these, the GRU model achieved the highest predictive accuracy, with a best performance of RMSE = 2.18 kW, MAE = 0.49 kW, and SMAPE = 3.64% on the test set. The resulting forecasts support cost-efficient scheduling under time-of-use tariffs and provide valuable insights for infrastructure planning, capacity management, and sustainability optimization in energy-intensive industries. Full article
(This article belongs to the Section Fault Diagnosis & Sensors)
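The gated recurrent unit behind the best-performing model can be sketched in a few lines of numpy. This is illustrative only: the hidden size, weight scale, and the single-feature input window below are assumptions, not the paper's configuration.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x_t, h_prev, p):
    """One GRU step: update gate z, reset gate r, candidate state h_tilde,
    then a gated blend of the previous and candidate hidden states."""
    Wz, Uz, bz, Wr, Ur, br, Wh, Uh, bh = p
    z = sigmoid(x_t @ Wz + h_prev @ Uz + bz)
    r = sigmoid(x_t @ Wr + h_prev @ Ur + br)
    h_tilde = np.tanh(x_t @ Wh + (r * h_prev) @ Uh + bh)
    return (1 - z) * h_prev + z * h_tilde

def make_params(d_in, d_hid, rng):
    """Random weights for the z, r, and candidate gates (illustrative scale)."""
    p = []
    for _ in range(3):
        p += [rng.normal(0, 0.3, (d_in, d_hid)),
              rng.normal(0, 0.3, (d_hid, d_hid)),
              np.zeros(d_hid)]
    return p

def gru_forecast(window, p, w_out, b_out):
    """Run the GRU over a window of past readings and map the final
    hidden state to a next-step prediction."""
    h = np.zeros(p[0].shape[1])
    for x_t in window:
        h = gru_step(np.atleast_1d(x_t), h, p)
    return float(h @ w_out + b_out)

rng = np.random.default_rng(1)
params = make_params(1, 8, rng)
w_out, b_out = rng.normal(0, 0.3, 8), 0.0
window = [3.1, 3.4, 3.0, 2.9, 3.2]  # hypothetical 10-min load readings (kW scale)
pred = gru_forecast(window, params, w_out, b_out)
```

In practice such a model would be trained (e.g. in Keras or PyTorch) on the historical 10-min consumption series; the sketch only shows the gating mechanism that lets the GRU retain or overwrite its memory of recent load.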

29 pages, 539 KB  
Article
FedRegNAS: Regime-Aware Federated Neural Architecture Search for Privacy-Preserving Stock Price Forecasting
by Zizhen Chen, Haobo Zhang, Shiwen Wang and Junming Chen
Electronics 2025, 14(24), 4902; https://doi.org/10.3390/electronics14244902 - 12 Dec 2025
Abstract
Financial time series are heterogeneous, nonstationary, and dispersed across institutions that cannot share raw data. While federated learning enables collaborative modeling under privacy constraints, fixed architectures struggle to accommodate cross-market drift and device-resource diversity; conversely, existing neural architecture search techniques presume centralized data and typically ignore communication, latency, and privacy budgets. This paper introduces FedRegNAS, a regime-aware federated NAS framework that jointly optimizes forecasting accuracy, communication cost, and on-device latency under user-level (ε,δ)-differential privacy. FedRegNAS trains a shared temporal supernet composed of candidate operators (dilated temporal convolutions, gated recurrent units, and attention blocks) with regime-conditioned gating and lightweight market-aware personalization. Clients perform differentiable architecture updates locally via Gumbel-Softmax and mirror descent; the server aggregates architecture distributions through Dirichlet barycenters with participation-weighted trust, while model weights are combined by adaptive, staleness-robust federated averaging. A risk-sensitive objective emphasizes downside errors and integrates transaction-cost-aware profit terms. We further inject calibrated noise into architecture gradients to decouple privacy leakage from weight updates and schedule search-to-train phases to reduce communication. Across three real-world equity datasets, FedRegNAS improves directional accuracy by 3–7 percentage points and Sharpe ratio by 18–32%. Ablations highlight the importance of regime gating and barycentric aggregation, and analyses outline convergence of the architecture mirror-descent under standard smoothness assumptions. FedRegNAS yields adaptive, privacy-aware architectures that translate into materially better trading-relevant forecasts without centralizing data. Full article
(This article belongs to the Special Issue Security and Privacy in Distributed Machine Learning)
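The differentiable architecture updates rely on the Gumbel-Softmax relaxation: each supernet edge holds logits over candidate operators, and a temperature-controlled soft sample lets gradients flow through the discrete choice. A minimal sketch follows; the candidate-operator names and logit values are chosen for illustration, not taken from the paper.

```python
import numpy as np

def gumbel_softmax(logits, tau=1.0, rng=None):
    """Soft, differentiable sample over candidate operators: perturb the
    architecture logits with Gumbel noise, then apply a softmax at
    temperature tau (lower tau -> closer to a one-hot choice)."""
    if rng is None:
        rng = np.random.default_rng(0)
    g = -np.log(-np.log(rng.uniform(1e-9, 1.0, size=np.shape(logits))))
    y = (np.asarray(logits) + g) / tau
    y = np.exp(y - y.max())  # stable softmax
    return y / y.sum()

# hypothetical candidate set and architecture logits for one supernet edge
ops = ["dilated_tcn", "gru", "attention"]
alpha = np.array([1.2, 0.3, -0.5])
weights = gumbel_softmax(alpha, tau=0.5)
mix = {op: w for op, w in zip(ops, weights)}
```

Each client would mix the candidate operators' outputs with these weights during the search phase; annealing tau toward zero gradually commits the edge to a single operator.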
