Search Results (1,075)

Search Parameters:
Keywords = adaptive learning rate method

17 pages, 1621 KB  
Article
Reinforcement Learning-Based Optimization of Environmental Control Systems in Battery Energy Storage Rooms
by So-Yeon Park, Deun-Chan Kim and Jun-Ho Bang
Energies 2026, 19(2), 516; https://doi.org/10.3390/en19020516 - 20 Jan 2026
Abstract
This study proposes a reinforcement learning (RL)-based optimization framework for the environmental control system of battery rooms in Energy Storage Systems (ESS). Conventional rule-based air-conditioning strategies are unable to adapt to real-time temperature and humidity fluctuations, often leading to excessive energy consumption or insufficient thermal protection. To overcome these limitations, both value-based (DQN, Double DQN, Dueling DQN) and policy-based (Policy Gradient, PPO, TRPO) RL algorithms are implemented and systematically compared. The algorithms are trained and evaluated using one year of real ESS operational data and corresponding meteorological data sampled at 15-min intervals. Performance is assessed in terms of convergence speed, learning stability, and cooling-energy consumption. The experimental results show that the DQN algorithm reduces time-averaged cooling power consumption by 46.5% compared to conventional rule-based control, while maintaining temperature, humidity, and dew-point constraint violation rates below 1% throughout the testing period. Among the policy-based methods, the Policy Gradient algorithm demonstrates competitive energy-saving performance but requires longer training time and exhibits higher reward variance. These findings confirm that RL-based control can effectively adapt to dynamic environmental conditions, thereby improving both energy efficiency and operational safety in ESS battery rooms. The proposed framework offers a practical and scalable solution for intelligent thermal management in ESS facilities. Full article
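
Not from the paper: as a pointer to the kind of value-based update the abstract compares, here is a minimal PyTorch sketch of a Double DQN target and gradient step, with an assumed 3-dimensional room state (temperature, humidity, dew point) and a small discrete set of cooling setpoints.

```python
# Minimal Double DQN update sketch (illustrative only; state/action layout is assumed,
# not taken from the paper). Requires: pip install torch
import torch
import torch.nn as nn

STATE_DIM = 3     # e.g. room temperature, humidity, dew point (assumed)
N_ACTIONS = 5     # e.g. discrete cooling setpoints (assumed)
GAMMA = 0.99

def make_q_net():
    return nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(),
                         nn.Linear(64, 64), nn.ReLU(),
                         nn.Linear(64, N_ACTIONS))

online, target = make_q_net(), make_q_net()
target.load_state_dict(online.state_dict())
opt = torch.optim.Adam(online.parameters(), lr=1e-3)

def double_dqn_step(s, a, r, s_next, done):
    """One gradient step on a batch of transitions (s, a, r, s', done)."""
    q_sa = online(s).gather(1, a.unsqueeze(1)).squeeze(1)          # Q(s, a)
    with torch.no_grad():
        best_a = online(s_next).argmax(dim=1, keepdim=True)        # action chosen by online net
        q_next = target(s_next).gather(1, best_a).squeeze(1)       # value read from target net
        y = r + GAMMA * (1.0 - done) * q_next
    loss = nn.functional.smooth_l1_loss(q_sa, y)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# Toy batch of 8 random transitions
s, s2 = torch.randn(8, STATE_DIM), torch.randn(8, STATE_DIM)
a = torch.randint(0, N_ACTIONS, (8,))
r, d = torch.randn(8), torch.zeros(8)
print(double_dqn_step(s, a, r, s2, d))
```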

20 pages, 3262 KB  
Article
Glass Fall-Offs Detection for Glass Insulated Terminals via a Coarse-to-Fine Machine-Learning Framework
by Weibo Li, Bingxun Zeng, Weibin Li, Nian Cai, Yinghong Zhou, Shuai Zhou and Hao Xia
Micromachines 2026, 17(1), 128; https://doi.org/10.3390/mi17010128 - 19 Jan 2026
Abstract
Glass-insulated terminals (GITs) are widely used in high-reliability microelectronic systems, where glass fall-offs in the sealing region may seriously degrade the reliability of the microelectronic component and further degrade the device reliability. Automatic inspection of such defects is challenging due to strong light reflection, irregular defect appearances, and limited defective samples. To address these issues, a coarse-to-fine machine-learning framework is proposed for glass fall-off detection in GIT images. By exploiting the circular-ring geometric prior of GITs, an adaptive sector partition scheme is introduced to divide the region of interest into sectors. Four categories of sector features, including color statistics, gray-level variations, reflective properties, and gradient distributions, are designed for coarse classification using a gradient boosting decision tree (GBDT). Furthermore, a sector neighbor (SN) feature vector is constructed from adjacent sectors to enhance fine classification. Experiments on real industrial GIT images show that the proposed method outperforms several representative inspection approaches, achieving an average IoU of 96.85%, an F1-score of 0.984, a pixel-level false alarm rate of 0.55%, and a pixel-level missed alarm rate of 35.62% at a practical inspection speed of 32.18 s per image. Full article
(This article belongs to the Special Issue Emerging Technologies and Applications for Semiconductor Industry)
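
For orientation only, a scikit-learn sketch of the coarse stage: a gradient boosting decision tree classifying per-sector feature vectors. The 12-dimensional features and labels below are synthetic stand-ins, not the paper's colour/gray-level/reflection/gradient descriptors.

```python
# Illustrative coarse-stage sector classification with a GBDT (scikit-learn).
# Synthetic features/labels only; nothing here reproduces the actual pipeline.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score

rng = np.random.default_rng(0)
n_sectors = 2000
# Assumed 4 feature groups x 3 statistics each = 12-D sector feature vector
X = rng.normal(size=(n_sectors, 12))
# Synthetic labels: 1 = sector contains a glass fall-off, 0 = intact
y = (X[:, 0] + 0.5 * X[:, 4] + rng.normal(scale=0.5, size=n_sectors) > 1.2).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
gbdt = GradientBoostingClassifier(n_estimators=200, learning_rate=0.05, max_depth=3)
gbdt.fit(X_tr, y_tr)
print("coarse-stage F1:", round(f1_score(y_te, gbdt.predict(X_te)), 3))
```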

20 pages, 434 KB  
Article
Hausdorff Difference-Based Adam Optimizer for Image Classification
by Jing Jian, Zhe Gao and Haibin Zhang
Mathematics 2026, 14(2), 329; https://doi.org/10.3390/math14020329 - 19 Jan 2026
Abstract
To address the limitations of fixed-order update mechanisms in convolutional neural network parameter training, an adaptive parameter training method based on the Hausdorff difference is proposed in this paper. By deriving a Hausdorff difference formula that is suitable for discrete training processes and embedding it into the adaptive moment estimation framework, a generalized Hausdorff difference-based Adam algorithm (HAdam) is constructed. This algorithm introduces an order parameter to achieve joint dynamic control of the momentum intensity and the effective learning rate. Through theoretical analysis and numerical simulations, the influence of the order parameter and its value range on algorithm stability, parameter evolution trajectories, and convergence speed is investigated, and two adaptive order adjustment strategies based on iteration cycles and gradient feedback are designed. The experimental results on the Fashion-MNIST and CIFAR-10 benchmark datasets show that, compared with the standard Adam algorithm, the HAdam algorithm exhibits clear advantages in both convergence efficiency and recognition accuracy. Full article
(This article belongs to the Section E1: Mathematics and Computer Science)
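
The Hausdorff difference formula itself is not given in the abstract, so the sketch below is plain Adam in NumPy with a placeholder order parameter `alpha` marking where a joint modulation of momentum intensity and effective learning rate would enter; it is not the HAdam update.

```python
# Plain Adam in NumPy, annotated with where an order parameter could enter.
# The actual Hausdorff-difference update of HAdam is not reproduced here; `alpha`
# is only a placeholder scaling shown for orientation.
import numpy as np

def adam_like_step(theta, grad, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999,
                   eps=1e-8, alpha=1.0):
    # In standard Adam alpha = 1. A Hausdorff/fractional-type order parameter
    # would modulate both the momentum accumulation and the effective step size.
    m = beta1 * m + (1 - beta1) * grad                  # first moment
    v = beta2 * v + (1 - beta2) * grad ** 2             # second moment
    m_hat = m / (1 - beta1 ** t)
    v_hat = v / (1 - beta2 ** t)
    step = alpha * lr * m_hat / (np.sqrt(v_hat) + eps)  # alpha scales the effective rate
    return theta - step, m, v

# Toy quadratic: minimise f(theta) = ||theta||^2 / 2, so the gradient is theta itself
theta, m, v = np.array([3.0, -2.0]), np.zeros(2), np.zeros(2)
for t in range(1, 2001):
    theta, m, v = adam_like_step(theta, theta.copy(), m, v, t, lr=0.05)
print(theta)  # should be close to [0, 0]
```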

28 pages, 12374 KB  
Article
A Distributed Instance Selection Algorithm Based on Cognitive Reasoning for Regression Tasks
by Linzi Yin, Wendi Cai, Zhanqi Li and Xiaochao Hou
Appl. Sci. 2026, 16(2), 913; https://doi.org/10.3390/app16020913 - 15 Jan 2026
Viewed by 80
Abstract
Instance selection is a critical preprocessing technique for enhancing data quality and improving machine learning model efficiency. However, existing algorithms for regression tasks face a fundamental trade-off: non-heuristic methods offer high precision but suffer from sequential dependencies that hinder parallelization, while heuristic methods support parallelization but often yield coarse-grained results susceptible to local optima. To address these challenges, we propose CRDISA, a novel distributed instance selection algorithm driven by a formalized cognitive reasoning logic. Unlike traditional approaches that evaluate subsets, CRDISA transforms each instance into an independent “Instance Expert” capable of reasoning about the global data distribution through a unique difference knowledge base. For regression tasks with continuous outputs, we introduce a soft partitioning strategy to define adaptive error boundaries and a bidirectional voting mechanism to robustly identify high-quality instances. Although the fine-grained reasoning implies high computational complexity, we implement CRDISA on Apache Spark using an optimized broadcast mechanism. This architecture provides linear scalability in wall-clock time, enabling scalable processing without sacrificing theoretical rigor. Experiments on 22 datasets demonstrate that CRDISA achieves an average compression rate of 31.7% while maintaining predictive accuracy (R2=0.681) comparable to or better than state-of-the-art methods, proving its superiority in balancing selection granularity and distributed efficiency. Full article
(This article belongs to the Special Issue Big Data Driven Machine Learning and Deep Learning)
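
As a loose illustration of per-instance selection for regression (not the CRDISA reasoning logic), the toy NumPy filter below keeps an instance when its nearest neighbours' targets fall inside an adaptive error band; in a Spark implementation this per-instance decision would be the map step run against a broadcast copy of the training set.

```python
# Toy per-instance selection for regression (NOT the CRDISA logic): each instance
# "votes" to keep itself if its k nearest neighbours' targets lie within a crude
# adaptive error band.
import numpy as np

def select_instances(X, y, k=10, band_scale=1.5):
    keep = np.zeros(len(X), dtype=bool)
    band = band_scale * np.std(y)                     # crude "soft" error boundary
    for i in range(len(X)):
        d = np.linalg.norm(X - X[i], axis=1)
        nn = np.argsort(d)[1:k + 1]                   # k nearest neighbours (excl. self)
        keep[i] = np.abs(y[nn] - y[i]).mean() < band  # neighbours roughly agree -> keep
    return keep

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(500, 3))
y = X[:, 0] ** 2 + 0.1 * rng.normal(size=500)
mask = select_instances(X, y)
print(f"kept {mask.sum()} of {len(X)} instances ({100 * mask.mean():.1f}%)")
```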

25 pages, 2650 KB  
Article
Energy Saving Potential and Machine Learning-Based Prediction of Compressed Air Leakages in Sustainable Manufacturing
by Sinan Kapan
Sustainability 2026, 18(2), 904; https://doi.org/10.3390/su18020904 - 15 Jan 2026
Viewed by 137
Abstract
Compressed air systems are widely used in industry, and air leaks that occur over time lead to significant and unnecessary energy losses. This study aims to quantify the energy-saving potential of compressed air leaks in a manufacturing plant and to develop machine learning (ML) regression models for sustainable leak management. A total of 230 leak points were identified over three measurement periods using an ultrasonic device. Using the measured acoustic emission level (dB) and probe distance (x) as inputs, the leak flow rate, annual energy-saving potential, cost loss, and carbon footprint were calculated. As a result of the repairs, energy consumption improved by 8% compared to the initial state. Three regression models were compared to predict leak flow: Linear Regression, Bagging Regression Trees, and Multivariate Adaptive Regression Splines. Among the models evaluated, the Bagging Regression Trees model demonstrated the best prediction performance, achieving an R² value of 0.846, a mean squared error (MSE) of 389.85 (L/min)², and a mean absolute error (MAE) of 12.13 L/min in the independent test set. Compared to previous regression-based approaches, the proposed ML method contributes to sustainable production strategies by linking leakage prediction to energy performance indicators. Full article
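
A scikit-learn sketch of the regression comparison on synthetic leak data (acoustic level and probe distance as inputs); the MARS model is omitted because it needs a third-party package, and all numbers below are invented.

```python
# Sketch of the leak-flow regression comparison on synthetic data: acoustic level
# (dB) and probe distance (m) as inputs, leak flow (L/min) as target.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import BaggingRegressor
from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_squared_error, mean_absolute_error

rng = np.random.default_rng(0)
db = rng.uniform(40, 90, 300)            # measured acoustic emission level (dB)
dist = rng.uniform(0.1, 3.0, 300)        # probe distance (m)
flow = 0.8 * db - 10 * dist + rng.normal(scale=8, size=300)  # synthetic leak flow
X = np.column_stack([db, dist])

X_tr, X_te, y_tr, y_te = train_test_split(X, flow, test_size=0.3, random_state=0)
models = {
    "Linear Regression": LinearRegression(),
    "Bagging Regression Trees": BaggingRegressor(DecisionTreeRegressor(),
                                                 n_estimators=100, random_state=0),
}
for name, model in models.items():
    pred = model.fit(X_tr, y_tr).predict(X_te)
    print(name,
          "R2:", round(r2_score(y_te, pred), 3),
          "MSE:", round(mean_squared_error(y_te, pred), 1),
          "MAE:", round(mean_absolute_error(y_te, pred), 1))
```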

26 pages, 10192 KB  
Article
Multi-Robot Task Allocation with Spatiotemporal Constraints via Edge-Enhanced Attention Networks
by Yixiang Hu, Daxue Liu, Jinhong Li, Junxiang Li and Tao Wu
Appl. Sci. 2026, 16(2), 904; https://doi.org/10.3390/app16020904 - 15 Jan 2026
Viewed by 92
Abstract
Multi-Robot Task Allocation (MRTA) with spatiotemporal constraints presents significant challenges in environmental adaptability. Existing learning-based methods often overlook environmental spatial constraints, leading to spatial information distortion. To address this, we formulate the problem as an asynchronous Markov Decision Process over a directed heterogeneous graph and propose a novel heterogeneous graph neural network named the Edge-Enhanced Attention Network (E2AN). This network integrates a specialized encoder, the Edge-Enhanced Heterogeneous Graph Attention Network (E2HGAT), with an attention-based decoder. By incorporating edge attributes to effectively characterize path costs under spatial constraints, E2HGAT corrects spatial distortion. Furthermore, our approach supports flexible extension to diverse payload scenarios via node attribute adaptation. Extensive experiments conducted in simulated environments with obstructed maps demonstrate that the proposed method outperforms baseline algorithms in task success rate. Remarkably, the model maintains its advantages in generalization tests on unseen maps as well as in scalability tests across varying problem sizes. Ablation studies further validate the critical role of the proposed encoder in capturing spatiotemporal dependencies. Additionally, real-time performance analysis confirms the method’s feasibility for online deployment. Overall, this study offers an effective solution for MRTA problems with complex constraints. Full article
(This article belongs to the Special Issue Motion Control for Robots and Automation)
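
Not the E2AN/E2HGAT architecture: the plain-PyTorch sketch below only shows the core idea of folding edge attributes (e.g., an obstacle-aware path cost) into the attention logits between a robot node and candidate task nodes.

```python
# Minimal edge-aware attention sketch in plain PyTorch (illustrative, not E2AN):
# the logit for each robot-task pair includes a learned projection of the edge
# attribute, so scoring is not based on node embeddings alone.
import torch
import torch.nn as nn

class EdgeAwareAttention(nn.Module):
    def __init__(self, node_dim=16, edge_dim=2, hid=32):
        super().__init__()
        self.q = nn.Linear(node_dim, hid)
        self.k = nn.Linear(node_dim, hid)
        self.e = nn.Linear(edge_dim, hid)   # edge attributes enter the logit here
        self.v = nn.Linear(node_dim, hid)

    def forward(self, robot, tasks, edges):
        # robot: (node_dim,), tasks: (T, node_dim), edges: (T, edge_dim)
        logits = (self.q(robot) * (self.k(tasks) + self.e(edges))).sum(-1)
        logits = logits / self.q.out_features ** 0.5
        attn = torch.softmax(logits, dim=0)                     # (T,) task weights
        context = (attn.unsqueeze(1) * self.v(tasks)).sum(0)    # aggregated task info
        return attn, context

att = EdgeAwareAttention()
robot, tasks, edges = torch.randn(16), torch.randn(5, 16), torch.randn(5, 2)
weights, context = att(robot, tasks, edges)
print(weights)   # selection weights shaped by both node and edge features
```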

27 pages, 409 KB  
Article
Adaptive e-Learning for Number Theory: A Mixed Methods Evaluation of Usability, Perceived Learning Outcomes, and Engagement
by Péter Négyesi, Ilona Oláhné Téglási, Tünde Lengyelné Molnár and Réka Racsko
Educ. Sci. 2026, 16(1), 127; https://doi.org/10.3390/educsci16010127 - 14 Jan 2026
Viewed by 147
Abstract
This study developed and evaluated an adaptive e-learning environment for selected number theory topics using a mixed-methods research design, conducted over an eleven-month period across secondary and early tertiary education contexts. The evaluation focused on three primary outcome domains: (1) learning-related outcomes (problem-solving accuracy and task success rate), (2) learner engagement and activity indicators (daily logins and tasks completed per day), and (3) system usability, assessed according to Jakob Nielsen’s usability dimensions. Quantitative data were collected through student and teacher questionnaires (N = 264 students; N = 52 teachers) and large-scale logfile analytics comprising more than 825,000 recorded system interactions. Qualitative feedback from students and teachers complemented the quantitative analyses. The results indicate statistically significant increases in learner activity, task completion rates, and problem-solving success following the introduction of the adaptive system, as demonstrated by inferential statistical analyses with confidence intervals. Post-use evaluations further indicated high levels of learner motivation and self-confidence, along with positive perceptions of system usability. Teachers evaluated the system positively in terms of learnability, efficiency, and instructional integration. Logfile analyses also revealed sustained growth in daily engagement and task success over time. Overall, the findings suggest that adaptive e-learning environments can effectively support engagement, usability, and learning-related performance in number theory education, although further research is required to examine the sustainability of learning-related outcomes over extended periods and to further refine error-handling mechanisms. Full article

17 pages, 3529 KB  
Article
Study on Multimodal Sensor Fusion for Heart Rate Estimation Using BCG and PPG Signals
by Jisheng Xing, Xin Fang, Jing Bai, Luyao Cui, Feng Zhang and Yu Xu
Sensors 2026, 26(2), 548; https://doi.org/10.3390/s26020548 - 14 Jan 2026
Viewed by 156
Abstract
Continuous heart rate monitoring is crucial for early cardiovascular disease detection. To overcome the discomfort and limitations of ECG in home settings, we propose a multimodal temporal fusion network (MM-TFNet) that integrates ballistocardiography (BCG) and photoplethysmography (PPG) signals. The network extracts temporal features from BCG and PPG signals through temporal convolutional networks (TCNs) and bidirectional long short-term memory networks (BiLSTMs), respectively, achieving cross-modal dynamic fusion at the feature level. First, bimodal features are projected into a unified dimensional space through fully connected layers. Subsequently, a cross-modal attention weight matrix is constructed for adaptive learning of the complementary correlation between BCG mechanical vibration and PPG volumetric flow features. Combined with dynamic focusing on key heartbeat waveforms through multi-head self-attention (MHSA), the model’s robustness under dynamic activity states is significantly enhanced. Experimental validation using a publicly available BCG-PPG-ECG simultaneous acquisition dataset comprising 40 subjects demonstrates that the model achieves excellent performance with a mean absolute error (MAE) of 0.88 BPM in heart rate prediction tasks, outperforming current mainstream deep learning methods. This study provides theoretical foundations and engineering guidance for developing contactless, low-power, edge-deployable home health monitoring systems, demonstrating the broad application potential of multimodal fusion methods in complex physiological signal analysis. Full article
(This article belongs to the Section Biomedical Sensors)
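
Dimensions and depths below are invented; this is only a compact PyTorch sketch of the fusion idea described in the abstract: project both modality features to a shared width, then let PPG features attend over BCG features with nn.MultiheadAttention before regressing heart rate.

```python
# Illustrative BCG/PPG fusion head in PyTorch (dimensions invented, not MM-TFNet).
import torch
import torch.nn as nn

class FusionHead(nn.Module):
    def __init__(self, bcg_dim=32, ppg_dim=48, shared=64, heads=4):
        super().__init__()
        self.proj_bcg = nn.Linear(bcg_dim, shared)   # unify dimensions
        self.proj_ppg = nn.Linear(ppg_dim, shared)
        self.cross = nn.MultiheadAttention(shared, heads, batch_first=True)
        self.out = nn.Linear(shared, 1)              # heart rate in BPM

    def forward(self, bcg_feat, ppg_feat):
        # bcg_feat: (B, T, bcg_dim) e.g. from a TCN; ppg_feat: (B, T, ppg_dim) e.g. from a BiLSTM
        b = self.proj_bcg(bcg_feat)
        p = self.proj_ppg(ppg_feat)
        fused, _ = self.cross(query=p, key=b, value=b)   # PPG queries BCG features
        return self.out(fused.mean(dim=1)).squeeze(-1)

model = FusionHead()
hr = model(torch.randn(2, 100, 32), torch.randn(2, 100, 48))
print(hr.shape)   # torch.Size([2])
```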

19 pages, 3746 KB  
Article
Fault Diagnosis and Classification of Rolling Bearings Using ICEEMDAN–CNN–BiLSTM and Acoustic Emission
by Jinliang Li, Haoran Sheng, Bin Liu and Xuewei Liu
Sensors 2026, 26(2), 507; https://doi.org/10.3390/s26020507 - 12 Jan 2026
Viewed by 222
Abstract
Reliable operation of rolling bearings is essential for mechanical systems. Acoustic emission (AE) offers a promising approach for bearing fault detection because of its high-frequency response and strong noise-suppression capability. This study proposes an intelligent diagnostic method that combines an improved complete ensemble empirical mode decomposition with adaptive noise (ICEEMDAN) and a convolutional neural network–bidirectional long short-term memory (CNN–BiLSTM) architecture. The method first applies wavelet denoising to AE signals, then uses ICEEMDAN decomposition followed by kurtosis-based screening to extract key fault components and construct feature vectors. Subsequently, a CNN automatically learns deep time–frequency features, and a BiLSTM captures temporal dependencies among these features, enabling end-to-end fault identification. Experiments were conducted on a bearing acoustic emission dataset comprising 15 operating conditions, five fault types, and three rotational speeds; comparative model tests were also performed. Results indicate that ICEEMDAN effectively suppresses mode mixing (average mixing rate 6.08%), and the proposed model attained an average test-set recognition accuracy of 98.00%, significantly outperforming comparative models. Moreover, the model maintained 96.67% accuracy on an independent validation set, demonstrating strong generalization and practical application potential. Full article
(This article belongs to the Special Issue Deep Learning Based Intelligent Fault Diagnosis)
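
The decomposition itself requires a dedicated ICEEMDAN implementation, so the sketch below starts from already-computed components (synthetic here) and shows only the kurtosis-based screening step with SciPy.

```python
# Kurtosis-based screening of decomposition components (the ICEEMDAN step is
# assumed to have been done elsewhere; the "IMFs" below are synthetic).
import numpy as np
from scipy.stats import kurtosis

def screen_imfs(imfs, top_k=3):
    """Keep the top_k components with the largest kurtosis (most impulsive)."""
    scores = np.array([kurtosis(c) for c in imfs])
    keep = np.argsort(scores)[::-1][:top_k]
    return sorted(keep.tolist()), scores

t = np.linspace(0, 1, 4000)
rng = np.random.default_rng(0)
imfs = [
    np.sin(2 * np.pi * 50 * t) + 0.1 * rng.normal(size=t.size),                 # smooth tone
    rng.normal(size=t.size),                                                    # noise-like
    np.where(rng.random(t.size) < 0.01, 5.0, 0.0) + 0.05 * rng.normal(size=t.size),  # impulsive
]
kept, scores = screen_imfs(imfs, top_k=1)
print("kurtosis per component:", np.round(scores, 2), "-> keep", kept)
```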

22 pages, 7325 KB  
Review
Adaptive Virtual Synchronous Generator Control Using a Backpropagation Neural Network with Enhanced Stability
by Hanzhong Chen, Huangqing Xiao, Kai Gong, Zhengjian Chen and Wenqiao Qiang
Electronics 2026, 15(2), 333; https://doi.org/10.3390/electronics15020333 - 12 Jan 2026
Viewed by 95
Abstract
To enhance grid stability with high renewable energy penetration, this paper proposes an adaptive virtual synchronous generator (VSG) control using a backpropagation neural network (BPNN). Traditional VSG control methods exhibit limitations in handling nonlinear dynamics and suppressing power oscillations. Distinguishing from existing studies that apply BPNN solely for damping adjustment, this paper proposes a novel strategy where BPNN simultaneously regulates both VSG virtual inertia and damping coefficients by learning nonlinear relationships among inertia, angular velocity deviation, and its rate of change. A key innovation is redesigning the error function to minimize angular acceleration changes rather than frequency deviations, aligning with rotational inertia’s physical role and preventing excessive adjustments. Additionally, an adaptive damping coefficient is introduced based on optimal damping ratio principles to further suppress power oscillations. Simulation under load disturbances and grid frequency perturbations demonstrates that the proposed BPNN strategy significantly outperforms constant inertia, bang–bang, and radial basis function neural network methods. Full article
(This article belongs to the Section Industrial Electronics)
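
A rough numerical sketch of the VSG swing equation with adaptive virtual inertia and damping; the BPNN is replaced by a trivial hand-written rule and all constants are illustrative, so this is not the proposed controller.

```python
# Swing-equation sketch with adaptive virtual inertia J and damping D. The BPNN of
# the paper is replaced here by a simple rule; all numbers are illustrative.
import numpy as np

def adaptive_gains(dw, ddw_dt, J0=4.0, D0=20.0):
    # Placeholder rule: raise inertia when |dω/dt| is large, raise damping while the
    # speed deviation is still growing (deviation and its rate share a sign).
    J = J0 + 2.0 * min(abs(ddw_dt), 1.0)
    D = D0 + (10.0 if dw * ddw_dt > 0 else 0.0)
    return J, D

dt = 1e-3                     # time step [s]
w, dw_dt = 0.0, 0.0           # speed deviation Δω and its rate
P_m, P_e = 1.0, 1.2           # mechanical reference vs. electrical load (step disturbance)
trace = []
for _ in range(5000):
    J, D = adaptive_gains(w, dw_dt)
    dw_dt = (P_m - P_e - D * w) / J      # J dΔω/dt = P_m - P_e - D Δω
    w += dw_dt * dt
    trace.append(w)
print("steady-state Δω ≈", round(trace[-1], 4))   # roughly (P_m - P_e) / D
```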

41 pages, 22326 KB  
Article
Comparative Study on Multi-Objective Optimization Design Patterns for High-Rise Residences in Northwest China Based on Climate Differences
by Teng Shao, Kun Zhang, Yanna Fang, Adila Nijiati and Wuxing Zheng
Buildings 2026, 16(2), 298; https://doi.org/10.3390/buildings16020298 - 10 Jan 2026
Viewed by 151
Abstract
As China’s urbanization rate continues to rise, the scale of high-rise residences also grows, emerging as one of the main sources of building energy consumption and carbon emissions. It is therefore crucial to conduct energy-efficient design tailored to local climate and resource endowments during the schematic design phase. At the same time, consideration should also be given to its impact on economic efficiency and environmental comfort, so as to achieve synergistic optimization of energy, carbon emissions, and economic and environmental performance. This paper focuses on typical high-rise residences in three cities across China’s northwestern region, each with distinct climatic conditions and solar energy resources. The optimization objectives include building energy consumption intensity (BEI), useful daylight illuminance (UDI), life cycle carbon emissions (LCCO2), and life cycle cost (LCC). The optimization variables include 13 design parameters: building orientation, window–wall ratio, horizontal overhang sun visor length, bedroom width and depth, insulation layer thickness of the non-transparent building envelope, and window type. First, a parametric model of a high-rise residence was created on the Rhino–Grasshopper platform. Through LHS sample extraction, performance simulation, and calculation, a sample dataset was generated that included objective values and design parameter values. Secondly, an SVM prediction model was constructed based on the sample data, which was used as the fitness function of MOPSO to construct a multi-objective optimization model for high-rise residences in different cities. Through iterative operations, the Pareto optimal solution set was obtained, followed by an analysis of the optimization potential of objective performances and the sensitivity of design parameters across different cities. Furthermore, the TOPSIS multi-attribute decision-making method was adopted to screen optimal design patterns for high-rise residences that meet different requirements. After verifying the objective balance of the comprehensive optimal design patterns, the influence of climate differences on objective values and design parameter values was explored, and parametric models of the final design schemes were generated. The results indicate that differences in climatic conditions and solar energy resources can affect the optimal objective values and design variable settings for typical high-rise residences. This paper proposes a building optimization design framework that integrates parametric design, machine learning, and multi-objective optimization, and that explores the impact of climate differences on optimization results, providing a reference for determining design parameters for climate-adaptive high-rise residences. Full article
(This article belongs to the Section Architectural Design, Urban Science, and Real Estate)
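
As an example of the TOPSIS screening step over a Pareto set, the NumPy sketch below ranks made-up candidate designs on the four objectives named in the abstract (BEI, UDI, LCCO2, LCC), with equal weights assumed purely for illustration.

```python
# Minimal TOPSIS ranking of a made-up Pareto set: BEI, LCCO2 and LCC are treated as
# cost-type criteria, UDI as benefit-type.
import numpy as np

def topsis(M, weights, benefit_mask):
    # M: (n_solutions, n_criteria); vector-normalise, weight, then score by
    # relative closeness to the ideal / anti-ideal points.
    N = M / np.linalg.norm(M, axis=0)
    V = N * weights
    ideal = np.where(benefit_mask, V.max(axis=0), V.min(axis=0))
    anti  = np.where(benefit_mask, V.min(axis=0), V.max(axis=0))
    d_pos = np.linalg.norm(V - ideal, axis=1)
    d_neg = np.linalg.norm(V - anti, axis=1)
    return d_neg / (d_pos + d_neg)

# columns: BEI (kWh/m2), UDI (%), LCCO2 (kgCO2/m2), LCC (currency/m2) -- toy values
pareto = np.array([[55., 62., 900., 1400.],
                   [60., 70., 870., 1500.],
                   [52., 58., 930., 1350.],
                   [58., 66., 880., 1420.]])
scores = topsis(pareto, weights=np.full(4, 0.25),
                benefit_mask=np.array([False, True, False, False]))
print("best design pattern:", int(np.argmax(scores)), np.round(scores, 3))
```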

42 pages, 3251 KB  
Article
Efficient and Accurate Epilepsy Seizure Prediction and Detection Based on Multi-Teacher Knowledge Distillation RGF-Model
by Wei Cao, Qi Li, Anyuan Zhang and Tianze Wang
Brain Sci. 2026, 16(1), 83; https://doi.org/10.3390/brainsci16010083 - 9 Jan 2026
Viewed by 305
Abstract
Background: Epileptic seizures are unpredictable, and while existing deep learning models achieve high accuracy, their deployment on wearable devices is constrained by high computational costs and latency. To address this, this work proposes the RGF-Model, a lightweight network that unifies seizure prediction and detection within a single causal framework. Methods: By integrating Feature-wise Linear Modulation (FiLM) with a Ring-Buffer Gated Recurrent Unit (Ring-GRU), the model achieves adaptive task-specific feature conditioning while strictly enforcing causal consistency for real-time inference. A multi-teacher knowledge distillation strategy is employed to transfer complementary knowledge from complex teacher ensembles to the lightweight student, significantly reducing complexity without sacrificing accuracy. Results: Evaluations on the CHB-MIT and Siena datasets demonstrate that the RGF-Model outperforms state-of-the-art teacher models in terms of efficiency while maintaining comparable accuracy. Specifically, on CHB-MIT, it achieves 99.54% Area Under the Curve (AUC) and 0.01 False Prediction Rate per hour (FPR/h) for prediction, and 98.78% Accuracy (Acc) for detection, with only 0.082 million parameters. Statistical significance was assessed using a random predictor baseline (p < 0.05). Conclusions: The results indicate that the RGF-Model provides a highly efficient solution for real-time wearable epilepsy monitoring. Full article
(This article belongs to the Section Neurotechnology and Neuroimaging)
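
A generic multi-teacher soft-label distillation loss in PyTorch, shown only to illustrate the distillation strategy mentioned in the abstract; the FiLM/Ring-GRU student and the actual teacher ensemble are not reproduced here.

```python
# Generic multi-teacher knowledge-distillation loss in PyTorch (not the RGF-Model):
# the student matches the temperature-softened average of the teacher distributions
# and also the hard labels.
import torch
import torch.nn.functional as F

def multi_teacher_kd_loss(student_logits, teacher_logits_list, labels, T=4.0, alpha=0.7):
    # Soft targets: average of teacher probability distributions at temperature T.
    soft_target = torch.stack(
        [F.softmax(t / T, dim=-1) for t in teacher_logits_list]).mean(dim=0)
    kd = F.kl_div(F.log_softmax(student_logits / T, dim=-1),
                  soft_target, reduction="batchmean") * T * T
    ce = F.cross_entropy(student_logits, labels)
    return alpha * kd + (1 - alpha) * ce

student = torch.randn(8, 2, requires_grad=True)        # e.g. seizure vs. non-seizure logits
teachers = [torch.randn(8, 2), torch.randn(8, 2), torch.randn(8, 2)]
labels = torch.randint(0, 2, (8,))
loss = multi_teacher_kd_loss(student, teachers, labels)
loss.backward()
print(float(loss))
```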

16 pages, 5230 KB  
Article
A Novel Hybrid Model for Groundwater Vulnerability Assessment and Its Application in a Coastal City
by Yanwei Wang, Haokun Yu, Zongzhong Song, Jingrui Wang and Qingguo Song
Sustainability 2026, 18(2), 674; https://doi.org/10.3390/su18020674 - 9 Jan 2026
Viewed by 184
Abstract
Groundwater vulnerability assessments serve as essential tools for sustainable groundwater management, particularly in regions with intensive anthropogenic activities. However, improving the objectivity and predictive reliability of vulnerability assessment frameworks remains a critical scientific challenge in groundwater science, especially for coastal aquifer systems characterized by strong heterogeneity and complex hydrogeological processes. The traditional DRASTIC model is a widely recognized method but suffers from subjectivity in assigning parameter ratings and weights, often leading to arbitrary and potentially inaccurate vulnerability maps. This limitation also restricts its applicability in areas with complex hydrogeological conditions. To enhance the accuracy and adaptability of the traditional DRASTIC model, a hybrid PSO-BP-DRASTIC framework was developed and applied to a coastal city in China. Specifically, the model employs a backpropagation neural network (BP-NN) to optimize indicator weights and integrates the particle swarm optimization (PSO) algorithm to refine the initial weights and thresholds of the BP-NN. By introducing a data-driven and globally optimized weighting mechanism, the proposed framework effectively overcomes the inherent subjectivity of conventional empirical weighting schemes. Using ten-fold cross-validation and observed nitrate concentration data, the traditional DRASTIC, BP-DRASTIC, and PSO-BP-DRASTIC models were systematically validated and compared. The results demonstrate that (1) the PSO-BP-DRASTIC model achieved the highest classification accuracy on the test set, the highest stability across ten-fold cross-validation, and the strongest correlation with the nitrate concentrations; (2) the importance analysis identified the aquifer thickness and depth to the groundwater table as the most influential factors affecting groundwater vulnerability in Yantai; and (3) the spatial assessments revealed that high-vulnerability zones (7.85% of the total area) are primarily located in regions with intensive agricultural activities and high aquifer permeability. The hybrid PSO-BP-DRASTIC model effectively mitigates the subjectivity of the traditional DRASTIC method and the local optimum issues inherent in BP-NNs, significantly improving the assessment accuracy, stability, and objectivity. From a scientific perspective, this study demonstrates the feasibility of integrating swarm intelligence and neural learning into groundwater vulnerability assessment, providing a transferable and high-precision methodological paradigm for data-driven hydrogeological risk evaluation. This novel hybrid model provides a reliable scientific basis for the reasonable assessment of groundwater vulnerability. Moreover, these findings highlight the importance of integrating a hybrid optimization strategy into the traditional DRASTIC model to enhance its feasibility in coastal cities and other regions with complex hydrogeological conditions. Full article
(This article belongs to the Section Sustainable Water Management)
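
For context, the DRASTIC index is a weighted sum of seven rated factors; the sketch below computes it with the conventional weights and with an arbitrary "learned" weight vector standing in for what a BP-NN could produce.

```python
# DRASTIC vulnerability index as a weighted sum of seven rated factors. The first
# weight vector is the conventional one; the second is an arbitrary illustrative
# stand-in for data-driven weights.
import numpy as np

FACTORS = ["Depth", "Recharge", "Aquifer", "Soil", "Topography", "Vadose", "Conductivity"]
W_STANDARD = np.array([5, 4, 3, 2, 1, 5, 3], dtype=float)     # conventional DRASTIC weights
W_LEARNED  = np.array([4.2, 3.1, 3.8, 1.5, 0.9, 4.6, 3.4])    # illustrative only

def drastic_index(ratings, weights):
    # ratings: (n_cells, 7) integer ratings in 1..10
    return ratings @ weights

rng = np.random.default_rng(0)
ratings = rng.integers(1, 11, size=(5, 7)).astype(float)      # five toy grid cells
print("standard weights:", drastic_index(ratings, W_STANDARD))
print("learned weights :", drastic_index(ratings, W_LEARNED))
```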

26 pages, 3077 KB  
Article
Coordinated Scheduling of BESS–ASHP Systems in Zero-Energy Houses Using Multi-Agent Reinforcement Learning
by Jing Li, Yang Xu, Yunqin Lu and Weijun Gao
Buildings 2026, 16(2), 274; https://doi.org/10.3390/buildings16020274 - 8 Jan 2026
Viewed by 193
Abstract
This paper addresses the critical challenge of multi-objective optimization in residential Home Energy Management Systems (HEMS) by proposing a novel framework based on an Improved Multi-Agent Proximal Policy Optimization (MAPPO) algorithm. The study specifically targets the low convergence efficiency of Multi-Agent Deep Reinforcement Learning (MADRL) for coupled Battery Energy Storage System (BESS) and Air Source Heat Pump (ASHP) operation. The framework synergistically integrates an action constraint projection mechanism with an economic-performance-driven dynamic learning rate modulation strategy, thereby significantly enhancing learning stability. Simulation results demonstrate that the algorithm improves training convergence speed by 35–45% compared to standard MAPPO. Economically, it delivers a cumulative cost reduction of 15.77% against rule-based baselines, outperforming both Independent Proximal Policy Optimization (IPPO) and standard MAPPO benchmarks. Furthermore, the method maximizes renewable energy utilization, achieving nearly 100% photovoltaic self-consumption under favorable conditions while ensuring robustness in extreme scenarios. Temporal analysis reveals the agents’ capacity for anticipatory decision-making, effectively learning correlations among generation, pricing, and demand to achieve seamless seasonal adaptability. These findings validate the superior performance of the proposed centralized training architecture, providing a robust solution for complex residential energy management. Full article
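
Two of the mechanisms named in the abstract, sketched generically rather than as the paper's implementation: projecting a BESS power command into its SOC-feasible set, and scaling a base learning rate from the recent cost trend.

```python
# Generic sketches of action constraint projection and cost-driven learning-rate
# modulation (all constants are assumed, not taken from the paper).
import numpy as np

def project_bess_action(p_cmd, soc, soc_min=0.1, soc_max=0.9,
                        p_max=3.0, capacity_kwh=10.0, dt_h=0.25):
    """Clip requested power (kW, + = charge) so SOC stays within [soc_min, soc_max]."""
    headroom = (soc_max - soc) * capacity_kwh / dt_h      # max feasible charge power
    floor    = (soc_min - soc) * capacity_kwh / dt_h      # max feasible discharge power (negative)
    return float(np.clip(p_cmd, max(-p_max, floor), min(p_max, headroom)))

def modulated_lr(base_lr, recent_costs, lo=0.5, hi=1.5):
    """Shrink the LR when episode cost is improving; enlarge it when it stalls."""
    if len(recent_costs) < 2:
        return base_lr
    trend = recent_costs[-1] / (np.mean(recent_costs[:-1]) + 1e-9)
    return base_lr * float(np.clip(trend, lo, hi))

print(project_bess_action(p_cmd=5.0, soc=0.85))   # limited by SOC headroom
print(modulated_lr(3e-4, [10.0, 9.0, 8.5]))
```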

22 pages, 312 KB  
Article
Machine Learning-Enhanced Database Cache Management: A Comprehensive Performance Analysis and Comparison of Predictive Replacement Policies
by Maryam Abbasi, Paulo Váz, José Silva, Filipe Cardoso, Filipe Sá and Pedro Martins
Appl. Sci. 2026, 16(2), 666; https://doi.org/10.3390/app16020666 - 8 Jan 2026
Viewed by 200
Abstract
The exponential growth of data-driven applications has intensified performance demands on database systems, where cache management represents a critical bottleneck. Traditional cache replacement policies such as Least Recently Used (LRU) and Least Frequently Used (LFU) rely on simple heuristics that fail to capture complex temporal and frequency patterns in modern workloads. This research presents a modular machine learning-enhanced cache management framework that leverages pattern recognition to optimize database performance through intelligent replacement decisions. Our approach integrates multiple machine learning models—Random Forest classifiers, Long Short-Term Memory (LSTM) networks, Support Vector Machines (SVM), and Gradient Boosting methods—within a modular architecture enabling seamless integration with existing database systems. The framework incorporates sophisticated feature engineering pipelines extracting temporal, frequency, and contextual characteristics from query access patterns. Comprehensive experimental evaluation across synthetic workloads, real-world production datasets, and standard benchmarks (TPC-C, TPC-H, YCSB, and LinkBench) demonstrates consistent performance improvements. Machine learning-enhanced approaches achieve 8.4% to 19.2% improvement in cache hit rates, 15.3% to 28.7% reduction in query latency, and 18.9% to 31.4% increase in system throughput compared to traditional policies and advanced adaptive methods including ARC, LIRS, Clock-Pro, TinyLFU, and LECAR. Random Forest emerges as the most practical solution, providing 18.7% performance improvement with only 3.1% computational overhead. Case study analysis across e-commerce, financial services, and content management applications demonstrates measurable business impact, including 8.3% conversion rate improvements and USD 127,000 annual revenue increases. Statistical validation (p<0.001, Cohen’s d>0.8) confirms both statistical and practical significance. Full article
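
A toy illustration (not the paper's framework) of ML-guided eviction: a random forest trained on simple recency/frequency features predicts reuse probability, and the cache evicts the resident page with the lowest prediction.

```python
# Toy ML-guided cache eviction: features are simplified recency/frequency counters,
# not the paper's feature-engineering pipeline.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
# Offline training set: [seconds since last access, accesses in last window]
X_train = rng.uniform([0, 0], [600, 50], size=(2000, 2))
y_train = (X_train[:, 1] / 50 - X_train[:, 0] / 600
           + rng.normal(scale=0.2, size=2000) > 0).astype(int)   # 1 = re-referenced soon
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

def choose_victim(cache_state):
    """cache_state: dict page_id -> (secs_since_access, recent_access_count)."""
    pages = list(cache_state)
    feats = np.array([cache_state[p] for p in pages], dtype=float)
    reuse_prob = clf.predict_proba(feats)[:, 1]          # P(page is re-referenced soon)
    return pages[int(np.argmin(reuse_prob))]

cache = {"pg_17": (480.0, 2), "pg_42": (5.0, 30), "pg_99": (120.0, 8)}
print("evict:", choose_victim(cache))   # likely the cold, rarely used page
```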
