Search Results (1,850)

Search Parameters:
Keywords = adaptive learning rate

31 pages, 2659 KB  
Article
ShieldNet: A Novel Adversarially Resilient Convolutional Neural Network for Robust Image Classification
by Arslan Manzoor, Georgia Fargetta, Alessandro Ortis and Sebastiano Battiato
Appl. Sci. 2026, 16(3), 1254; https://doi.org/10.3390/app16031254 - 26 Jan 2026
Abstract
The proliferation of biometric authentication systems in critical security applications has highlighted the urgent need for robust defense mechanisms against sophisticated adversarial attacks. This paper presents ShieldNet, an adversarially resilient Convolutional Neural Network (CNN) framework specifically designed for secure iris biometric authentication. Unlike existing approaches that apply adversarial training or gradient regularization independently, ShieldNet introduces a synergistic dual-layer defense framework featuring three key components: (1) an attack-aware adaptive weighting mechanism that dynamically balances defense priorities across multiple attack types, (2) a smoothness-regularized gradient penalty formulation that maintains differentiable gradients while encouraging locally smooth loss landscapes, and (3) a consistency loss component that enforces prediction stability between clean and adversarial inputs. Through extensive experimental validation across three diverse iris datasets, MMU1, CASIA-Iris-Africa, and UBIRIS.v2, and rigorous evaluation against strong adaptive attacks including AutoAttack, PGD-100 with random restarts, and transfer-based black-box attacks, ShieldNet demonstrated robust performance, achieving 87.3% adversarial accuracy under AutoAttack on MMU1, 85.1% on CASIA-Iris-Africa, and 82.4% on UBIRIS.v2, while maintaining competitive clean data accuracies of 94.7%, 93.9%, and 92.8%, respectively. The proposed framework outperforms existing state-of-the-art defense methods including TRADES, MART, and AWP, achieving an equal error rate (EER) as low as 2.8% and demonstrating consistent robustness across both gradient-based and gradient-free attack scenarios. Comprehensive ablation studies validate the complementary contributions of each defense component, while latent space analysis confirms that ShieldNet learns genuinely robust feature representations rather than relying on gradient obfuscation. 
These results establish ShieldNet as a practical and reliable solution for deployment in high-security biometric authentication environments. Full article
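The consistency-loss component described above can be sketched in isolation. ShieldNet's exact formulation is not given in this listing, so the KL-divergence form used by TRADES-style defenses is assumed here purely for illustration:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Consistency penalty between predictions on clean and adversarial inputs.
# This KL-divergence form is an assumption, not ShieldNet's published loss.
def consistency_loss(logits_clean, logits_adv):
    p, q = softmax(logits_clean), softmax(logits_adv)
    return float(np.sum(p * (np.log(p) - np.log(q))))
```

The penalty is zero when clean and adversarial predictions agree and grows as they diverge; in TRADES-style training it is added to the clean cross-entropy with a trade-off weight.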

28 pages, 3390 KB  
Article
Enhancing Multi-Agent Reinforcement Learning via Knowledge-Embedded Modular Framework for Online Basketball Games
by Junhyuk Kim, Jisun Park and Kyungeun Cho
Mathematics 2026, 14(3), 419; https://doi.org/10.3390/math14030419 - 25 Jan 2026
Abstract
High sample complexity presents a major challenge in applying multi-agent reinforcement learning (MARL) to dynamic, high-dimensional sports such as basketball. To address this problem, we propose the knowledge-embedded modular framework (KEMF), which partitions the environment into offense, defense, and loose-ball modules. Each module employs specialized policies and a knowledge-based observation layer enriched with basketball-specific metrics such as shooting success and defensive accuracy. These metrics are also incorporated into a dynamic and dense reward scheme that offers more direct and situation-specific feedback than sparse win/loss signals. We integrated these components into a multi-agent proximal policy optimization (MAPPO) algorithm to enhance training speed and improve sample efficiency. Evaluations using the commercial basketball game Freestyle indicate that KEMF outperformed previous methods in terms of the average points, winning rate, and overall training efficiency. An ablation study confirmed the synergistic effects of modularity, knowledge-embedded observations, and dense rewards. Moreover, a real-world deployment in 1457 live matches demonstrated the robustness of the framework, with trained agents achieving a 52.43% win rate against experienced human players. These results underscore the promise of the KEMF to enable efficient, adaptive, and strategically coherent MARL solutions in complex sporting environments. Full article
(This article belongs to the Special Issue Applications of Intelligent Game and Reinforcement Learning)
23 pages, 3554 KB  
Article
Hybrid Mechanism–Data-Driven Modeling for Crystal Quality Prediction in Czochralski Process
by Duqiao Zhao, Junchao Ren, Xiaoyan Du, Yixin Wang and Dong Ding
Crystals 2026, 16(2), 86; https://doi.org/10.3390/cryst16020086 - 25 Jan 2026
Abstract
The V/G criterion is a critical indicator for monitoring dynamic changes during Czochralski silicon single crystal (Cz-SSC) growth. However, the inability to measure it in real time forces reliance on offline feedback for process regulation, leading to imprecise control and compromised crystal quality. To overcome this limitation, this paper proposes a novel soft sensor modeling framework that integrates both mechanism-based knowledge and data-driven learning for the real-time prediction of the crystal quality parameter, specifically the V/G value (the ratio of growth rate to axial temperature gradient). The proposed approach constructs a hybrid prediction model by combining a data-driven sub-model with a physics-informed mechanism sub-model. The data-driven component is developed using an attention-based dynamic stacked enhanced autoencoder (AD-SEAE) network, where the SEAE structure introduces layer-wise reconstruction operations to mitigate information loss during hierarchical feature extraction. Furthermore, an attention mechanism is incorporated to dynamically weigh historical and current samples, thereby enhancing the temporal representation of process dynamics. In addition, a robust ensemble approach is achieved by fusing the outputs of two subsidiary models using an adaptive weighting strategy based on prediction accuracy, thereby enabling more reliable V/G predictions under varying operational conditions. Experimental validation using actual industrial Cz-SSC production data demonstrates that the proposed method achieves high prediction accuracy and effectively supports real-time process optimization and quality monitoring. Full article
(This article belongs to the Section Industrial Crystallization)

20 pages, 1978 KB  
Article
UAV-Based Forest Fire Early Warning and Intervention Simulation System with High-Accuracy Hybrid AI Model
by Muhammet Sinan Başarslan and Hikmet Canlı
Appl. Sci. 2026, 16(3), 1201; https://doi.org/10.3390/app16031201 - 23 Jan 2026
Abstract
In this study, a hybrid deep learning model that combines the VGG16 and ResNet101V2 architectures is proposed for image-based fire detection. In addition, a balanced drone guidance algorithm is developed to efficiently assign tasks to available UAVs. In the fire detection phase, the hybrid model created by combining the VGG16 and ResNet101V2 architectures has been optimized with Global Average Pooling and layer merging techniques to increase classification success. The DeepFire dataset was used throughout the training process, achieving an extremely high accuracy rate of 99.72% and 100% precision. After fire detection, a task assignment algorithm was developed to assign existing drones to fire points at minimum cost and with balanced load distribution. This algorithm performs task assignments using the Hungarian (Kuhn–Munkres) method and cost optimization, and is adapted to direct approximately equal numbers of drones to each fire when the number of fires is less than the number of drones. The developed system was tested in a Python-based simulation environment and evaluated using performance metrics such as total intervention time, energy consumption, and task balance. The results demonstrate that the proposed hybrid model provides highly accurate fire detection and that the task assignment system creates balanced and efficient intervention scenarios. Full article
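The Hungarian (Kuhn–Munkres) assignment step described in this abstract can be reproduced with SciPy. The coordinates and the Euclidean-distance cost below are illustrative assumptions, not the paper's simulation setup:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Hypothetical drone and fire coordinates; the paper's cost model is not given
# in the abstract, so Euclidean distance is assumed as the assignment cost.
drones = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
fires = np.array([[9.0, 1.0], [1.0, 9.0], [1.0, 1.0]])

# Pairwise cost matrix, shape (n_drones, n_fires), via broadcasting.
cost = np.linalg.norm(drones[:, None, :] - fires[None, :, :], axis=2)

# Kuhn–Munkres: minimum-total-cost one-to-one assignment.
rows, cols = linear_sum_assignment(cost)
total_cost = cost[rows, cols].sum()
```

When fires outnumber drones (or vice versa), the rectangular form of `linear_sum_assignment` still applies; balancing roughly equal numbers of drones per fire, as the paper describes, would require an additional layer on top of this step.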

21 pages, 1463 KB  
Article
A Mathematical Framework for E-Commerce Sales Prediction Using Attention-Enhanced BiLSTM and Bayesian Optimization
by Hao Hu, Jinshun Cai and Chenke Xu
Math. Comput. Appl. 2026, 31(1), 17; https://doi.org/10.3390/mca31010017 - 22 Jan 2026
Abstract
Accurate sales prediction is crucial for inventory and marketing in e-commerce. Cross-border sales involve complex patterns that traditional models cannot capture. To address this, we propose an improved Bidirectional Long Short-Term Memory (BiLSTM) model, enhanced with an attention mechanism and Bayesian hyperparameter optimization. The attention mechanism focuses on key temporal features, improving trend identification. The BiLSTM captures both forward and backward dependencies, offering deeper insights into sales patterns. Bayesian optimization fine-tunes hyperparameters such as learning rate, hidden-layer size, and dropout rate to achieve optimal performance. These innovations together improve forecasting accuracy, making the model more adaptable and efficient for cross-border e-commerce sales. Experimental results show that the model achieves a Root Mean Square Error (RMSE) of 13.2, Mean Absolute Error (MAE) of 10.2, Mean Absolute Percentage Error (MAPE) of 8.7 percent, and a Coefficient of Determination (R2) of 0.92. It outperforms baseline models, including BiLSTM (RMSE 16.5, MAPE 10.9 percent), BiLSTM with Attention (RMSE 15.2, MAPE 10.1 percent), Temporal Convolutional Network (RMSE 15.0, MAPE 9.8 percent), and Transformer for Time Series (RMSE 14.8, MAPE 9.5 percent). These results highlight the model’s superior performance in forecasting cross-border e-commerce sales, making it a valuable tool for inventory management and demand planning. Full article
(This article belongs to the Special Issue New Trends in Computational Intelligence and Applications 2025)
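The attention mechanism this abstract describes, focusing on key temporal features over (Bi)LSTM outputs, commonly reduces to score-then-softmax pooling. The scoring vector would be learned in practice; it is fixed below only to show the mechanism:

```python
import numpy as np

# Attention pooling over per-timestep outputs H of shape (T, d): each timestep
# gets a scalar relevance score, softmax-normalized into weights that form a
# weighted context vector. The scoring vector w is a stand-in for a learned one.
def attention_pool(H, w):
    scores = H @ w                   # (T,) one relevance score per timestep
    e = np.exp(scores - scores.max())
    alpha = e / e.sum()              # attention weights, sum to 1
    return alpha @ H, alpha          # context vector (d,), weights (T,)
```

The context vector replaces naive last-step or mean pooling, letting high-scoring timesteps dominate the representation fed to the prediction head.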

31 pages, 3765 KB  
Article
Rain Detection in Solar Insecticidal Lamp IoTs Systems Based on Multivariate Wireless Signal Feature Learning
by Lingxun Liu, Lei Shu, Yiling Xu, Kailiang Li, Ru Han, Qin Su and Jiarui Fang
Electronics 2026, 15(2), 465; https://doi.org/10.3390/electronics15020465 - 21 Jan 2026
Abstract
Solar insecticidal lamp Internet of Things (SIL-IoTs) systems are widely deployed in agricultural environments, where accurate and timely rain detection is crucial for system stability and energy-efficient operation. However, existing rain-sensing solutions rely on additional hardware, leading to increased cost and maintenance complexity. This study proposes a hardware-free rain detection method based on multivariate wireless signal feature learning, using LTE communication data. A large-scale primary dataset containing 11.84 million valid samples was collected from a real farmland SIL-IoTs deployment in Nanjing, recording RSRP, RSRQ, and RSSI at 1 Hz. To address signal heterogeneity, a signal-strength stratification strategy and a dual-rate EWMA-based adaptive signal-leveling mechanism were introduced. Four machine-learning models—Logistic Regression, Random Forest, XGBoost, and LightGBM—were trained and evaluated using both the primary dataset and an external test dataset collected in Changsha and Dongguan. Experimental results show that XGBoost achieves the highest detection accuracy, whereas LightGBM provides a favorable trade-off between performance and computational cost. Evaluation using accuracy, precision, recall, F1-score, and ROC-AUC indicates that all metrics exceed 0.975. The proposed method demonstrates strong accuracy, robustness, and cross-regional generalization, providing a practical and scalable solution for rain detection in agricultural IoT systems without additional sensing hardware. Full article
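The dual-rate EWMA signal-leveling idea can be sketched minimally: a slow EWMA tracks the long-term signal baseline while a fast EWMA follows short-term changes, and a sustained drop of the fast track below the slow baseline suggests rain attenuation. The smoothing factors and the 3 dB drop threshold below are illustrative, not values from the paper:

```python
# Dual-rate EWMA leveling over a signal-strength series (e.g., RSRP in dBm).
# alpha_fast/alpha_slow and drop_db are illustrative assumptions.
def dual_rate_ewma_flags(samples, alpha_fast=0.3, alpha_slow=0.01, drop_db=3.0):
    fast = slow = samples[0]
    flags = []
    for x in samples[1:]:
        fast = alpha_fast * x + (1 - alpha_fast) * fast   # tracks recent level
        slow = alpha_slow * x + (1 - alpha_slow) * slow   # tracks baseline
        flags.append(fast < slow - drop_db)               # possible rain event
    return flags
```

On a synthetic RSRP trace that holds at -90 dBm and then drops to -97 dBm, the early samples are unflagged while the post-drop samples are flagged, since the fast track falls well below the slowly adapting baseline.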

32 pages, 2929 KB  
Article
Policy Plateau and Structural Regime Shift: Hybrid Forecasting of the EU Decarbonisation Gap Toward 2030 Targets
by Oksana Liashenko, Kostiantyn Pavlov, Olena Pavlova, Olga Demianiuk, Robert Chmura, Bożena Sowa and Tetiana Vlasenko
Sustainability 2026, 18(2), 1114; https://doi.org/10.3390/su18021114 - 21 Jan 2026
Abstract
This study investigates the structural evolution and projected trajectory of greenhouse gas (GHG) emissions across the EU27 from 1990 to 2030, with a particular focus on their implications for the effectiveness of European climate policy. Drawing on official sectoral data and employing a multi-method framework combining time series modelling (ARIMA), machine learning (Random Forest), regime-switching analysis, and segmented linear regression, we assess past dynamics, detect structural shifts, and forecast future trends. Empirical findings, based on Markov-switching models and segmented regression analysis, indicate a statistically significant regime change around 2014, marking a transition to a new emissions pattern characterised by a deceleration in reduction rates. While the energy sector experienced the most significant decline, agriculture and industry have gained relative prominence, underscoring their growing strategic importance as targets for policy interventions. Hybrid ARIMA–ML forecasts indicate that, under current trajectories, the EU is unlikely to meet its 2030 Fit for 55 targets without adaptive and sector-specific interventions, with a projected shortfall of 12–15 percentage points relative to 1990 levels, excluding LULUCF. The results underscore critical weaknesses in the EU’s climate policy architecture and reveal a clear need for transformative recalibration. Without accelerated action and strengthened governance mechanisms, the post-2014 regime risks entrenching a plateau in emissions reductions, jeopardising long-term climate objectives. Full article
(This article belongs to the Section Air, Climate Change and Sustainability)

24 pages, 1834 KB  
Article
SDA-Net: A Symmetric Dual-Attention Network with Multi-Scale Convolution for MOOC Dropout Prediction
by Yiwen Yang, Chengjun Xu and Guisheng Tian
Symmetry 2026, 18(1), 202; https://doi.org/10.3390/sym18010202 - 21 Jan 2026
Abstract
With the rapid development of Massive Open Online Courses (MOOCs), high dropout rates have become a major challenge, limiting the quality of online education and the effectiveness of targeted interventions. Although existing MOOC dropout prediction methods have incorporated deep learning and attention mechanisms to improve predictive performance to some extent, they still face limitations in modeling differences in course difficulty and learning engagement, capturing multi-scale temporal learning behaviors, and controlling model complexity. To address these issues, this paper proposes a MOOC dropout prediction model that integrates multi-scale convolution with a symmetric dual-attention mechanism, termed SDA-Net. In the feature modeling stage, the model constructs a time allocation ratio matrix (MRatio), a resource utilization ratio matrix (SRatio), and a relative group-level ranking matrix (Rank) to characterize learners’ behavioral differences in terms of time investment, resource usage structure, and relative performance, thereby mitigating the impact of course difficulty and individual effort disparities on prediction outcomes. Structurally, SDA-Net extracts learning behavior features at different temporal scales through multi-scale convolution and incorporates a symmetric dual-attention mechanism composed of spatial and channel attention to adaptively focus on information highly correlated with dropout risk, enhancing feature representation while maintaining a relatively lightweight architecture. 
Experimental results on the KDD Cup 2015 and XuetangX public datasets demonstrate that SDA-Net achieves more competitive performance than traditional machine learning methods, mainstream deep learning models, and attention-based approaches on major evaluation metrics; in particular, it attains an accuracy of 93.7% on the KDD Cup 2015 dataset and achieves an absolute improvement of 0.2 percentage points in Accuracy and 0.4 percentage points in F1-Score on the XuetangX dataset, confirming that the proposed model effectively balances predictive performance and model complexity. Full article
(This article belongs to the Section Computer)

15 pages, 2430 KB  
Article
Improved Detection of Small (<2 cm) Hepatocellular Carcinoma via Deep Learning-Based Synthetic CT Hepatic Arteriography: A Multi-Center External Validation Study
by Jung Won Kwak, Sung Bum Cho, Ki Choon Sim, Jeong Woo Kim, In Young Choi and Yongwon Cho
Diagnostics 2026, 16(2), 343; https://doi.org/10.3390/diagnostics16020343 - 21 Jan 2026
Abstract
Background/Objectives: Early detection of hepatocellular carcinoma (HCC), particularly small lesions (<2 cm), which is crucial for curative treatment, remains challenging with conventional liver dynamic computed tomography (LDCT). We aimed to develop a deep learning algorithm to generate synthetic CT during hepatic arteriography (CTHA) from non-invasive LDCT and evaluate its lesion detection performance. Methods: A cycle-consistent generative adversarial network with an attention module [Unsupervised Generative Attentional Networks with Adaptive Layer-Instance Normalization (U-GAT-IT)] was trained using paired LDCT and CTHA images from 277 patients. The model was validated using internal (68 patients, 139 lesions) and external sets from two independent centers (87 patients, 117 lesions). Two radiologists assessed detection performance using a 5-point scale and the detection rate. Results: Synthetic CTHA significantly improved the detection of sub-centimeter (<1 cm) HCCs compared with LDCT in the internal set (69.6% vs. 47.8%, p < 0.05). This improvement was robust in the external set; synthetic CTHA detected a greater number of small lesions than LDCT. Quantitative metrics (structural similarity index measure and peak signal-to-noise ratio) indicated high structural fidelity. Conclusions: Deep-learning–based synthetic CTHA significantly enhanced the detection of small HCCs compared with standard LDCT, offering a non-invasive alternative with high detection sensitivity, which was validated across multicentric data. Full article
(This article belongs to the Special Issue 3rd Edition: AI/ML-Based Medical Image Processing and Analysis)

17 pages, 1555 KB  
Article
Path Planning in Sparse Reward Environments: A DQN Approach with Adaptive Reward Shaping and Curriculum Learning
by Hongyi Yang, Bo Cai and Yunlong Li
Algorithms 2026, 19(1), 89; https://doi.org/10.3390/a19010089 - 21 Jan 2026
Abstract
Deep reinforcement learning (DRL) has shown great potential in path planning tasks. However, in sparse reward environments, DRL still faces significant challenges such as low training efficiency and a tendency to converge to suboptimal policies. Traditional reward shaping methods can partially alleviate these issues, but they typically rely on hand-crafted designs, which often introduce complex reward coupling, make hyperparameter tuning difficult, and limit generalization capability. To address these challenges, this paper proposes Curriculum-guided Learning with Adaptive Reward Shaping for Deep Q-Network (CLARS-DQN), a path planning algorithm that integrates Adaptive Reward Shaping (ARS) and Curriculum Learning (CL). The algorithm consists of two key components: (1) ARS-DQN, which augments the DQN framework with a learnable intrinsic reward function to reduce reward sparsity and dependence on expert knowledge; and (2) a curriculum strategy that guides policy optimization through a staged training process, progressing from simple to complex tasks to enhance generalization. Training also incorporates Prioritized Experience Replay (PER) to improve sample efficiency and training stability. CLARS-DQN outperforms baseline methods in task success rate, path quality, training efficiency, and hyperparameter robustness. In unseen environments, the method improves task success rate and average path length by 12% and 26%, respectively, demonstrating strong generalization. Ablation studies confirm the critical contribution of each module. Full article
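CLARS-DQN *learns* its intrinsic reward, which this listing does not specify; the classical idea it builds on, potential-based reward shaping, can be sketched with an assumed fixed potential. The negative-Manhattan-distance potential below is only an illustrative stand-in:

```python
# Potential-based reward shaping: r' = r + gamma * phi(s') - phi(s).
# This densifies a sparse reward without changing the optimal policy
# (Ng, Harada, and Russell's classic result). The distance-to-goal
# potential here is a hypothetical choice, not the paper's learned reward.
def shaped_reward(r, s, s_next, goal, gamma=0.99):
    def phi(state):
        return -(abs(state[0] - goal[0]) + abs(state[1] - goal[1]))
    return r + gamma * phi(s_next) - phi(s)
```

With this potential, a step toward the goal yields a positive shaped reward even when the environment reward is zero, while a step away is penalized, which is exactly the dense guidance signal that sparse-reward path planning lacks.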

40 pages, 5081 KB  
Article
HAO-AVP: An Entropy-Gini Reinforcement Learning Assisted Hierarchical Void Repair Protocol for Underwater Wireless Sensor Networks
by Lijun Hao, Chunbo Ma and Jun Ao
Sensors 2026, 26(2), 684; https://doi.org/10.3390/s26020684 - 20 Jan 2026
Abstract
Wireless Sensor Networks (WSNs) are pivotal for data acquisition, yet reliability is severely constrained by routing voids induced by sparsity, uneven energy, and high dynamicity. To address these challenges, the Hybrid Acoustic-Optical Adaptive Void-handling Protocol (HAO-AVP) is proposed to satisfy the requirements for highly reliable communication in complex underwater environments. First, targeting uneven energy, a reinforcement learning mechanism utilizing Gini coefficient and entropy is adopted. By optimizing energy distribution, voids are proactively avoided. Second, to address routing interruptions caused by the high dynamicity of topology, a collaborative mechanism for active prediction and real-time identification is constructed. Specifically, this mechanism integrates a Markov chain energy prediction model with on-demand hop discovery technology. Through this integration, precise anticipation and rapid localization of potential void risks are achieved. Finally, to recover damaged links at the minimum cost, a four-level progressive recovery strategy, comprising intra-medium adjustment, cross-medium hopping, path backtracking, and Autonomous Underwater Vehicle (AUV)-assisted recovery, is designed. This strategy is capable of adaptively selecting recovery measures based on the severity of the void. Simulation results demonstrate that, compared with existing mainstream protocols, the void identification rate of the proposed protocol is improved by approximately 7.6%, 8.4%, 13.8%, 19.5%, and 25.3%, respectively, and the void recovery rate is increased by approximately 4.3%, 9.6%, 12.0%, 18.4%, and 24.2%, respectively. In particular, enhanced robustness and a prolonged network life cycle are exhibited in sparse and dynamic networks. Full article
(This article belongs to the Section Sensor Networks)

20 pages, 3636 KB  
Article
A Hybrid VMD-SSA-LSTM Framework for Short-Term Wind Speed Prediction Based on Wind Farm Measurement Data
by Ruisheng Feng, Bin Fu, Hanxi Xiao, Xu Wang, Maoyu Zhang, Shuqin Zheng, Yanru Wang, Tingjun Xu and Lei Zhou
Energies 2026, 19(2), 517; https://doi.org/10.3390/en19020517 - 20 Jan 2026
Abstract
Aiming at the nonlinear and non-stationary characteristics of wind speed series, this study proposes a hybrid forecasting framework that integrates Variational Mode Decomposition (VMD), Sparrow Search Algorithm (SSA), and Long Short-Term Memory (LSTM) networks. First, VMD is employed to adaptively decompose the original wind speed series into multiple Intrinsic Mode Functions (IMFs) with distinct frequency features, thereby reducing the non-stationarity of the original sequence. Second, SSA is utilized to adaptively optimize key parameters of the LSTM network, including the number of hidden units, learning rate, and dropout rate, to enhance the model’s capability in capturing complex temporal patterns. Finally, the predictions from all modal components are integrated to generate the final wind speed forecast. Experimental results based on 10-min resolution measured data from a coastal wind farm in southeastern China in June 2020 show that the model achieves a Root Mean Square Error (RMSE) of 0.208, a Mean Absolute Error (MAE) of 0.161, and a Mean Absolute Percentage Error (MAPE) of 3.284% on the test set, with its comprehensive performance significantly surpassing benchmark models such as LSTM, VMD-LSTM, MLP, XGBoost, and ARIMA. The limitations of this study mainly include the use of only one month of data for validation, the lack of segmented performance analysis across different wind speed regimes, and a fixed prediction horizon of 10 min. The results indicate that the proposed hybrid forecasting framework provides an effective approach with practical engineering potential for ultra-short-term wind power prediction. Full article

17 pages, 1621 KB  
Article
Reinforcement Learning-Based Optimization of Environmental Control Systems in Battery Energy Storage Rooms
by So-Yeon Park, Deun-Chan Kim and Jun-Ho Bang
Energies 2026, 19(2), 516; https://doi.org/10.3390/en19020516 - 20 Jan 2026
Abstract
This study proposes a reinforcement learning (RL)-based optimization framework for the environmental control system of battery rooms in Energy Storage Systems (ESS). Conventional rule-based air-conditioning strategies are unable to adapt to real-time temperature and humidity fluctuations, often leading to excessive energy consumption or insufficient thermal protection. To overcome these limitations, both value-based (DQN, Double DQN, Dueling DQN) and policy-based (Policy Gradient, PPO, TRPO) RL algorithms are implemented and systematically compared. The algorithms are trained and evaluated using one year of real ESS operational data and corresponding meteorological data sampled at 15-min intervals. Performance is assessed in terms of convergence speed, learning stability, and cooling-energy consumption. The experimental results show that the DQN algorithm reduces time-averaged cooling power consumption by 46.5% compared to conventional rule-based control, while maintaining temperature, humidity, and dew-point constraint violation rates below 1% throughout the testing period. Among the policy-based methods, the Policy Gradient algorithm demonstrates competitive energy-saving performance but requires longer training time and exhibits higher reward variance. These findings confirm that RL-based control can effectively adapt to dynamic environmental conditions, thereby improving both energy efficiency and operational safety in ESS battery rooms. The proposed framework offers a practical and scalable solution for intelligent thermal management in ESS facilities. Full article

20 pages, 3262 KB  
Article
Glass Fall-Offs Detection for Glass Insulated Terminals via a Coarse-to-Fine Machine-Learning Framework
by Weibo Li, Bingxun Zeng, Weibin Li, Nian Cai, Yinghong Zhou, Shuai Zhou and Hao Xia
Micromachines 2026, 17(1), 128; https://doi.org/10.3390/mi17010128 - 19 Jan 2026
Abstract
Glass-insulated terminals (GITs) are widely used in high-reliability microelectronic systems, where glass fall-offs in the sealing region may seriously degrade the reliability of the microelectronic component and, in turn, of the whole device. Automatic inspection of such defects is challenging due to strong light reflection, irregular defect appearances, and limited defective samples. To address these issues, a coarse-to-fine machine-learning framework is proposed for glass fall-off detection in GIT images. By exploiting the circular-ring geometric prior of GITs, an adaptive sector partition scheme is introduced to divide the region of interest into sectors. Four categories of sector features, including color statistics, gray-level variations, reflective properties, and gradient distributions, are designed for coarse classification using a gradient boosting decision tree (GBDT). Furthermore, a sector neighbor (SN) feature vector is constructed from adjacent sectors to enhance fine classification. Experiments on real industrial GIT images show that the proposed method outperforms several representative inspection approaches, achieving an average IoU of 96.85%, an F1-score of 0.984, a pixel-level false alarm rate of 0.55%, and a pixel-level missed alarm rate of 35.62% at a practical inspection speed of 32.18 s per image. Full article
(This article belongs to the Special Issue Emerging Technologies and Applications for Semiconductor Industry)

20 pages, 433 KB  
Article
Hausdorff Difference-Based Adam Optimizer for Image Classification
by Jing Jian, Zhe Gao and Haibin Zhang
Mathematics 2026, 14(2), 329; https://doi.org/10.3390/math14020329 - 19 Jan 2026
Abstract
To address the limitations of fixed-order update mechanisms in convolutional neural network parameter training, an adaptive parameter training method based on the Hausdorff difference is proposed in this paper. By deriving a Hausdorff difference formula that is suitable for discrete training processes and embedding it into the adaptive moment estimation framework, a generalized Hausdorff difference-based Adam algorithm (HAdam) is constructed. This algorithm introduces an order parameter to achieve joint dynamic control of the momentum intensity and the effective learning rate. Through theoretical analysis and numerical simulations, the influence of the order parameter and its value range on algorithm stability, parameter evolution trajectories, and convergence speed is investigated, and two adaptive order adjustment strategies based on iteration cycles and gradient feedback are designed. The experimental results on the Fashion-MNIST and CIFAR-10 benchmark datasets show that, compared with the standard Adam algorithm, the HAdam algorithm exhibits clear advantages in both convergence efficiency and recognition accuracy. Full article
(This article belongs to the Section E1: Mathematics and Computer Science)
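The Hausdorff-difference order parameter of HAdam cannot be reconstructed from this abstract, but the baseline it generalizes, the standard Adam update with bias-corrected moment estimates, can be sketched for reference:

```python
import numpy as np

# One standard Adam step: the baseline that HAdam extends with an order
# parameter jointly modulating momentum intensity and the effective learning
# rate. That extension is NOT reproduced here.
def adam_step(theta, grad, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    m = b1 * m + (1 - b1) * grad          # first-moment (momentum) estimate
    v = b2 * v + (1 - b2) * grad ** 2     # second-moment estimate
    m_hat = m / (1 - b1 ** t)             # bias correction, t = step index >= 1
    v_hat = v / (1 - b2 ** t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v
```

On the first step with unit gradient, the bias-corrected update moves the parameter by almost exactly the learning rate, which is the behavior HAdam's order parameter then reshapes.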
