Search Results (2,906)

Search Parameters:
Keywords = optimized result memory

25 pages, 5640 KB  
Article
Estimation of Winter Wheat SPAD Values by Integrating Spectral Feature Optimization and Machine Learning Algorithms
by Yufei Wang, Xuebing Wang, Jiang Sun, Zeyang Wen, Haoyong Wu, Lujie Xiao, Meichen Feng, Yu Zhao and Xianjie Gao
Agronomy 2026, 16(4), 489; https://doi.org/10.3390/agronomy16040489 (registering DOI) - 22 Feb 2026
Abstract
The chlorophyll content of plant leaves, measured as the soil plant analysis development (SPAD) value, is an important indicator of crop growth status and irrigation effect. Rapid, non-destructive, and efficient estimation of crop SPAD values is therefore of great significance for field management. In this study, the canopy hyperspectral reflectance and SPAD values of winter wheat were obtained, and the spectral curves were transformed with four spectral processing methods, first-order differential (FD), second-order differential (SD), multivariate scattering correction (MSC), and Savitzky–Golay smoothing (SG), to improve the correlation between canopy spectral reflectance and SPAD. Furthermore, to investigate and evaluate the performance of various vegetation indices (VIs) in estimating SPAD values for winter wheat, existing published indices were optimized using random band combinations derived from multiple canopy spectral transformations. The optimized vegetation indices were used as model input variables, and six machine learning algorithms, random forest (RF), long short-term memory network (LSTM), multilayer perceptron (MLP), deep recurrent neural network (Deep-RNN), gated recurrent unit (GRU), and convolutional neural network (CNN), were used to construct and validate winter wheat SPAD estimation models. The experimental results demonstrate that, given an equivalent number of optimized vegetation indices as input, the GRU-based model achieves higher estimation accuracy than the other models: its coefficient of determination (R2) is higher by 0.12 than that of the RF model, by 0.03 than the LSTM model, by 0.12 than the MLP model, by 0.02 than the Deep-RNN model, and by 0.02 than the CNN model. At the same time, the GRU model also achieves a lower root mean square error (RMSE) of 7.37 and relative error (RE) of 24.90%.
This study provides valuable hyperspectral remote sensing support for implementing field estimation of winter wheat SPAD values.
(This article belongs to the Section Precision and Digital Agriculture)
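The GRU at the core of the best-performing model above can be illustrated with a minimal NumPy forward pass of a single gated recurrent unit cell. This is a generic GRU sketch, not the paper's trained network; the input width of 5 (standing in for a handful of optimized vegetation indices per time step) and the hidden size are arbitrary illustrative choices.

```python
import numpy as np

def gru_step(x, h, Wz, Uz, bz, Wr, Ur, br, Wh, Uh, bh):
    """One GRU step: update gate z, reset gate r, candidate state h_tilde."""
    sigmoid = lambda a: 1.0 / (1.0 + np.exp(-a))
    z = sigmoid(x @ Wz + h @ Uz + bz)               # update gate
    r = sigmoid(x @ Wr + h @ Ur + br)               # reset gate
    h_tilde = np.tanh(x @ Wh + (r * h) @ Uh + bh)   # candidate state
    return (1.0 - z) * h + z * h_tilde              # blended new hidden state

rng = np.random.default_rng(0)
n_in, n_hid = 5, 8   # e.g., 5 optimized vegetation indices per time step
params = [rng.normal(scale=0.1, size=s) for s in
          [(n_in, n_hid), (n_hid, n_hid), (n_hid,)] * 3]
h = np.zeros(n_hid)
for t in range(3):   # run a short input sequence through the cell
    h = gru_step(rng.normal(size=n_in), h, *params)
print(h.shape)  # (8,)
```

A regression head (e.g., a linear layer on the final hidden state) would then map `h` to a predicted SPAD value.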
31 pages, 1764 KB  
Article
Simulation of Reservoir Group Outflow Using LSTM with a Knowledge-Guided Loss Function Coordinated by the MDUPLEX Algorithm
by Qiaoping Liu, Changlu Qiao and Shuo Cao
Appl. Sci. 2026, 16(4), 2125; https://doi.org/10.3390/app16042125 (registering DOI) - 22 Feb 2026
Abstract
Global climate change and spatiotemporal heterogeneity in water resources exacerbate supply-demand imbalances. Accurate outflow simulation for joint reservoir group operations thus becomes critical for scientific water resources management. Existing data-driven models like the Long Short-Term Memory (LSTM) lack the robust integration of physical constraints. Traditional mechanistic methods, by contrast, lack generality and stability under complex hydrological conditions. To address this limitation, we propose MDUPLEX-KG-LSTM—a physically constrained data-driven model for reservoir outflow simulation. The model incorporates multi-round DUPLEX (MDUPLEX) data partitioning, which ensures statistical homogeneity across training, validation, and test datasets. It also features a Knowledge-Guided (KG) loss function that embeds core physical constraints: water balance, dead water level, flood season restricted water level, and inter-reservoir re-regulation mechanisms. Additionally, it adopts an LSTM network optimized via Particle Swarm Optimization (PSO) for enhanced predictive performance. We validate the model using daily hydrological data from 2010 to 2025 for three reservoirs in the Wujiaqu Irrigation District of Xinjiang, China. The model exhibits exceptional stability and predictive accuracy across key evaluation metrics: Nash–Sutcliffe Efficiency (NSE) ≥ 0.82, Pearson correlation coefficient (r) > 0.94, Root Mean Square Error (RMSE) ≤ 1.50 m3/s, and Water Balance Index (WBI) ≤ 0.016. It outperforms conventional data-driven and mechanistic models in extreme flow simulation scenarios. It also eliminates unphysical negative outflow values in all predictive results. The model achieves 100% compliance with flood control standards and an irrigation guarantee rate of no less than 86%. This study advances the development of physically constrained data-driven modeling for water resources engineering. 
It provides reliable methodological support for the intelligent operation of reservoir groups in smart water conservancy systems. The model also balances training cost and inference efficiency effectively, and it demonstrates verified scalability for reservoir groups of varying scales, fully meeting the operational deployment requirements of smart water systems.
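The knowledge-guided loss described above can be sketched as a data-fit term plus physics penalties. The specific penalty weights and the simplified water-balance form below are illustrative assumptions, not the paper's exact formulation (which also encodes dead-water-level, flood-season, and re-regulation constraints).

```python
import numpy as np

def kg_loss(q_pred, q_obs, inflow, storage_change, lam_neg=1.0, lam_wb=1.0):
    """Data-fit MSE plus two hedged physics penalties:
    - negative outflow is unphysical, so penalize q_pred < 0;
    - water balance: inflow minus outflow should match the storage change."""
    mse = np.mean((q_pred - q_obs) ** 2)
    neg_penalty = np.mean(np.maximum(-q_pred, 0.0) ** 2)   # only fires below 0
    wb_residual = inflow - q_pred - storage_change          # mass-balance error
    wb_penalty = np.mean(wb_residual ** 2)
    return mse + lam_neg * neg_penalty + lam_wb * wb_penalty

q_obs = np.array([2.0, 3.0, 4.0])
inflow = np.array([2.5, 3.5, 4.5])
ds = inflow - q_obs                      # storage change consistent with data
ok = kg_loss(q_obs, q_obs, inflow, ds)   # perfect, physically consistent fit
bad = kg_loss(np.array([-1.0, 3.0, 4.0]), q_obs, inflow, ds)
print(ok, bad)  # 0.0 versus a strictly larger, penalized loss
```

Training against such a loss discourages the unphysical negative outflows that a purely data-driven LSTM can produce.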
15 pages, 1372 KB  
Article
GANimate: Ultra-Efficient Lip-Landmark-Driven Talking Face Animation Using a Learned Kalman Filter on GAN Feature Latent Space for Human–Computer Interaction on Mobile Devices
by Ethan Fenakel, Ben Ohayon and Dan Raviv
Sensors 2026, 26(4), 1377; https://doi.org/10.3390/s26041377 (registering DOI) - 22 Feb 2026
Abstract
We present GANimate, a lightweight method for animating talking faces that leverages recent advances in latent-space manipulation of Generative Adversarial Networks (GANs). Unlike existing approaches based on computationally intensive diffusion models, transformers, or complex 3DMM representations, which are impractical for mobile and other low-resource edge devices due to high memory and compute demands, GANimate is designed for efficient operation on low-memory, low-compute edge devices. The model operates on 2D lip landmarks extracted from standard mobile vision-sensor inputs and requires no pre-training, making it easily integrable with any lip-landmark generator. Through an optimization process in the GAN feature latent space, these landmarks act as geometric constraints to animate a static portrait, producing realistic and expressive lip movements. To maintain stability and visual coherence across frames, we employ a Kalman filter to detect and track lip landmarks during video synthesis, enabling adaptive refinement and improved temporal consistency. The result is a compact and modular framework that bridges the gap between performance and accessibility in talking face synthesis, delivering high-quality and stable animations with minimal computational overhead. GANimate represents an important step toward lifelike, real-time avatars suitable for sensor-enabled and mobile human–computer interaction.
(This article belongs to the Section Sensing and Imaging)
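The landmark-smoothing role of the Kalman filter can be sketched with a scalar filter on one noisy coordinate. The paper tracks 2D lip landmarks with a learned filter; the random-walk state model and noise settings here are generic illustrative assumptions.

```python
import numpy as np

def kalman_smooth(measurements, q=1e-4, r=0.05, x0=0.0, p0=1.0):
    """Scalar Kalman filter with a random-walk state model.
    q: process noise, r: measurement noise; returns estimates and variances."""
    x, p = x0, p0
    xs, ps = [], []
    for z in measurements:
        p = p + q                 # predict: variance grows by process noise
        k = p / (p + r)           # Kalman gain
        x = x + k * (z - x)       # update: pull estimate toward measurement
        p = (1.0 - k) * p         # posterior variance shrinks
        xs.append(x)
        ps.append(p)
    return np.array(xs), np.array(ps)

rng = np.random.default_rng(1)
true_pos = 0.3                                 # a static landmark coordinate
z = true_pos + 0.05 * rng.normal(size=50)      # noisy per-frame detections
est, var = kalman_smooth(z, x0=z[0])
print(var[0] > var[-1])  # True: uncertainty contracts as evidence accumulates
```

The smoothed trajectory `est` jitters far less than the raw detections, which is what yields temporally coherent lip motion across synthesized frames.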
25 pages, 1279 KB  
Article
SSKD: Stepwise Self-Knowledge Distillation for Binary Neural Networks in Keyword Spotting
by Hailong Zou, Jionghao Zhang, Jun Li, Hang Ran, Wulve Yang, Rui Zhou, Zenghui Yu, Yi Zhan and Shushan Qiao
Appl. Sci. 2026, 16(4), 2021; https://doi.org/10.3390/app16042021 - 18 Feb 2026
Viewed by 89
Abstract
Hardware power-aware keyword spotting (KWS) implementations require a small memory footprint, low-complexity computation, and high accuracy. Binary neural networks (BNNs) naturally satisfy these constraints: they quantize both weights and activations to 1 bit, which reduces storage and replaces most multiply–accumulate operations with bitwise operations. However, such extreme quantization incurs substantial information loss and leaves a noticeable accuracy gap relative to full-precision models. Optimization is also more difficult because the sign function is non-differentiable, and surrogate-gradient updates introduce gradient mismatch. To preserve the hardware benefits of BNNs while alleviating the accuracy degradation induced by 1-bit quantization, this article addresses the problem from two complementary directions. First, a Stepwise Self-Knowledge Distillation (SSKD) training approach is proposed to improve the student BNN’s accuracy: the SSKD framework provides effective supervision for student BNNs, a Stepwise Training Strategy improves training stability and accuracy, and a Weight Scaling Factor strengthens the student’s representational capability. Second, an extremely lightweight Binary Temporal Convolutional ResNet (BTC-ResNet) is proposed, which greatly reduces the parameters and computation required for inference. Experiments on the GSCD v1 and GSCD v2 benchmarks demonstrate the effectiveness of our methods for low-power keyword spotting. For the 12-class task, BTC-ResNet14 achieves 97.23% accuracy on GSCD v1 and 97.31% on GSCD v2 with 0.75 Mb of parameters and 1.35 M FLOPs. For the 35-class task on GSCD v2, it reaches 95.56% accuracy with 0.76 Mb of parameters and 1.35 M FLOPs. These results indicate that our method achieves a competitive accuracy–efficiency balance relative to recent distillation-based BNN KWS baselines reported in the comparative experiments, and they are promising for future KWS deployment on low-power hardware devices.
(This article belongs to the Section Computing and Artificial Intelligence)
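The weight-scaling idea can be illustrated with the common XNOR-Net-style binarization, where a real-valued weight tensor is approximated by its sign times a per-filter scale. The choice α = mean(|W|) is the standard construction assumed here for illustration, not necessarily the paper's exact scheme.

```python
import numpy as np

def binarize(W):
    """Approximate W ≈ alpha * sign(W), one scale per output filter (row).
    alpha = mean(|W|) minimizes the L2 reconstruction error for fixed signs."""
    alpha = np.abs(W).mean(axis=1, keepdims=True)   # per-filter scaling factor
    B = np.where(W >= 0, 1.0, -1.0)                 # 1-bit weights
    return alpha, B

rng = np.random.default_rng(2)
W = rng.normal(size=(4, 16))         # 4 filters, 16 weights each
alpha, B = binarize(W)
err_scaled = np.linalg.norm(W - alpha * B)
err_signs = np.linalg.norm(W - B)    # binarization without a scaling factor
print(err_scaled < err_signs)  # True: the scale tightens the approximation
```

At inference, the dot product with `alpha * B` reduces to a bitwise XNOR/popcount followed by one multiply per filter, which is where the hardware savings come from.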
28 pages, 12051 KB  
Article
A Novel Hybrid Intelligent Optimization Framework for Shield Construction Parameters Based on CWG-LSTM-CPSOS
by Liang Li, Changming Hu, Zhipeng Wu, Lili Feng and Peng Zhang
Buildings 2026, 16(4), 826; https://doi.org/10.3390/buildings16040826 - 18 Feb 2026
Viewed by 235
Abstract
Reasonable adjustment of construction parameters is of great value for reducing surface settlement and ensuring the safety of shield construction. A novel hybrid intelligent optimization framework was proposed based on combination weighting and gray correlation analysis methods (CWG), a long short-term memory (LSTM) model, and a chaotic particle swarm optimization with sigmoid-based acceleration coefficients (CPSOS) algorithm. The CWG method was employed to screen key construction parameters and determine the weights of the various factors influencing surface settlement, thereby constructing a CWG-LSTM prediction model for surface settlement. On this basis, the prediction model served as the objective function for optimizing construction parameters, and the CPSOS algorithm was used to optimize the shield construction parameters. Sample sets were collected from the Qingdao Metro Line 4 in China to verify the performance of the developed framework. The CWG-LSTM model achieved coefficients of determination (R2) of 0.92 and 0.91 on the training and test sets, respectively, along with root mean square errors (RMSE) of 1.29 and 1.03, and mean absolute percentage errors (MAPE) of 15.60% and 17.18%. Its prediction ability surpasses that of comparison models such as the Gated Recurrent Unit, Random Forest, Transformer, and Multiple Linear Regression, demonstrating higher accuracy. Optimized construction parameters derived from the CWG-LSTM-CPSOS framework facilitated shield tunneling in the unconstructed section. All surface settlement monitoring results recorded during excavation fell within the safety threshold, demonstrating that the proposed hybrid intelligent optimization framework effectively manages surface settlement during shield tunneling and serves as a reliable optimization approach for construction parameters.
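The particle-swarm step at the heart of CPSOS-style optimizers can be sketched with plain PSO on a toy quadratic objective standing in for the settlement-prediction surrogate. The inertia and acceleration constants below are arbitrary illustrative choices; the paper's chaotic initialization and sigmoid-scheduled coefficients are not reproduced here.

```python
import numpy as np

def pso_minimize(f, dim, n_particles=20, iters=60, w=0.7, c1=1.5, c2=1.5, seed=3):
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n_particles, dim))   # particle positions
    v = np.zeros_like(x)                         # particle velocities
    pbest, pbest_val = x.copy(), np.array([f(p) for p in x])
    g = pbest[pbest_val.argmin()].copy()         # global best position
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        vals = np.array([f(p) for p in x])
        better = vals < pbest_val                # refresh personal bests
        pbest[better], pbest_val[better] = x[better], vals[better]
        g = pbest[pbest_val.argmin()].copy()     # refresh global best
    return g, f(g)

# Toy stand-in objective with its minimum of 0 at the origin.
best_x, best_val = pso_minimize(lambda p: float(np.sum(p ** 2)), dim=3)
print(best_val < 1.0)  # True: far below the random-start values
```

In the paper's setting, `f` would instead evaluate the trained CWG-LSTM settlement prediction for a candidate set of construction parameters.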
24 pages, 8940 KB  
Article
Time Series-Based PM2.5 Concentration Prediction Model Incorporating Attention Mechanism
by Xiaolong Cheng, Moye Li, Yangzhong Ke, Bingzi Li and Yuemei Huang
Sustainability 2026, 18(4), 2038; https://doi.org/10.3390/su18042038 - 17 Feb 2026
Viewed by 217
Abstract
PM2.5 concentration is a key indicator of air quality, and its effective forecasting can provide key technical support for the scientific and precise implementation of air pollution prevention and control. However, predicting PM2.5 concentrations faces challenges such as multiple influencing factors, long-term temporal dependencies, and inherent nonlinearity. Furthermore, traditional Long Short-Term Memory (LSTM) networks fail to effectively capture dependency relationships in long-time-span data and have difficulty fully integrating and exploiting the information of the many influencing factors. To solve these problems, this study presents a novel PM2.5 concentration prediction model (OVMD–PeepholeLSTM–attention) that combines Peephole Long Short-Term Memory (PeepholeLSTM), optimal variational mode decomposition (OVMD), and an attention mechanism (AM). The PM2.5 monitoring data are first decomposed into K modal components using OVMD; each component is then predicted individually by the PeepholeLSTM–attention model, and the final prediction is reconstructed from the component forecasts. The proposed model was comprehensively evaluated, through a series of comparative experiments, on PM2.5 concentration monitoring data sets from Guangzhou and Shenzhen in China from 2020 to 2022. Experimental results show that, compared to the single PeepholeLSTM model, the proposed model reduces mean absolute error (MAE) by approximately 39% and root mean square error (RMSE) by 45%, and increases the fitting coefficient (R2) by 0.0457 in Guangzhou; the corresponding improvements in Shenzhen are 45% for MAE, 51% for RMSE, and 0.0765 for R2.
This indicates that the proposed model achieves higher accuracy in predicting PM2.5 concentrations, and the results can provide a basis for quantitative assessment and scientific decision-making for the sustainable development of urban ecological environments.
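The decompose–predict–reconstruct pattern can be sketched generically. Here a moving-average trend/residual split stands in for OVMD, and a naive persistence forecast stands in for the PeepholeLSTM–attention predictor, purely to show how component forecasts are recombined; none of this is the paper's actual decomposition.

```python
import numpy as np

def decompose(series, window=5):
    """Split a series into a smooth trend and a residual (stand-in for OVMD)."""
    kernel = np.ones(window) / window
    trend = np.convolve(series, kernel, mode="same")
    return trend, series - trend

def persistence_forecast(component):
    """Naive one-step forecast: repeat the last value (stand-in for the LSTM)."""
    return component[-1]

rng = np.random.default_rng(4)
t = np.arange(200)
pm25 = 50 + 10 * np.sin(t / 20) + rng.normal(scale=2.0, size=t.size)

trend, residual = decompose(pm25)
# Each component is forecast separately, then the forecasts are summed.
forecast = persistence_forecast(trend) + persistence_forecast(residual)
print(abs(forecast - pm25[-1]) < 1e-9)  # True: components sum back exactly
```

The point of decomposing first is that each component is simpler (smoother or more stationary) than the raw series, so the per-component predictors have an easier task.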
7 pages, 784 KB  
Proceeding Paper
Forecasting PM2.5 Concentrations with Machine Learning: Accuracy, Efficiency, and Public Health Implications
by Kyriakos Ovaliadis, Spyridon Mitropoulos, Vassilios Tsiantos and Ioannis Christakis
Eng. Proc. 2026, 124(1), 36; https://doi.org/10.3390/engproc2026124036 - 16 Feb 2026
Viewed by 121
Abstract
Nowadays, air quality is a major issue, especially in large cities. Particulate matter (PM), especially PM2.5, poses serious health risks to individuals with respiratory conditions, so accurate forecasting of PM levels is crucial to warn vulnerable populations and reduce exposure. Machine learning models can effectively predict PM concentrations from historical data and meteorological conditions such as temperature and humidity, and such predictions can support timely public health interventions and environmental policy decisions. Selecting the optimal machine learning model for time series forecasting requires a careful balance between predictive accuracy and computational efficiency. This study evaluates a number of widely used models, including Random Forest (RF), Long Short-Term Memory (LSTM), Convolutional Neural Network-LSTM (CNN–LSTM), Extreme Gradient Boosting (XGB/HistGradientBoosting), and hybrid approaches (LSTM embeddings + RF), in the context of time series forecasting for particulate matter (PM) concentrations. Performance is assessed using three key error metrics: Mean Squared Error (MSE), Root Mean Squared Error (RMSE), and Mean Absolute Scaled Error (MASE). Additionally, the computational demands and development complexity of each model are analyzed. The results show that CNN–LSTM and hybrid approaches provide high accuracy, while tree-based models are computationally efficient, offering practical options for real-time forecasting systems and a workable compromise between accuracy and efficiency.
(This article belongs to the Proceedings of The 6th International Electronic Conference on Applied Sciences)
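The three error metrics can be computed directly. MASE here follows the usual definition that scales the model's MAE by the in-sample MAE of a naive one-step persistence forecast; the short toy series is illustrative only.

```python
import numpy as np

def mse(y, yhat):
    return float(np.mean((y - yhat) ** 2))

def rmse(y, yhat):
    return float(np.sqrt(mse(y, yhat)))

def mase(y, yhat):
    """MAE scaled by the naive persistence forecast's MAE; values below 1
    mean the model beats simply repeating the previous observation."""
    naive_mae = float(np.mean(np.abs(y[1:] - y[:-1])))
    return float(np.mean(np.abs(y - yhat))) / naive_mae

y = np.array([10.0, 12.0, 11.0, 13.0, 12.0])
yhat = np.concatenate(([y[0]], y[:-1]))   # persistence forecast as a baseline
print(rmse(y, y))                # 0.0 for a perfect forecast
print(round(mase(y, yhat), 3))   # 0.8 for this short toy series
```

Unlike MSE and RMSE, MASE is scale-free, which makes it convenient when comparing forecasts across stations or pollutants with different concentration ranges.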
23 pages, 623 KB  
Article
Radiomics-Driven Hybrid Deep Learning for MRI-Based Prediction of Glioma Grade and 1p/19q Codeletion
by Abdullah Bin Sawad and Muhammad Binsawad
Tomography 2026, 12(2), 25; https://doi.org/10.3390/tomography12020025 - 15 Feb 2026
Viewed by 133
Abstract
Background: Correct preoperative evaluation of glioma grade and molecular profile is a prerequisite for tailored treatment strategies. Specifically, the 1p/19q codeletion status represents a major prognostic and therapeutic marker in low-grade gliomas (LGGs). Nevertheless, its assessment is presently performed through invasive histopathological and genetic studies, underlining the need for non-invasive alternative approaches. Methods: We introduce a non-invasive radiomics framework that combines quantitative MRI features with machine learning (ML) and deep learning (DL) approaches for glioma grading and 1p/19q codeletion status prediction. High-dimensional radiomic features characterizing tumor geometry, intensity, and texture were derived from preoperative MRI-based tumor delineations. Features were normalized and optimized using correlation-based feature selection. Several traditional ML classifiers were compared with DL models, including convolutional neural networks (CNNs), recurrent neural networks (RNNs), and a CNN-Long Short-Term Memory (LSTM) hybrid model tailored to exploit both spatial feature hierarchies and feature correlations. Model validation was conducted using five-fold cross-validation and an independent test dataset, with accuracy, sensitivity, specificity, and area under the receiver operating characteristic curve (AUC) as metrics. Results: Among all the models tested, the hybrid CNN-LSTM model performed best, with an accuracy of 88.1% and an AUC of 0.93, outperforming conventional ML approaches and single-model DL architectures. Explainability analysis showed that the radiomic features of tumor heterogeneity and morphology had the most prominent impact on model performance. Conclusions: These findings indicate that combining radiomic features with hybrid DL models enables non-invasive prediction of glioma grade and 1p/19q codeletion status.
The proposed computational model has the potential to serve as a supplementary approach in precision neuro-oncology.
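Correlation-based feature selection of the kind mentioned above can be sketched as a greedy filter that drops any feature too correlated with one already kept. The 0.9 threshold and the synthetic near-duplicate feature are illustrative assumptions.

```python
import numpy as np

def select_features(X, threshold=0.9):
    """Greedy redundancy filter: keep a feature only if its absolute Pearson
    correlation with every already-kept feature is below the threshold."""
    corr = np.abs(np.corrcoef(X, rowvar=False))
    kept = []
    for j in range(X.shape[1]):
        if all(corr[j, k] < threshold for k in kept):
            kept.append(j)
    return kept

rng = np.random.default_rng(5)
base = rng.normal(size=(100, 1))
X = np.hstack([
    base,                                            # feature 0
    base * 2.0 + 1e-3 * rng.normal(size=(100, 1)),   # near-duplicate of 0
    rng.normal(size=(100, 1)),                       # independent feature
])
print(select_features(X))  # [0, 2]: the redundant copy is dropped
```

Pruning redundant radiomic features this way shrinks the input dimension before the ML/DL classifiers see the data.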
31 pages, 3427 KB  
Article
A Data-Driven Method Based on Feature Engineering and Physics-Constrained LSTM-EKF for Lithium-Ion Battery SOC Estimation
by Yujuan Sun, Shaoyuan You, Fangfang Hu and Jiuyu Du
Batteries 2026, 12(2), 64; https://doi.org/10.3390/batteries12020064 - 14 Feb 2026
Viewed by 188
Abstract
Accurate estimation of the State of Charge (SOC) for lithium-ion batteries is a core function of the Battery Management System (BMS). However, LiFePO4 batteries present specific challenges for SOC estimation due to the characteristic plateau in their open-circuit voltage (OCV) versus SOC relationship. Moreover, data-driven estimation approaches often face significant difficulties stemming from measurement noise and interference, the highly nonlinear internal dynamics of the battery, and the time-varying nature of key battery parameters. To address these issues, this paper proposes a Long Short-Term Memory (LSTM) model integrated with feature engineering, physical constraints, and the Extended Kalman Filter (EKF). First, the model’s temporal perception of the historical charge–discharge states of the battery is enhanced through the fusion of temporal voltage information. Second, a post-processing strategy based on physical laws is designed, utilizing the Particle Swarm Optimization (PSO) algorithm to search for optimal correction factors. Finally, the SOC obtained from the previous steps serves as the observation input to EKF filtering, enabling a probabilistically weighted fusion of the data-driven model output and the EKF to improve the model’s dynamic tracking performance. When applied to SOC estimation of LiFePO4 batteries under various operating conditions and temperatures ranging from 0 °C to 50 °C, the proposed model achieves average Mean Absolute Error (MAE) and Root Mean Square Error (RMSE) as low as 0.46% and 0.56%, respectively. These results demonstrate the model’s excellent robustness, adaptability, and dynamic tracking capability. Additionally, the proposed approach only requires derived features from existing input data without the need for additional sensors, and the model exhibits low memory usage, showing considerable potential for practical BMS implementation. 
Furthermore, this study offers an effective technical pathway for state estimation under a “physical information–data-driven–filter fusion” framework, enabling accurate SOC estimation of lithium-ion batteries across multiple operating scenarios.
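The fusion idea, a model-based SOC prediction corrected by a voltage observation, can be sketched with a scalar EKF step. The linear OCV curve, noise settings, and capacity below are hypothetical stand-ins for the paper's battery model, and the real LiFePO4 OCV plateau makes the actual problem much harder than this toy.

```python
import numpy as np

def ocv(soc):
    """Hypothetical linear open-circuit-voltage curve (V) for illustration."""
    return 3.2 + 0.8 * soc

def ekf_soc_step(soc, p, current_a, dt_s, v_meas, cap_ah=2.0, q=1e-7, r=1e-4):
    # Predict: coulomb counting (discharge current reduces SOC).
    soc_pred = soc - current_a * dt_s / (cap_ah * 3600.0)
    p_pred = p + q
    # Update: linearized measurement model h(soc) = ocv(soc), dh/dsoc = 0.8.
    h = 0.8
    k = p_pred * h / (h * p_pred * h + r)          # Kalman gain
    soc_new = soc_pred + k * (v_meas - ocv(soc_pred))
    return np.clip(soc_new, 0.0, 1.0), (1.0 - k * h) * p_pred

true_soc, est, p = 0.8, 0.6, 0.05                  # start with a biased estimate
for _ in range(200):
    true_soc -= 1.0 * 1.0 / (2.0 * 3600.0)         # 1 A discharge, 1 s steps
    est, p = ekf_soc_step(est, p, 1.0, 1.0, ocv(true_soc))
print(abs(est - true_soc) < 0.01)  # True: the voltage update removes the bias
```

In the paper's pipeline the prediction step is supplied by the constrained LSTM rather than pure coulomb counting, with the EKF fusing the two probabilistically.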
17 pages, 928 KB  
Article
Dynamic Threshold Determination Method for Triggering Critical Rainfall in Mountainous Debris Flow
by Yixian Wang and Na He
Water 2026, 18(4), 484; https://doi.org/10.3390/w18040484 - 13 Feb 2026
Viewed by 173
Abstract
The initiation of debris flows in mountainous areas is dynamically influenced by multiple factors, including rainfall intensity, duration, and antecedent rainfall conditions. Traditional static threshold methods struggle to adapt to these dynamic environmental conditions. To address this issue, this paper proposes a dynamic threshold determination method for the critical rainfall triggering debris flows in mountainous regions. Firstly, high-risk areas are identified based on the frequency ratio model, and the effective rainfall is quantified using the Crozier model. Subsequently, a combination of dynamic variables, such as soil saturation and safety factor, is constructed, and the Jensen–Shannon (JS) divergence is introduced for sensitivity screening to select the most relevant variables. These optimized variables are then fed into an LSTM-TCN (Long Short-Term Memory-Temporal Convolutional Network) framework to extract temporal features and predict the probability of debris flow occurrence. Finally, real-time threshold determination is achieved by integrating the absolute rainfall energy with a dynamic threshold model. Test results demonstrate that this method can effectively quantify the dynamic nature of rainfall across different regions, screen key variables, and achieve threshold determination with high coverage (average of 0.978) and precise interval width (average of 0.023). This approach provides a more accurate and adaptive means of predicting and managing debris flow risks in mountainous areas, enhancing our ability to respond to these natural hazards in a timely and effective manner.
(This article belongs to the Special Issue Intelligent Analysis, Monitoring and Assessment of Debris Flow)
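The Jensen–Shannon divergence used for sensitivity screening can be computed directly from two discrete distributions; it is symmetric and bounded by ln 2 (in nats), which makes it convenient for ranking candidate variables. The example distributions are arbitrary.

```python
import numpy as np

def kl(p, q):
    """Kullback–Leibler divergence in nats (terms with p=0 contribute 0)."""
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

def js_divergence(p, q):
    """Jensen–Shannon divergence: symmetrized KL against the mixture m."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    p, q = p / p.sum(), q / q.sum()        # normalize to valid distributions
    m = 0.5 * (p + q)
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

p = np.array([0.5, 0.3, 0.2])
q = np.array([0.1, 0.4, 0.5])
print(js_divergence(p, p))  # 0.0: identical distributions
```

Screening would compare, for each candidate variable, its distribution under debris-flow versus non-event conditions: the larger the JS divergence, the more discriminative the variable.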
21 pages, 1113 KB  
Article
A Dynamic Weight Deep Reinforcement Learning Approach for SDN Multi-Objective Optimization with Actuator Integration
by Jian Wang, Zhongxu Liu, Xianzhi Cao and Liusong Yang
Actuators 2026, 15(2), 114; https://doi.org/10.3390/act15020114 - 12 Feb 2026
Viewed by 223
Abstract
In recent years, the surge in network traffic has led to a substantial increase in energy consumption, making the construction of green and energy-efficient networks a critical challenge in the field of communications. Software-Defined Networking (SDN), with its centralized control characteristic, provides a new paradigm for the collaborative scheduling of actuators. However, traditional distributed network architectures lack global regulation capabilities, resulting in low resource utilization. Moreover, existing SDN traffic management methods mostly adopt fixed-weight reward functions, which struggle to adapt to the dynamic fluctuation of network traffic and device heterogeneity and fail to meet the real-time and stability requirements of actuators in control scenarios. To address these issues, this study proposes a Dynamic Weight Generation Deep Q-Network (DWG-DQN) framework. By integrating a Long Short-Term Memory (LSTM) network with the SDN actuator scheduling mechanism, the system dynamically generates adaptive weight vectors, enabling real-time collaborative optimization of energy consumption, load balancing, and bandwidth utilization. In fat-tree topology experiments, the proposed method achieves a 12.23% increase in average reward, a 33.93% reduction in energy consumption, a 31.12% improvement in load balancing, and a 24.03% enhancement in bandwidth utilization, and it consistently outperforms the fixed-weight method on key performance indicators. The dynamic weight generation mechanism effectively solves the multi-objective optimization problem of actuators in dynamic network environments, offering a viable solution for the intelligent scheduling of actuators in SDN-based green traffic management.
(This article belongs to the Section Control Systems)
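The dynamic-weight idea, a learned module emitting a normalized weight vector that blends per-objective rewards, can be sketched as follows. The three objectives, the softmax normalization, and the toy "state-dependent" logits are illustrative assumptions, not the DWG-DQN architecture.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def dynamic_reward(state_logits, objective_rewards):
    """Blend per-objective rewards (energy, load balance, bandwidth) with
    weights generated from the current network state, instead of fixing them."""
    w = softmax(state_logits)      # adaptive weight vector, sums to 1
    return float(w @ objective_rewards), w

# Example: congestion shifts the generated weights toward load balancing.
rewards = np.array([0.2, 0.9, 0.5])      # energy, load-balance, bandwidth
calm = np.array([1.0, 0.0, 0.0])         # hypothetical weight-generator outputs
congested = np.array([0.0, 2.0, 0.0])
r_calm, w_calm = dynamic_reward(calm, rewards)
r_cong, w_cong = dynamic_reward(congested, rewards)
print(w_cong[1] > w_calm[1])  # True: weight shifts to the stressed objective
```

In the full framework an LSTM would produce the logits from recent traffic history, so the reward shaping tracks the network's current operating regime.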
28 pages, 21245 KB  
Article
A Comparative Study of OCR Architectures for Korean License Plate Recognition: CNN–RNN-Based Models and MobileNetV3–Transformer-Based Models
by Seungju Lee and Gooman Park
Sensors 2026, 26(4), 1208; https://doi.org/10.3390/s26041208 - 12 Feb 2026
Viewed by 212
Abstract
This paper presents a systematic comparative study of optical character recognition (OCR) architectures for Korean license plate recognition under identical detection conditions. Although recent automatic license plate recognition (ALPR) systems increasingly adopt Transformer-based decoders, it remains unclear whether performance differences arise primarily from sequence modeling strategies or from backbone feature representations. To address this issue, we employ a unified YOLOv12-based license plate detector and evaluate multiple OCR configurations, including a CNN with an Attention-LSTM decoder and a MobileNetV3 with a Transformer decoder. To ensure a fair comparison, a controlled ablation study is conducted in which the CNN backbone is fixed to ResNet-18 while varying only the sequence decoder. Experiments are performed on both static image datasets and tracking-based sequential datasets, assessing recognition accuracy, error characteristics, and processing speed across GPU and embedded platforms. The results demonstrate that the effectiveness of sequence decoders is highly dataset-dependent and strongly influenced by feature quality and region-of-interest (ROI) stability. Quantitative analysis further shows that tracking-induced error accumulation dominates OCR performance in sequential recognition scenarios. Moreover, Korean license plate–specific error patterns reveal failure modes not captured by generic OCR benchmarks. Finally, experiments on embedded platforms indicate that Transformer-based OCR models introduce significant computational and memory overhead, limiting their suitability for real-time deployment. These findings suggest that robust license plate recognition requires joint consideration of detection, tracking, and recognition rather than isolated optimization of OCR architectures.
(This article belongs to the Section Sensing and Imaging)

23 pages, 7498 KB  
Article
Optimizing Power Control in Generation Units: LSTM-Based Machine Learning for Enhanced Stability in Virtual Synchronous Generators
by Ahmed Khamees and Hüseyin Altınkaya
Electronics 2026, 15(4), 791; https://doi.org/10.3390/electronics15040791 - 12 Feb 2026
Abstract
The integration of inverter-based generation units, such as photovoltaic systems, wind turbines, and vehicle-to-grid (V2G) technologies, has introduced new challenges in maintaining power and frequency stability in modern power systems. Virtual Synchronous Generators (VSGs) have emerged as a promising solution to enhance system stability; however, existing control methods often lack the robustness and flexibility needed to address deliberate and unplanned outages effectively. This paper presents a novel approach for optimizing power control in generation units using a Long Short-Term Memory (LSTM)-based machine learning method. The proposed LSTM-based controller provides a fast, real-time response, ensuring robust and flexible performance under varying operational conditions. Unlike traditional controllers, the proposed method effectively handles the nonlinearities and uncertainties associated with inverter-based units. Additionally, it balances the technical and economic aspects of power system operation by minimizing oscillations and optimizing resource utilization. The proposed approach is benchmarked through a detailed simulation-based comparative analysis against a conventional linear Model Predictive Control strategy under identical operating conditions. Simulation results indicate that the proposed controller reduces frequency deviations by up to 66.7%, voltage deviations by 62.5%, and total operational cost by approximately 11.3%, while achieving a nearly 90% faster dynamic response, validating its effectiveness for modern power systems.
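Underlying any VSG controller, learned or conventional, is the emulated swing equation. A minimal per-unit sketch (the inertia H, damping D, and step power imbalance dP below are illustrative values, not taken from the paper) shows how the damping term bounds the steady-state frequency deviation:

```python
def vsg_response(H=4.0, D=20.0, dP=-0.1, dt=1e-3, T=5.0):
    """Euler-integrate the per-unit VSG swing dynamics 2H*d(dw)/dt = dP - D*dw
    and return the frequency-deviation trajectory after a step imbalance dP."""
    dw, traj = 0.0, []
    for _ in range(int(T / dt)):
        dw += dt * (dP - D * dw) / (2 * H)
        traj.append(dw)
    return traj

traj = vsg_response()
# The deviation settles toward dP / D: a larger effective damping D (which an
# adaptive LSTM-style controller could tune online) yields a smaller excursion.
```

This fixed-parameter model is only the baseline; the paper's contribution is replacing hand-tuned control of such dynamics with a learned policy.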

18 pages, 4153 KB  
Article
DC Series Arc Fault Detection in Photovoltaic Systems Using a Hybrid WDCNN-BiLSTM-CA Model
by Liang Zhou, Manman Hou, Zheng Zeng, Jingyi Zhao, Chi-Min Shu and Huiling Jiang
Fire 2026, 9(2), 84; https://doi.org/10.3390/fire9020084 - 12 Feb 2026
Abstract
Arc fault is the dominant cause of fire in photovoltaic (PV) systems, making its accurate identification crucial for PV fire prevention. This study investigates the influence of voltage (200, 300, and 400 V) and current (3, 5, 7, 9, and 11 A) on the DC series arc fault characteristics in PV systems obtained through experimental analysis. The results show that voltage variation has a negligible impact on arc fault behavior, while higher current levels substantially increase noise in the arc fault signals. To effectively mitigate this noise, this paper proposes a denoising method that combines an improved moss growth optimization algorithm (IMGO) with improved complete ensemble empirical mode decomposition with adaptive noise (ICEEMDAN). It is found that the IMGO-ICEEMDAN denoising algorithm can effectively suppress noise in current signals, broaden characteristic frequency bands, and improve the discernibility of arc features. Subsequently, an integrated multi-scale spatiotemporal model is developed to extract arc fault features from the denoised signals. The model employs wide deep convolutional neural networks (WDCNNs) and bidirectional long short-term memory (BiLSTM) networks for parallel feature extraction, supplemented by a cross-attention (CA) module to optimize feature integration. The proposed WDCNN-BiLSTM-CA model ultimately achieves a detection accuracy of 99.89%, demonstrating superior detection performance over conventional methods such as CNN-GRU and 1DCNN-LSTM models. This work provides a reliable framework for arc fault detection and fire risk reduction in PV systems.
(This article belongs to the Special Issue Photovoltaic and Electrical Fires: 2nd Edition)
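The IMGO-ICEEMDAN decomposition itself is beyond a few lines, but its purpose, stripping measurement noise from an arc-current waveform before feature extraction, can be illustrated with a plain moving-average smoother on a synthetic signal. The 100 Hz fundamental, sampling rate, and noise level below are illustrative assumptions, not the paper's experimental settings:

```python
import math, random

def moving_average(x, w=7):
    """Plain FIR smoother, standing in here for the paper's adaptive denoiser."""
    half = w // 2
    out = []
    for i in range(len(x)):
        win = x[max(0, i - half): i + half + 1]
        out.append(sum(win) / len(win))
    return out

def noise_power(x, clean):
    """Mean squared deviation from the clean reference signal."""
    return sum((a - b) ** 2 for a, b in zip(x, clean)) / len(x)

random.seed(0)
t = [i / 5000 for i in range(2000)]                    # 5 kHz sampling, 0.4 s
clean = [5.0 + 0.5 * math.sin(2 * math.pi * 100 * ti) for ti in t]
noisy = [c + random.gauss(0, 0.2) for c in clean]      # white measurement noise
denoised = moving_average(noisy)
```

With these settings the smoother cuts the residual noise power severalfold while barely attenuating the 100 Hz component; a mode-decomposition denoiser aims at the same trade-off with far better band selectivity, which matters when the arc signature itself is broadband.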

26 pages, 1541 KB  
Article
A Long Short-Term Memory with Deep Q-Learning and Bayesian Optimization Control Framework for Robust Position Regulation of Uncertain Electro-Hydraulic Actuators
by Duc Thanh Phan, Hoai Vu Anh Truong and Kyoung Kwan Ahn
Mathematics 2026, 14(4), 640; https://doi.org/10.3390/math14040640 - 11 Feb 2026
Abstract
The existence of friction, flow–pressure coupling, load variations, internal leakage, and other fluidic nonlinearities makes it challenging to design classical model-based controllers for servo-valve-driven electro-hydraulic actuators (EHAs). To address these issues and achieve high-precision output tracking, this paper proposes a learning-based control framework that integrates Long Short-Term Memory with Deep Q-Learning and Bayesian Optimization (BO–LSTM–DQN) for high-precision position regulation of servo-valve-driven EHAs. In this framework, the LSTM augments Q-learning with temporal memory, allowing it to infer hidden dynamics from measured sequences. Meanwhile, Bayesian Optimization automatically tunes key hyperparameters to improve convergence and policy stability, without requiring manual trial-and-error. Additionally, a constraint-aware reward function is formulated to encode realistic servo-valve operational limits and satisfy motion stability requirements. The effectiveness of the proposed control strategy is verified through comparative simulations with PID– and BO–DQN-based controllers under different operating scenarios, subject to load disturbance and internal leakage. Furthermore, to evaluate the robustness of the proposed controller against parametric uncertainties, extensive Monte Carlo simulations are conducted with simultaneous variations of up to 50% in five key system parameters. The results demonstrate that the proposed BO–LSTM–DQN framework achieves a significant reduction in Root Mean Square Error (RMSE), by up to 51.79% compared with the conventional PID, and maintains superior stability over the optimized DQN baselines, confirming its effectiveness for real-world EHA applications under extreme operating conditions.
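The constraint-aware reward mentioned above can be sketched as a quadratic tracking cost plus penalties for exceeding servo-valve magnitude and slew-rate limits. The limit values and weights below are hypothetical, since the abstract does not state the paper's actual reward coefficients:

```python
def constraint_aware_reward(err, u, u_prev,
                            u_max=1.0, du_max=0.2,
                            w_err=1.0, w_sat=5.0, w_rate=5.0):
    """Sketch of a constraint-aware reward for EHA position regulation:
    a quadratic tracking cost plus penalties when the (normalized) servo-valve
    command exceeds its magnitude or slew-rate limits. Weights are illustrative."""
    r = -w_err * err ** 2
    if abs(u) > u_max:                  # spool-command saturation
        r -= w_sat * (abs(u) - u_max) ** 2
    if abs(u - u_prev) > du_max:        # slew-rate (valve response) limit
        r -= w_rate * (abs(u - u_prev) - du_max) ** 2
    return r
```

Shaping the reward this way lets the Q-learning agent discover that limit-violating actions are costly before they are ever clipped by the plant, which tends to produce smoother valve commands than penalizing tracking error alone.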
