Search Results (7,197)

Search Parameters:
Keywords = model learning for control

26 pages, 1559 KB  
Review
AI-Based Modeling and Optimization of AC/DC Power Systems
by Izabela Rojek, Dariusz Mikołajewski, Piotr Prokopowicz and Maciej Piechowiak
Energies 2025, 18(21), 5660; https://doi.org/10.3390/en18215660 - 28 Oct 2025
Abstract
This review examined the latest advances in the modeling, analysis, and control of AC/DC power systems based on artificial intelligence (AI), in which renewable energy sources play a significant role. Integrating variable and intermittent renewable energy sources (such as sunlight and wind power) poses a major challenge to maintaining system stability, reliability, and optimal performance. Traditional modeling and control methods are increasingly inadequate for capturing the complex, nonlinear, and dynamic behavior of modern hybrid AC/DC systems. Specialized AI techniques, such as machine learning (ML), deep learning (DL), and hybrid models, have become important tools for meeting these challenges. This article presents a comprehensive overview of AI-based methodologies for system identification, fault diagnosis, predictive control, and real-time optimization. Particular attention is paid to the role of AI in increasing grid resilience, implementing adaptive control strategies, and supporting decision-making under uncertainty. The review also highlights key breakthroughs in AI algorithms, including federated learning and physics-based neural networks, which offer scalable and interpretable solutions. Furthermore, the article examines current limitations and open research problems related to data quality, computational requirements, and model generalizability. Case studies of smart grids and comparative scenarios demonstrate the practical effectiveness of AI-based approaches in real-world energy system applications. Finally, the review proposes future directions to narrow the gap between AI research and industrial application in next-generation smart grids.
18 pages, 2721 KB  
Article
Bayesian Network-Based Earth-Rock Dam Breach Probability Analysis Integrating Machine Learning
by Zongkun Li, Qing Shi, Heqiang Sun, Yingjian Zhou, Fuheng Ma, Jianyou Wang and Pieter van Gelder
Water 2025, 17(21), 3085; https://doi.org/10.3390/w17213085 - 28 Oct 2025
Abstract
Earth-rock dams are critical components of hydraulic engineering, undertaking core functions such as flood control and disaster mitigation. However, a potential dam breach poses a severe threat to regional socioeconomic stability and ecological security. To address the limitations of traditional Bayesian networks (BNs) in capturing the complex nonlinear coupling and dynamic mutual interactions among risk factors, BNs were integrated with machine learning techniques. Based on a collected dataset of earth-rock dam breach case samples, the PC structure learning algorithm was employed to preliminarily uncover risk associations. The dataset was compiled from public databases, including the U.S. Army Corps of Engineers (USACE) and the Dam Safety Management Center of the Ministry of Water Resources of China, as well as engineering reports from provincial water conservancy departments in China and Europe. Expert knowledge was integrated to optimize the network topology, thereby correcting causal relationships inconsistent with engineering mechanisms. The results indicate that the established hybrid model achieved AUC, accuracy, and F1-score values of 0.887, 0.895, and 0.899, respectively, significantly outperforming the data-driven model G1. Forward inference identified the key drivers elevating breach risk; conversely, backward inference revealed that overtopping was the direct failure mode with the highest probability of occurrence and the greatest contribution. The integration of data-driven approaches and domain knowledge provides theoretical and technical support for the probabilistic quantification of earth-rock dam breach and for risk prevention and control decision-making.
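As a rough illustration of the structure-learning step described above, here is a minimal sketch built on pgmpy's PC estimator (an assumed tooling choice; the paper does not name a library). The file name, column contents, and test settings are hypothetical.

```python
# Sketch: learn a candidate BN structure over dam-breach risk factors,
# then hand the result to experts for mechanism-based correction.
import pandas as pd
from pgmpy.estimators import PC  # constraint-based PC structure learning

data = pd.read_csv("dam_breach_cases.csv")  # hypothetical case-sample table

# Conditional-independence tests yield a preliminary risk-association graph.
learned = PC(data).estimate(ci_test="chi_square", significance_level=0.05)
print(sorted(learned.edges()))

# Expert knowledge then corrects edges inconsistent with engineering
# mechanisms, e.g., orienting "Overtopping -> Breach" rather than the reverse.
```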
(This article belongs to the Section Hydraulics and Hydrodynamics)

20 pages, 3937 KB  
Article
Prediction and Control of Hovercraft Cushion Pressure Based on Deep Reinforcement Learning
by Hua Zhou, Lijing Dong and Yuanhui Wang
J. Mar. Sci. Eng. 2025, 13(11), 2058; https://doi.org/10.3390/jmse13112058 - 28 Oct 2025
Abstract
This paper proposes a deep reinforcement learning-based predictive control scheme to address cushion pressure prediction and stabilization in hovercraft systems subject to modeling complexity, dynamic instability, and system delay. Notably, this work introduces a long short-term memory (LSTM) network with a temporal sliding window specifically designed for hovercraft cushion pressure forecasting. The model accurately captures the dynamic coupling between fan speed and chamber pressure while explicitly incorporating inherent control lag during airflow transmission. Furthermore, a novel adaptive behavior cloning mechanism is embedded into the twin delayed deep deterministic policy gradient with behavior cloning (TD3-BC) framework, which dynamically balances reinforcement learning (RL) objectives and historical policy constraints through an auto-adjusted weighting coefficient. This design effectively mitigates distribution shift and policy degradation in offline reinforcement learning, ensuring both training stability and performance beyond the behavior policy. By integrating the LSTM prediction model with the adaptive TD3-BC algorithm, a fully data-driven control architecture is established. Finally, simulation results demonstrate that the proposed method achieves high accuracy in cushion pressure tracking, significantly improves motion stability, and extends the operational lifespan of lift fans by reducing rotational speed fluctuations.
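For context, the sketch below shows the standard TD3-BC actor objective that such an adaptive mechanism builds on: a coefficient derived from the critic's value scale balances the RL term against behavior cloning. This is an illustrative reconstruction; the paper's auto-adjusted weighting rule may differ in detail.

```python
# Minimal TD3-BC-style actor loss: -lambda * Q(s, pi(s)) + BC regularizer.
import torch

def actor_loss(actor, critic, states, dataset_actions, alpha=2.5):
    pi = actor(states)
    q = critic(states, pi)
    # Normalize the RL term by the mean |Q| so neither objective dominates
    # as value estimates grow -- the knob an adaptive scheme would tune.
    lam = alpha / q.abs().mean().detach()
    bc = torch.nn.functional.mse_loss(pi, dataset_actions)
    return -lam * q.mean() + bc

# Toy stand-ins to show shapes; real networks come from the training setup.
actor = torch.nn.Linear(4, 2)
critic = lambda s, a: torch.cat([s, a], dim=-1) @ torch.ones(6, 1)
s, a = torch.randn(8, 4), torch.randn(8, 2)
print(actor_loss(actor, critic, s, a))
```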
(This article belongs to the Section Ocean Engineering)

14 pages, 3359 KB  
Article
Design Principles and Impact of a Learning Analytics Dashboard: Evidence from a Randomized MOOC Experiment
by Inma Borrella and Eva Ponce-Cueto
Appl. Sci. 2025, 15(21), 11493; https://doi.org/10.3390/app152111493 - 28 Oct 2025
Abstract
Learning Analytics Dashboards (LADs) are increasingly deployed to support self-regulated learning in online courses. Yet many existing dashboards lack strong theoretical grounding, contextual alignment, or actionable feedback, and some designs have been shown to inadvertently discourage learners through excessive social comparison or high inference costs. In this study, we designed and evaluated an LAD grounded in the COPES model of self-regulated learning and tailored to a credit-bearing Massive Open Online Course (MOOC) using a data-driven approach. We conducted a randomized controlled trial with 8745 learners, comparing a control group, a dashboard without feedback, and a dashboard with ARCS-framed actionable feedback. The results showed that the dashboard with feedback significantly increased learners' likelihood of verification (i.e., paying for the certification track), with mixed effects on engagement and no measurable impact on final grades. These findings suggest that dashboards are not uniformly beneficial: while feedback-supported LADs can enhance motivation and persistence, dashboards that lack interpretive support may impose cognitive burdens without improving outcomes. This study contributes to the literature on learning analytics by (1) articulating design principles for theoretically and contextually grounded LADs and (2) providing experimental evidence on their impact in authentic MOOC settings.
(This article belongs to the Special Issue Applications of Digital Technology and AI in Educational Settings)

17 pages, 980 KB  
Article
An Adaptive Learning Algorithm Based on Spiking Neural Network for Global Optimization
by Rui-Xuan Wang and Yu-Xuan Chen
Symmetry 2025, 17(11), 1814; https://doi.org/10.3390/sym17111814 - 28 Oct 2025
Abstract
The optimal computing ability of spiking neural networks (SNNs) depends mainly on the connection weights of their synapses and the thresholds that control spiking. To perform optimization over different objective functions, the connection weights must be modified adaptively and the thresholds made dynamically self-learning. However, constructing an adaptive learning algorithm for spiking neural networks is very difficult because the neuron spiking process is discontinuous, a fundamental obstacle in this field. In this paper, an efficient adaptive learning algorithm for spiking neural networks is proposed, which adaptively adjusts the synaptic connection weights through a learning factor and adjusts the spiking probability through self-organizing learning of a dynamic threshold, so as to achieve automatic global search optimization. The algorithm is applied to global optimization tasks, and the experimental results show that it has good stability and learning ability and is effective in dealing with complex multi-objective optimization problems over spatiotemporal spike patterns. Moreover, the proposed framework explicitly leverages problem and model symmetries. In Traveling Salesman Problems, distance symmetry (d(i, j) = d(j, i)) and tour permutation symmetry are preserved by our spike-train-based similarity and energy updates, which do not depend on node labels. Together with the homogeneous neuron dynamics and balanced excitatory–inhibitory populations, these symmetry-aware properties reduce the effective search space and enhance convergence stability.
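To make the dynamic-threshold idea concrete, here is a toy sketch (not the authors' algorithm): a leaky integrate-and-fire neuron whose threshold rises after each spike and decays back toward a baseline, so the spiking probability self-adjusts over time.

```python
import numpy as np

def simulate(inputs, tau_v=10.0, tau_th=50.0, th0=1.0, dth=0.5):
    v, th, spikes = 0.0, th0, []
    for x in inputs:
        v += (-v + x) / tau_v        # leaky membrane integration
        th += (th0 - th) / tau_th    # threshold decays toward baseline
        if v >= th:
            spikes.append(1)
            v = 0.0                  # reset after a spike
            th += dth                # raise threshold (homeostatic damping)
        else:
            spikes.append(0)
    return np.array(spikes)

rng = np.random.default_rng(0)
print(simulate(rng.random(200)).mean())  # fraction of steps that spiked
```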
(This article belongs to the Section Computer)

38 pages, 9358 KB  
Article
Generation of a Multi-Class IoT Malware Dataset for Cybersecurity
by Mazdak Maghanaki, Soraya Keramati, F. Frank Chen and Mohammad Shahin
Electronics 2025, 14(21), 4196; https://doi.org/10.3390/electronics14214196 - 27 Oct 2025
Abstract
This study introduces a modular, behaviorally curated malware dataset suite consisting of eight independent sets, each specifically designed to represent a single malware class: Trojan, Mirai (botnet), ransomware, rootkit, worm, spyware, keylogger, and virus. In contrast to earlier approaches that aggregate all malware into large, monolithic collections, this work emphasizes the selection of features unique to each malware type. Feature selection was guided by established domain knowledge and detailed behavioral telemetry obtained through sandbox execution and a subsequent report analysis on the AnyRun platform. The datasets were compiled from two primary sources: (i) the AnyRun platform, which hosts more than two million samples and provides controlled, instrumented sandbox execution for malware, and (ii) publicly available GitHub repositories. To ensure data integrity and prevent cross-contamination of behavioral logs, each sample was executed in complete isolation, allowing for the precise capture of both static attributes and dynamic runtime behavior. Feature construction was informed by operational signatures characteristic of each malware category, ensuring that the datasets accurately represent the tactics, techniques, and procedures distinguishing one class from another. This targeted design enabled the identification of subtle but significant behavioral markers that are frequently overlooked in aggregated datasets. Each dataset was balanced to include benign, suspicious, and malicious samples, thereby supporting the training and evaluation of machine learning models while minimizing bias from disproportionate class representation. Across the full suite, 10,000 samples and 171 carefully curated features were included. This constitutes one of the first dataset collections intentionally developed to capture the behavioral diversity of multiple malware categories within the context of Internet of Things (IoT) security, representing a deliberate effort to bridge the gap between generalized malware corpora and class-specific behavioral modeling.

27 pages, 3834 KB  
Article
An Intelligent Framework for Energy Forecasting and Management in Photovoltaic-Integrated Smart Homes in Tunisia with V2H Support Using LSTM Optimized by the Harris Hawks Algorithm
by Aymen Mnassri, Nouha Mansouri, Sihem Nasri, Abderezak Lashab, Juan C. Vasquez and Adnane Cherif
Energies 2025, 18(21), 5635; https://doi.org/10.3390/en18215635 - 27 Oct 2025
Abstract
This paper presents an intelligent hybrid framework for short-term energy consumption forecasting and real-time energy management in photovoltaic (PV)-integrated smart homes with Vehicle-to-Home (V2H) systems, tailored to the Tunisian context. The forecasting module employs an Attention-based Long Short-Term Memory (LSTM) neural network, whose hyperparameters (learning rate, hidden units, temporal window size) are optimized using the Harris Hawks Optimization (HHO) algorithm. Simulation results show that the proposed LSTM-HHO model achieves a Root Mean Square Error (RMSE) of 269 Wh, a Mean Absolute Error (MAE) of 187 Wh, and a Mean Absolute Percentage Error (MAPE) of 9.43%, with R² = 0.97, substantially outperforming conventional LSTM (RMSE: 945 Wh, MAPE: 51.05%) and LSTM-PSO (RMSE: 586 Wh, MAPE: 28.72%). These accurate forecasts are exploited by the Energy Management System (EMS) to optimize energy flows through dynamic appliance scheduling, HVAC load shifting, and coordinated operation of home and EV batteries. Compared with baseline operation, PV self-consumption increased by 18.6%, grid reliance decreased by 25%, and household energy costs were reduced by 17.3%. Cost savings are achieved via predictive and adaptive control that prioritizes PV utilization, shifts flexible loads to surplus periods, and hierarchically manages distributed storage (the home battery for short-term balancing, the EV battery for extended deficits). Overall, the proposed LSTM-HHO-based EMS provides a practical and effective pathway toward smart, sustainable, and cost-efficient residential energy systems, contributing directly to Tunisia's energy transition goals.
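For reference, the error measures quoted above follow their standard definitions; a minimal sketch (the arrays are placeholders for actual and forecast consumption in Wh):

```python
import numpy as np

def forecast_metrics(y_true, y_pred):
    err = y_true - y_pred
    rmse = np.sqrt(np.mean(err ** 2))             # Root Mean Square Error
    mae = np.mean(np.abs(err))                    # Mean Absolute Error
    mape = 100 * np.mean(np.abs(err / y_true))    # MAPE; assumes no zeros
    r2 = 1 - np.sum(err ** 2) / np.sum((y_true - y_true.mean()) ** 2)
    return rmse, mae, mape, r2

y_true = np.array([2100.0, 1850.0, 2400.0, 1990.0])
y_pred = np.array([2050.0, 1900.0, 2300.0, 2010.0])
print(forecast_metrics(y_true, y_pred))
```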

20 pages, 3577 KB  
Article
Hyperspectral Remote Sensing and Artificial Intelligence for High-Resolution Soil Moisture Prediction
by Ki-Sung Kim, Junwon Lee, Jeongjun Park, Gigwon Hong and Kicheol Lee
Water 2025, 17(21), 3069; https://doi.org/10.3390/w17213069 - 27 Oct 2025
Abstract
Reliable field estimation of soil moisture supports hydrology and water resources management. This study develops a drone-based hyperspectral approach in which visible and near-infrared reflectance is paired one-to-one with gravimetric water content measured by oven drying, yielding 1000 matched samples. After standardization, outlier control, ranked wavelength selection, and light feature engineering, several predictors were evaluated. Conventional machine learning methods, including simple and multiple regression and tree-based ensembles, were limited by band collinearity and piecewise approximations and therefore failed to meet the accuracy target. Gradient boosting reached the target but exhibited different trade-offs in variable sensitivity. An artificial neural network with three hidden layers, rectified linear unit activations, and dropout was trained using a feature-count sweep and early stopping. With ten predictors, the model achieved a coefficient of determination of 0.9557, demonstrating accurate mapping from hyperspectral reflectance to gravimetric water content and providing a reproducible framework suitable for larger, multi-date acquisitions and operational decision support.
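A minimal sketch of the kind of regressor described above: three hidden layers with ReLU and dropout, trained with early stopping on a validation split. Layer widths, dropout rate, and patience are assumptions; the abstract does not specify them.

```python
import torch
import torch.nn as nn

def make_model(n_features, width=64, p_drop=0.2):
    layers, d = [], n_features
    for _ in range(3):                       # three hidden layers
        layers += [nn.Linear(d, width), nn.ReLU(), nn.Dropout(p_drop)]
        d = width
    layers.append(nn.Linear(d, 1))           # gravimetric water content
    return nn.Sequential(*layers)

def train(model, Xtr, ytr, Xva, yva, patience=20, max_epochs=2000):
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    best, best_state, wait = float("inf"), None, 0
    for _ in range(max_epochs):
        model.train()
        opt.zero_grad()
        nn.functional.mse_loss(model(Xtr).squeeze(-1), ytr).backward()
        opt.step()
        model.eval()
        with torch.no_grad():
            val = nn.functional.mse_loss(model(Xva).squeeze(-1), yva).item()
        if val < best:                        # early-stopping bookkeeping
            best, wait = val, 0
            best_state = {k: v.detach().clone()
                          for k, v in model.state_dict().items()}
        else:
            wait += 1
            if wait >= patience:
                break
    model.load_state_dict(best_state)
    return model
```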

22 pages, 2704 KB  
Article
Cross-Crop Transferability of Machine Learning Models for Early Stem Rust Detection in Wheat and Barley Using Hyperspectral Imaging
by Anton Terentev, Daria Kuznetsova, Alexander Fedotov, Olga Baranova and Danila Eremenko
Plants 2025, 14(21), 3265; https://doi.org/10.3390/plants14213265 - 25 Oct 2025
Abstract
Early plant disease detection is crucial for sustainable crop production and food security. Stem rust, caused by Puccinia graminis f. sp. tritici, poses a major threat to wheat and barley. This study evaluates the feasibility of using hyperspectral imaging and machine learning for early detection of stem rust and examines the cross-crop transferability of diagnostic models. Hyperspectral datasets of wheat (Triticum aestivum L.) and barley (Hordeum vulgare L.) were collected under controlled conditions, before visible symptoms appeared. Multi-stage preprocessing, including spectral normalization and standardization, was applied to enhance data quality. Feature engineering focused on spectral curve morphology using first-order derivatives, categorical transformations, and extrema-based descriptors. Models based on Support Vector Machines, Logistic Regression, and Light Gradient Boosting Machine were optimized through Bayesian search. The best-performing feature set achieved F1-scores up to 0.962 on wheat and 0.94 on barley. Cross-crop transferability was evaluated using zero-shot cross-domain validation. High model transferability was confirmed, with F1 > 0.94 and minimal false negatives (<2%), indicating the universality of spectral patterns of stem rust. Experiments were conducted under controlled laboratory conditions; therefore, direct field transferability may be limited. These findings demonstrate that hyperspectral imaging with robust preprocessing and feature engineering enables early diagnostics of rust diseases in cereal crops.
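The zero-shot cross-domain check described above amounts to fitting on one crop and evaluating unchanged on the other; a minimal sketch (file names hypothetical, first-derivative features per the paper's description):

```python
import numpy as np
from sklearn.metrics import f1_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

Xw, yw = np.load("wheat_spectra.npy"), np.load("wheat_labels.npy")
Xb, yb = np.load("barley_spectra.npy"), np.load("barley_labels.npy")

# Spectral-curve morphology via the first-order derivative along wavelength.
dXw, dXb = np.gradient(Xw, axis=1), np.gradient(Xb, axis=1)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0))
clf.fit(dXw, yw)                                         # train on wheat only
print("cross-crop F1:", f1_score(yb, clf.predict(dXb)))  # zero-shot on barley
```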
(This article belongs to the Special Issue Application of Optical and Imaging Systems to Plants)

42 pages, 4303 KB  
Systematic Review
The Road to Autonomy: A Systematic Review Through AI in Autonomous Vehicles
by Adrian Domenteanu, Paul Diaconu, Margareta-Stela Florescu and Camelia Delcea
Electronics 2025, 14(21), 4174; https://doi.org/10.3390/electronics14214174 - 25 Oct 2025
Abstract
In the last decade, the incorporation of Artificial Intelligence (AI) into autonomous vehicles (AVs) has transformed transportation, mobility, and smart mobility systems. The present study provides a systematic review of global trends, applications, and challenges at the intersection of AI, including Machine Learning (ML) and Deep Learning (DL), and autonomous vehicle technologies. Using data extracted from Clarivate Analytics' Web of Science Core Collection and a set of specific keywords related to both AI and autonomous (electric) vehicles, this paper identifies the themes present in the scientific literature using thematic maps and thematic map evolution analysis. Furthermore, the research topics are identified using both thematic maps and Latent Dirichlet Allocation (LDA) and BERTopic, offering a more faceted insight into the research field: LDA enables the probabilistic discovery of high-level research themes, while BERTopic, based on transformer-based language models, captures deeper semantic patterns and emerging topics over time. This approach offers richer insights into the systematic review analysis, while comparing the results obtained through the various methods provides a better overview of the themes associated with the field of AI in autonomous vehicles. As a result, a strong correspondence can be observed between core topics such as object detection, driving models, control, safety, cybersecurity, and system vulnerabilities. The findings offer a roadmap for researchers and industry practitioners by outlining critical gaps and discussing opportunities for future exploration.
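To illustrate the LDA half of that pairing, here is a minimal sketch of probabilistic topic discovery over a toy corpus (the real input would be the Web of Science abstracts; the topic count is a placeholder):

```python
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

docs = ["object detection for autonomous vehicles",
        "deep reinforcement learning for vehicle control",
        "cybersecurity threats to connected autonomous vehicles"]

X = CountVectorizer(stop_words="english").fit_transform(docs)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)
print(lda.transform(X))   # per-document topic mixture
```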

29 pages, 23797 KB  
Article
Tone Mapping of HDR Images via Meta-Guided Bayesian Optimization and Virtual Diffraction Modeling
by Deju Huang, Xifeng Zheng, Jingxu Li, Ran Zhan, Jiachang Dong, Yuanyi Wen, Xinyue Mao, Yufeng Chen and Yu Chen
Sensors 2025, 25(21), 6577; https://doi.org/10.3390/s25216577 - 25 Oct 2025
Abstract
This paper proposes a novel image tone-mapping framework that incorporates meta-learning, a psychophysical model, Bayesian optimization, and light-field virtual diffraction. First, we formalize the virtual diffraction process as a mathematical operator defined in the frequency domain to reconstruct high-dynamic-range (HDR) images through phase modulation, enabling precise control of image details and contrast. In parallel, we apply the Stevens power law to simulate the nonlinear luminance perception of the human visual system, thereby adjusting the overall brightness distribution of the HDR image and improving the visual experience. Unlike existing methods that primarily emphasize structural fidelity, the proposed method strikes a balance between perceptual fidelity and visual naturalness. Second, an adaptive parameter tuning system based on Bayesian optimization is developed to optimize the Tone Mapping Quality Index (TMQI), quantifying uncertainty with probabilistic models so that the global optimum can be approximated in fewer evaluations. Furthermore, we propose a task-distribution-oriented meta-learning framework: a meta-feature space based on image statistics is constructed, and task clustering is combined with a gated meta-learner to rapidly predict initial parameters. This approach significantly enhances the robustness of the algorithm in generalizing to diverse HDR content and effectively mitigates the cold-start problem in the early stage of Bayesian optimization, thereby accelerating the convergence of the overall optimization process. Experimental results demonstrate that the proposed method substantially outperforms state-of-the-art tone-mapping algorithms across multiple benchmark datasets, with an average improvement of up to 27% in naturalness. Furthermore, the meta-learning-guided Bayesian optimization achieves two- to five-fold faster convergence. In the trade-off between computational time and performance, the proposed method consistently dominates the Pareto frontier, achieving high-quality results and efficient convergence at a low computational cost.
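As a worked illustration of the Stevens power-law step: perceived brightness grows roughly as a power of luminance, so a power curve compresses HDR luminance toward the display range. The exponent below is a common textbook value for brightness, not the paper's tuned setting.

```python
import numpy as np

def stevens_tone_curve(luminance, exponent=0.33):
    L = luminance / luminance.max()   # normalize luminance to [0, 1]
    return L ** exponent              # perceptual (power-law) compression

hdr = np.random.lognormal(mean=0.0, sigma=2.0, size=(4, 4))  # toy HDR patch
print(stevens_tone_curve(hdr))
```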
(This article belongs to the Section Sensing and Imaging)

29 pages, 2242 KB  
Systematic Review
Artificial Intelligence for Optimizing Solar Power Systems with Integrated Storage: A Critical Review of Techniques, Challenges, and Emerging Trends
by Raphael I. Areola, Abayomi A. Adebiyi and Katleho Moloi
Electricity 2025, 6(4), 60; https://doi.org/10.3390/electricity6040060 - 25 Oct 2025
Abstract
The global transition toward sustainable energy has significantly accelerated the deployment of solar power systems. Yet the inherent variability of solar energy continues to present considerable challenges in ensuring its stable and efficient integration into modern power grids. As the demand for clean and dependable energy sources intensifies, the integration of artificial intelligence (AI) with solar systems, particularly those coupled with energy storage, has emerged as a promising and increasingly vital solution. This review explores the practical applications of machine learning (ML), deep learning (DL), fuzzy logic, and emerging generative AI models, focusing on their roles in areas such as solar irradiance forecasting, energy management, fault detection, and overall operational optimisation. Alongside these advancements, the review also addresses persistent challenges, including data limitations, difficulties in model generalization, and the integration of AI in real-time control scenarios. We included peer-reviewed journal articles published between 2015 and 2025 that apply AI methods to PV + ESS, with empirical evaluation. We excluded studies lacking evaluation against baselines or those focusing solely on PV or ESS in isolation. We searched IEEE Xplore, Scopus, Web of Science, and Google Scholar up to 1 July 2025. Two reviewers independently screened titles/abstracts and full texts; disagreements were resolved via discussion. Risk of bias was assessed with a custom tool evaluating validation method, dataset partitioning, baseline comparison, overfitting risk, and reporting clarity. Results were synthesized narratively by grouping AI techniques (forecasting, MPPT/control, dispatch, data augmentation). We screened 412 records and included 67 studies published between 2018 and 2025, following a documented PRISMA process. The review revealed that AI-driven techniques significantly enhance performance in solar + battery energy storage system (BESS) applications. In solar irradiance and PV output forecasting, deep learning models, in particular long short-term memory (LSTM) and hybrid convolutional neural network–LSTM (CNN–LSTM) architectures, repeatedly outperform conventional statistical methods, achieving significantly lower Root Mean Square Error (RMSE) and Mean Absolute Error (MAE) and higher R-squared values. Smarter energy dispatch and market-based storage decisions are made possible by reinforcement learning and deep reinforcement learning frameworks, which increase economic returns and lower curtailment risks. Furthermore, hybrid metaheuristic–AI optimisation improves control tuning and system sizing with increased efficiency and convergence. In conclusion, AI enables transformative gains in forecasting, dispatch, and optimisation for solar-BESSs. Future efforts should focus on explainable, robust AI models, standardized benchmark datasets, and real-world pilot deployments to ensure scalability, reliability, and stakeholder trust.

25 pages, 3622 KB  
Article
Simple and Affordable Vision-Based Detection of Seedling Deficiencies to Relieve Labor Shortages in Small-Scale Cruciferous Nurseries
by Po-Jui Su, Tse-Min Chen and Jung-Jeng Su
Agriculture 2025, 15(21), 2227; https://doi.org/10.3390/agriculture15212227 - 25 Oct 2025
Abstract
Labor shortages in seedling nurseries, particularly in manual inspection and replanting, hinder operational efficiency despite advancements in automation. This study aims to develop a cost-effective, GPU-free machine vision system to automate the detection of deficient seedlings in plug trays, specifically for small-scale nursery operations. The proposed Deficiency Detection and Replanting Positioning (DDRP) machine integrates low-cost components including an Intel RealSense Depth Camera D435, Raspberry Pi 4B, stepper motors, and a programmable logic controller (PLC). It utilizes OpenCV's Haar cascade algorithm, HSV color space conversion, and Otsu thresholding to enable real-time image processing without GPU acceleration. Under controlled laboratory conditions, the DDRP-Machine achieved high detection accuracy (96.0–98.7%) and precision rates (82.14–83.78%). Benchmarking against deep-learning models such as YOLOv5x and Mask R-CNN showed comparable performance, while requiring only one-third to one-fifth of the cost and avoiding complex infrastructure. The Batch Detection (BD) mode significantly reduced processing time compared to Continuous Detection (CD), enhancing real-time applicability. The DDRP-Machine demonstrates strong potential to improve seedling inspection efficiency and reduce labor dependency in nursery operations. Its modular design and minimal hardware requirements make it a practical and scalable solution for resource-limited environments. This study offers a viable pathway for small-scale farms to adopt intelligent automation without the financial burden of high-end AI systems. Future enhancements, including adaptive lighting and self-learning capabilities, will further improve field robustness and broaden its applicability across diverse nursery conditions.
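A minimal sketch of the GPU-free color-segmentation step named above (HSV conversion plus Otsu thresholding); the tray layout, file name, and sparsity cutoff are assumptions:

```python
import cv2
import numpy as np

img = cv2.imread("tray.jpg")                  # hypothetical plug-tray photo
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

# Otsu picks the split between plant and background on the saturation channel.
_, mask = cv2.threshold(hsv[:, :, 1], 0, 255,
                        cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# A tray row with too few foreground pixels is flagged as deficient.
for i, row in enumerate(np.array_split(mask, 8, axis=0)):  # assumes 8 rows
    if row.mean() < 10:                       # sparse greenery in this row
        print("row", i, "likely contains deficient plugs")
```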
(This article belongs to the Topic Digital Agriculture, Smart Farming and Crop Monitoring)

33 pages, 1433 KB  
Article
Hybrid Time Series Transformer–Deep Belief Network for Robust Anomaly Detection in Mobile Communication Networks
by Anita Ershadi Oskouei, Mehrdad Kaveh, Francisco Hernando-Gallego and Diego Martín
Symmetry 2025, 17(11), 1800; https://doi.org/10.3390/sym17111800 - 25 Oct 2025
Abstract
The rapid evolution of 5G and emerging 6G networks has increased system complexity, data volume, and security risks, making anomaly detection vital for ensuring reliability and resilience. However, existing machine learning (ML)-based approaches still face challenges related to poor generalization, weak temporal modeling, and degraded accuracy under heterogeneous and imbalanced real-world conditions. To overcome these limitations, a hybrid time series transformer–deep belief network (HTST-DBN) is introduced, integrating the sequential modeling strength of TST with the hierarchical feature representation of DBN, while an improved orchard algorithm (IOA) performs adaptive hyper-parameter optimization. The framework also embodies the concept of symmetry and asymmetry. The IOA introduces controlled symmetry-breaking between exploration and exploitation, while the TST captures symmetric temporal patterns in network traffic whose asymmetric deviations often indicate anomalies. The proposed method is evaluated across four benchmark datasets (ToN-IoT, 5G-NIDD, CICDDoS2019, and Edge-IoTset) that capture diverse network environments, including 5G core traffic, IoT telemetry, mobile edge computing, and DDoS attacks. Experimental evaluation is conducted by benchmarking HTST-DBN against several state-of-the-art models, including TST, bidirectional encoder representations from transformers (BERT), DBN, deep reinforcement learning (DRL), convolutional neural network (CNN), and random forest (RF) classifiers. The proposed HTST-DBN achieves outstanding performance, with the highest accuracy reaching 99.61%, alongside strong recall and area under the curve (AUC) scores. The HTST-DBN framework presents a scalable and reliable solution for anomaly detection in next-generation mobile networks. Its hybrid architecture, reinforced by hyper-parameter optimization, enables effective learning in complex, dynamic, and heterogeneous environments, making it suitable for real-world deployment in future 5G/6G infrastructures.
(This article belongs to the Special Issue AI-Driven Optimization for EDA: Balancing Symmetry and Asymmetry)

18 pages, 6974 KB  
Article
Prior-Guided Residual Reinforcement Learning for Active Suspension Control
by Jiansen Yang, Shengkun Wang, Fan Bai, Min Wei, Xuan Sun and Yan Wang
Machines 2025, 13(11), 983; https://doi.org/10.3390/machines13110983 - 24 Oct 2025
Abstract
Active suspension systems have gained significant attention for their capability to improve vehicle dynamics and energy efficiency. However, achieving consistent control performance under diverse and uncertain road conditions remains challenging. This paper proposes a prior-guided residual reinforcement learning framework for active suspension control. The approach integrates a Linear Quadratic Regulator (LQR) as a prior controller to ensure baseline stability, while an enhanced Twin Delayed Deep Deterministic Policy Gradient (TD3) algorithm learns the residual control policy to improve adaptability and robustness. Moreover, residual connections and Long Short-Term Memory (LSTM) layers are incorporated into the TD3 structure to enhance dynamic modeling and training stability. The simulation results demonstrate that the proposed method achieves better control performance than passive suspension, a standalone LQR, and conventional TD3 algorithms.
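The prior-guided residual idea reduces to u = u_LQR + u_residual: a fixed LQR supplies a stabilizing baseline and the learned policy adds a bounded correction. A minimal sketch on a toy two-state plant (the matrices are stand-ins, not the paper's suspension model):

```python
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[0.0, 1.0], [-10.0, -1.0]])    # toy 2-state plant
B = np.array([[0.0], [1.0]])
Q, R = np.eye(2), np.array([[0.1]])

P = solve_continuous_are(A, B, Q, R)          # Riccati solution
K = np.linalg.solve(R, B.T @ P)               # LQR gain: u_prior = -K x

def control(x, residual_policy, u_max=1.0):
    u_prior = -K @ x                           # stabilizing baseline
    u_res = np.clip(residual_policy(x), -u_max, u_max)  # learned correction
    return u_prior + u_res

print(control(np.array([0.1, 0.0]), lambda x: np.zeros(1)))
```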