Search Results (9,362)

Search Parameters:
Keywords = deep generative model

25 pages, 6271 KB  
Article
Estimating Fractional Land Cover Using Sentinel-2 and Multi-Source Data with Traditional Machine Learning and Deep Learning Approaches
by Sergio Sierra, Rubén Ramo, Marc Padilla, Laura Quirós and Adolfo Cobo
Remote Sens. 2025, 17(19), 3364; https://doi.org/10.3390/rs17193364 (registering DOI) - 4 Oct 2025
Abstract
Land cover mapping is essential for territorial management due to its links with ecological, hydrological, climatic, and socioeconomic processes. Traditional methods use discrete classes per pixel, but this study proposes estimating cover fractions with Sentinel-2 imagery (20 m) and AI. We employed the French Land cover from Aerospace ImageRy (FLAIR) dataset (810 km² in France, 19 classes), with labels co-registered with Sentinel-2 to derive precise fractional proportions per pixel. From these references, we generated training sets combining spectral bands, derived indices, and auxiliary data (climatic and temporal variables). Various machine learning models—including XGBoost, three deep neural network (DNN) architectures with different depths, and convolutional neural networks (CNNs)—were trained and evaluated to identify the optimal configuration for fractional cover estimation. Model validation on the test set employed RMSE, MAE, and R² metrics at both pixel level (20 m Sentinel-2) and scene level (100 m FLAIR). The training set integrating spectral bands, vegetation indices, and auxiliary variables yielded the best MAE and RMSE results. Among all models, DNN2 achieved the highest performance, with a pixel-level RMSE of 13.83 and MAE of 5.42, and a scene-level RMSE of 4.94 and MAE of 2.36. This fractional approach paves the way for advanced remote sensing applications, including continuous cover-change monitoring, carbon footprint estimation, and sustainability-oriented territorial planning. Full article
(This article belongs to the Special Issue Multimodal Remote Sensing Data Fusion, Analysis and Application)
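The abstract evaluates fractional-cover predictions with RMSE, MAE, and R². A minimal sketch of those three metrics on hypothetical per-pixel cover fractions (the values below are illustrative, not from the paper):

```python
import math

def rmse(y_true, y_pred):
    """Root mean squared error."""
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true))

def mae(y_true, y_pred):
    """Mean absolute error."""
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

def r2(y_true, y_pred):
    """Coefficient of determination (1 = perfect fit)."""
    mean_t = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean_t) ** 2 for t in y_true)
    return 1.0 - ss_res / ss_tot

# Hypothetical cover fractions (%) for one class over four pixels
truth = [10.0, 25.0, 40.0, 5.0]
pred = [12.0, 20.0, 42.0, 6.0]
print(mae(truth, pred))  # 2.5
```

In practice these would be computed per class and averaged over the test pixels, at both the 20 m pixel scale and the 100 m scene scale.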
17 pages, 390 KB  
Review
Deep Learning Image Processing Models in Dermatopathology
by Apoorva Mehta, Mateen Motavaf, Danyal Raza, Neil Jairath, Akshay Pulavarty, Ziyang Xu, Michael A. Occidental, Alejandro A. Gru and Alexandra Flamm
Diagnostics 2025, 15(19), 2517; https://doi.org/10.3390/diagnostics15192517 (registering DOI) - 4 Oct 2025
Abstract
Dermatopathology has rapidly advanced due to the implementation of deep learning models and artificial intelligence (AI). From convolutional neural networks (CNNs) to transformer-based foundation models, these systems are now capable of accurate whole-slide analysis and multimodal integration. This review synthesizes the most recent advances in deep-learning architecture and traces its evolution from first-generation CNNs through hybrid CNN-transformer systems to large-scale foundation models such as Paige’s PanDerm AI and Virchow. Herein, we examine performance benchmarks from real-world deployments of major dermatopathology deep learning models (DermAI, PathAssist Derm), as well as emerging next-generation models still under research and development. We assess barriers to clinical workflow adoption such as dataset bias, AI interpretability, and government regulation. Further, we discuss potential future research directions and emphasize the need for diverse, prospectively curated datasets, explainability frameworks for trust in AI, and rigorous compliance with Good Machine-Learning-Practice (GMLP) to achieve safe and scalable deep learning dermatopathology models that can fully integrate into clinical workflows. Full article
(This article belongs to the Special Issue Artificial Intelligence in Skin Disorders 2025)
25 pages, 1601 KB  
Article
Evaluating Municipal Solid Waste Incineration Through Determining Flame Combustion to Improve Combustion Processes for Environmental Sanitation
by Jian Tang, Xiaoxian Yang, Wei Wang and Jian Rong
Sustainability 2025, 17(19), 8872; https://doi.org/10.3390/su17198872 (registering DOI) - 4 Oct 2025
Abstract
Municipal solid waste (MSW) refers to solid and semi-solid waste generated during human production and daily activities. The process of incinerating such waste, known as municipal solid waste incineration (MSWI), serves as a critical method for reducing waste volume and recovering resources. Automatic online recognition of flame combustion status during MSWI is a key technical approach to ensuring system stability, addressing issues such as high pollution emissions, severe equipment wear, and low operational efficiency. However, when manually selecting optimized features and hyperparameters based on empirical experience, the MSWI flame combustion state recognition model suffers from high time consumption, strong dependency on expertise, and difficulty in adaptively obtaining optimal solutions. To address these challenges, this article proposes a method for constructing a flame combustion state recognition model optimized based on reinforcement learning (RL), long short-term memory (LSTM), and parallel differential evolution (PDE) algorithms, achieving collaborative optimization of deep features and model hyperparameters. First, the feature selection and hyperparameter optimization problem of the ViT-IDFC combustion state recognition model is transformed into an encoding design and optimization problem for the PDE algorithm. Then, the mutation and selection factors of the PDE algorithm are used as modeling inputs for LSTM, which predicts the optimal hyperparameters based on PDE outputs. Next, during the PDE-based optimization of the ViT-IDFC model, a policy gradient reinforcement learning method is applied to determine the parameters of the LSTM model. Finally, the optimized combustion state recognition model is obtained by identifying the feature selection parameters and hyperparameters of the ViT-IDFC model. Test results based on an industrial image dataset demonstrate that the proposed optimization algorithm improves the recognition performance of both left and right grate recognition models, with the left grate achieving a 0.51% increase in recognition accuracy and the right grate a 0.74% increase. Full article
(This article belongs to the Section Waste and Recycling)
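The pipeline above tunes the mutation and selection factors of a parallel differential evolution search. The core DE/rand/1 mutation step that such factors control can be sketched generically (this is textbook DE, not the paper's PDE variant; the population values are placeholders):

```python
import random

def de_mutate(population, f=0.5):
    """DE/rand/1 mutation: v = a + F * (b - c), applied per dimension.

    `population` is a list of parameter vectors; `f` is the mutation factor
    (the kind of factor the paper predicts adaptively with an LSTM).
    """
    a, b, c = random.sample(population, 3)
    return [ai + f * (bi - ci) for ai, bi, ci in zip(a, b, c)]

pop = [[0.1, 0.9], [0.4, 0.3], [0.8, 0.5]]
mutant = de_mutate(pop)
print(len(mutant))  # 2
```

A full DE loop would follow mutation with crossover and greedy selection against the parent vector; the paper replaces hand-set F and selection factors with LSTM predictions trained by policy-gradient RL.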
15 pages, 3389 KB  
Article
Photovoltaic Decomposition Method Based on Multi-Scale Modeling and Multi-Feature Fusion
by Zhiheng Xu, Peidong Chen, Ran Cheng, Yao Duan, Qiang Luo, Huahui Zhang, Zhenning Pan and Wencong Xiao
Energies 2025, 18(19), 5271; https://doi.org/10.3390/en18195271 (registering DOI) - 4 Oct 2025
Abstract
Deep learning-based Non-Intrusive Load Monitoring (NILM) methods have been widely applied to residential load identification. However, photovoltaic (PV) loads exhibit strong non-stationarity, high dependence on weather conditions, and strong coupling with multi-source data, which limit the accuracy and generalization of existing models. To address these challenges, this paper proposes a multi-scale and multi-feature fusion framework for PV disaggregation, consisting of three modules: Multi-Scale Time Series Decomposition (MTD), Multi-Feature Fusion (MFF), and Temporal Attention Decomposition (TAD). These modules jointly capture short-term fluctuations, long-term trends, and deep dependencies across multi-source features. Experiments were conducted on real residential datasets from southern China. Results show that, compared with representative baselines such as SGN-Conv and MAT-Conv, the proposed method reduces MAE by over 60% and SAE by nearly 70% for some users, and it achieves more than 45% error reduction in cross-user tests. These findings demonstrate that the proposed approach significantly enhances both accuracy and generalization in PV load disaggregation. Full article
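The abstract reports MAE and SAE improvements for PV disaggregation. A minimal sketch of the two metrics, assuming SAE is the signal aggregate error commonly used in NILM work (relative error of total energy over the evaluation window); the series below are illustrative:

```python
def mae(y_true, y_pred):
    # mean absolute error of the disaggregated PV power series
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

def sae(y_true, y_pred):
    # signal aggregate error: relative mismatch of total energy
    return abs(sum(y_pred) - sum(y_true)) / sum(y_true)

pv_true = [0.0, 1.0, 3.0, 2.0]   # hypothetical PV output (kW) over 4 intervals
pv_pred = [0.0, 1.5, 2.5, 2.0]
print(mae(pv_true, pv_pred))  # 0.25
print(sae(pv_true, pv_pred))  # 0.0
```

Note the two metrics can disagree: here the per-interval errors are nonzero while the aggregate energy matches exactly, which is why NILM papers typically report both.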
43 pages, 4746 KB  
Article
The BTC Price Prediction Paradox Through Methodological Pluralism
by Mariya Paskaleva and Ivanka Vasenska
Risks 2025, 13(10), 195; https://doi.org/10.3390/risks13100195 (registering DOI) - 4 Oct 2025
Abstract
Bitcoin’s extreme price volatility presents significant challenges for investors and traders, necessitating accurate predictive models to guide decision-making in cryptocurrency markets. This study compares the performance of machine learning approaches for Bitcoin price prediction, specifically examining XGBoost gradient boosting, Long Short-Term Memory (LSTM), and GARCH-DL neural networks using comprehensive market data spanning December 2013 to May 2025. We employed extensive feature engineering incorporating technical indicators, applied multiple machine and deep learning model configurations including standalone and ensemble approaches, and utilized cross-validation techniques to assess model robustness. Based on the empirical results, the most significant practical implication is that traders and financial institutions should adopt a dual-model approach, deploying XGBoost for directional trading strategies and utilizing LSTM models for applications requiring precise magnitude predictions, due to their superior continuous forecasting performance. This research demonstrates that traditional technical indicators, particularly market capitalization and price extremes, remain highly predictive in algorithmic trading contexts, validating their continued integration into modern cryptocurrency prediction systems. For risk management applications, the attention-based LSTM’s superior risk-adjusted returns, combined with enhanced interpretability, make it particularly valuable for institutional portfolio optimization and regulatory compliance requirements. The findings suggest that ensemble methods offer balanced performance across multiple evaluation criteria, providing a robust foundation for production trading systems where consistent performance is more valuable than optimization for single metrics. These results enable practitioners to make evidence-based decisions about model selection based on their specific trading objectives, whether focused on directional accuracy for signal generation or precision of magnitude for risk assessment and portfolio management. Full article
(This article belongs to the Special Issue Portfolio Theory, Financial Risk Analysis and Applications)
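The dual-model recommendation rests on two different evaluation criteria: directional accuracy for signal generation versus magnitude error for risk sizing. A toy sketch of both on a short price series (all numbers are illustrative, not from the study):

```python
def directional_accuracy(prices, preds):
    # fraction of steps where the predicted move has the same sign as the actual move
    hits = 0
    for i in range(1, len(prices)):
        actual = prices[i] - prices[i - 1]
        predicted = preds[i] - prices[i - 1]
        if actual * predicted > 0:
            hits += 1
    return hits / (len(prices) - 1)

def mean_abs_error(prices, preds):
    # magnitude criterion: average absolute price error
    return sum(abs(a - b) for a, b in zip(prices, preds)) / len(prices)

prices = [100.0, 103.0, 101.0, 104.0]
preds  = [100.0, 102.0, 104.0, 105.0]
print(directional_accuracy(prices, preds))  # 2 of 3 moves correct, ≈ 0.667
```

A model can score well on one criterion and poorly on the other, which is exactly why the paper pairs XGBoost (direction) with LSTM (magnitude).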
20 pages, 8591 KB  
Communication
Impact of Channel Confluence Geometry on Water Velocity Distributions in Channel Junctions with Inflows at Angles α = 45° and α = 60°
by Aleksandra Mokrzycka-Olek, Tomasz Kałuża and Mateusz Hämmerling
Water 2025, 17(19), 2890; https://doi.org/10.3390/w17192890 (registering DOI) - 4 Oct 2025
Abstract
Understanding flow dynamics in open-channel node systems is crucial for designing effective hydraulic engineering solutions and minimizing energy losses. This study investigates how junction geometry—specifically the lateral inflow angle (α = 45° and 60°) and the longitudinal bed slope (I = 0.0011 to 0.0051)—influences the water velocity distribution and hydraulic losses in a rigid-bed Y-shaped open-channel junction. Experiments were performed in a 0.3 m wide and 0.5 m deep rectangular flume, with controlled inflow conditions simulating steady-state discharge scenarios. Flow velocity measurements were obtained using a PEMS 30 electromagnetic velocity probe, which is capable of recording three-dimensional velocity components at a high spatial resolution, and electromagnetic flow meters for discharge control. The results show that a lateral inflow angle of 45° induces stronger flow disturbances and higher local loss coefficients, especially under steeper slope conditions. In contrast, an angle of 60° generates more symmetric velocity fields and reduces energy dissipation at the junction. These findings align with the existing literature and highlight the significance of junction design in hydraulic structures, particularly under high-flow conditions. The experimental data may be used for calibrating one-dimensional hydrodynamic models and optimizing the hydraulic performance of engineered channel outlets, such as those found in hydropower discharge systems or irrigation networks. Full article
(This article belongs to the Section Hydraulics and Hydrodynamics)
21 pages, 3489 KB  
Article
GA-YOLOv11: A Lightweight Subway Foreign Object Detection Model Based on Improved YOLOv11
by Ning Guo, Min Huang and Wensheng Wang
Sensors 2025, 25(19), 6137; https://doi.org/10.3390/s25196137 (registering DOI) - 4 Oct 2025
Abstract
Modern subway platforms are generally equipped with platform screen door systems to enhance safety, but the gap between the platform screen doors and train doors may cause passengers or objects to become trapped, leading to accidents. Addressing the issues of excessive parameter counts and computational complexity in existing foreign object intrusion detection algorithms, as well as false positives and false negatives for small objects, this article introduces a lightweight deep learning model based on YOLOv11n, named GA-YOLOv11. First, a lightweight GhostConv convolution module is introduced into the backbone network to reduce computational resource waste in irrelevant areas, thereby lowering model complexity and computational load. Additionally, the GAM attention mechanism is incorporated into the head network to enhance the model’s ability to distinguish features, enabling precise identification of object location and category, and significantly reducing the probability of false positives and false negatives. Experimental results demonstrate that, in comparison to the original YOLOv11n model, the improved model achieves 3.3%, 3.2%, 1.2%, and 3.5% improvements in precision, recall, mAP@0.5, and mAP@0.5:0.95, respectively, while the number of parameters and GFLOPs are reduced by 18% and 7.9%, respectively, at the same model size. The improved model is more lightweight while ensuring real-time performance and accuracy, designed for detecting foreign objects in subway platform gaps. Full article
(This article belongs to the Special Issue Image Processing and Analysis for Object Detection: 3rd Edition)
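GhostConv saves parameters by producing only part of the output channels with a dense convolution and generating the rest with cheap depthwise operations. A back-of-the-envelope parameter comparison (kernel sizes and the ratio of 2 are illustrative GhostNet-style defaults, not the paper's exact configuration; biases are ignored):

```python
def standard_conv_params(c_in, c_out, k):
    # weights of a dense k x k convolution
    return c_in * c_out * k * k

def ghost_conv_params(c_in, c_out, k, cheap_k=5, ratio=2):
    # primary dense conv produces c_out / ratio channels;
    # depthwise "cheap" ops of size cheap_k generate the remainder
    primary = c_in * (c_out // ratio) * k * k
    cheap = (c_out - c_out // ratio) * cheap_k * cheap_k
    return primary + cheap

dense = standard_conv_params(64, 128, 3)   # 73728 weights
ghost = ghost_conv_params(64, 128, 3)      # 38464 weights
print(ghost < dense)  # True
```

For this 64-to-128-channel layer the ghost variant needs roughly half the weights, which is the kind of saving behind the reported 18% whole-model parameter reduction.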
24 pages, 1024 KB  
Review
Artificial Intelligence in Glioma Diagnosis: A Narrative Review of Radiomics and Deep Learning for Tumor Classification and Molecular Profiling Across Positron Emission Tomography and Magnetic Resonance Imaging
by Rafail C. Christodoulou, Rafael Pitsillos, Platon S. Papageorgiou, Vasileia Petrou, Georgios Vamvouras, Ludwing Rivera, Sokratis G. Papageorgiou, Elena E. Solomou and Michalis F. Georgiou
Eng 2025, 6(10), 262; https://doi.org/10.3390/eng6100262 - 3 Oct 2025
Abstract
Background: This narrative review summarizes recent progress in artificial intelligence (AI), especially radiomics and deep learning, for non-invasive diagnosis and molecular profiling of gliomas. Methodology: A thorough literature search was conducted on PubMed, Scopus, and Embase for studies published from January 2020 to July 2025, focusing on clinical and technical research. In key areas, these studies examine AI models’ predictive capabilities with multi-parametric Magnetic Resonance Imaging (MRI) and Positron Emission Tomography (PET). Results: The domains identified in the literature include the advancement of radiomic models for tumor grading and biomarker prediction, such as Isocitrate Dehydrogenase (IDH) mutation, O6-methylguanine-DNA methyltransferase (MGMT) promoter methylation, and 1p/19q codeletion. The growing use of convolutional neural networks (CNNs) and generative adversarial networks (GANs) in tumor segmentation, classification, and prognosis was also a significant topic discussed in the literature. Deep learning (DL) methods are evaluated against traditional radiomics regarding feature extraction, scalability, and robustness to imaging protocol differences across institutions. Conclusions: This review analyzes emerging efforts to combine clinical, imaging, and histology data within hybrid or transformer-based AI systems to enhance diagnostic accuracy. Significant findings include the application of DL to predict cyclin-dependent kinase inhibitor 2A/B (CDKN2A/B) deletion and chemokine CCL2 expression. These highlight the expanding capabilities of imaging-based genomic inference and the importance of clinical data in multimodal fusion. Challenges such as data harmonization, model interpretability, and external validation still need to be addressed. Full article
23 pages, 838 KB  
Article
Applied with Caution: Extreme-Scenario Testing Reveals Significant Risks in Using LLMs for Humanities and Social Sciences Paper Evaluation
by Hua Liu, Ling Dai and Haozhe Jiang
Appl. Sci. 2025, 15(19), 10696; https://doi.org/10.3390/app151910696 - 3 Oct 2025
Abstract
The deployment of large language models (LLMs) in academic paper evaluation is increasingly widespread, yet their trustworthiness remains debated; to expose fundamental flaws often masked under conventional testing, this study employed extreme-scenario testing to systematically probe the lower performance boundaries of LLMs in assessing the scientific validity and logical coherence of papers from the humanities and social sciences (HSS). Through a highly credible quasi-experiment, 40 high-quality Chinese papers from philosophy, sociology, education, and psychology were selected, for which domain experts created versions with implanted “scientific flaws” and “logical flaws”. Three representative LLMs (GPT-4, DeepSeek, and Doubao) were evaluated against a baseline of 24 doctoral candidates, following a protocol progressing from ‘broad’ to ‘targeted’ prompts. Key findings reveal poor evaluation consistency, with significantly low intra-rater and inter-rater reliability for the LLMs, and limited flaw detection capability, as all models failed to distinguish between original and flawed papers under broad prompts, unlike human evaluators; although targeted prompts improved detection, LLM performance remained substantially inferior, particularly in tasks requiring deep empirical insight and logical reasoning. The study proposes that LLMs operate on a fundamentally different “task decomposition-semantic understanding” mechanism, relying on limited text extraction and shallow semantic comparison rather than the human process of “worldscape reconstruction → meaning construction and critique”, resulting in a critical inability to assess argumentative plausibility and logical coherence. It concludes that current LLMs possess fundamental limitations in evaluations requiring depth and critical thinking, are not reliable independent evaluators, and that over-trusting them carries substantial risks, necessitating rational human-AI collaborative frameworks, enhanced model adaptation through downstream alignment techniques like prompt engineering and fine-tuning, and improvements in general capabilities such as logical reasoning. Full article
(This article belongs to the Section Computing and Artificial Intelligence)
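Inter-rater reliability of the kind the study measures is typically quantified with a chance-corrected agreement statistic such as Cohen's kappa (the abstract does not name its exact coefficient; this is a generic sketch with made-up ratings):

```python
def cohens_kappa(r1, r2):
    """Chance-corrected agreement between two raters over the same items."""
    n = len(r1)
    labels = set(r1) | set(r2)
    p_obs = sum(a == b for a, b in zip(r1, r2)) / n            # observed agreement
    p_exp = sum((r1.count(l) / n) * (r2.count(l) / n) for l in labels)  # chance agreement
    return (p_obs - p_exp) / (1 - p_exp)

# Hypothetical pass/fail verdicts on 4 papers from two evaluation runs of one LLM
run_a = [1, 1, 0, 0]
run_b = [1, 1, 0, 1]
print(cohens_kappa(run_a, run_b))  # 0.5
```

Kappa near 1 indicates consistent judgments and kappa near 0 indicates agreement no better than chance, which is the sense in which the LLMs' "significantly low" reliability should be read.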
38 pages, 2485 KB  
Review
Research Progress of Deep Learning-Based Artificial Intelligence Technology in Pest and Disease Detection and Control
by Yu Wu, Li Chen, Ning Yang and Zongbao Sun
Agriculture 2025, 15(19), 2077; https://doi.org/10.3390/agriculture15192077 - 3 Oct 2025
Abstract
With the rapid advancement of artificial intelligence technology, the widespread application of deep learning in computer vision is driving the transformation of agricultural pest detection and control toward greater intelligence and precision. This paper systematically reviews the evolution of agricultural pest detection and control technologies, with a special focus on the effectiveness of deep-learning-based image recognition methods for pest identification, as well as their integrated applications in drone-based remote sensing, spectral imaging, and Internet of Things sensor systems. Through multimodal data fusion and dynamic prediction, artificial intelligence has significantly improved the response times and accuracy of pest monitoring. On the control side, the development of intelligent prediction and early-warning systems, precision pesticide-application technologies, and smart equipment has advanced the goals of eco-friendly pest management and ecological regulation. However, challenges such as high data-annotation costs, limited model generalization, and constrained computing power on edge devices remain. Moving forward, further exploration of cutting-edge approaches such as self-supervised learning, federated learning, and digital twins will be essential to build more efficient and reliable intelligent control systems, providing robust technical support for sustainable agricultural development. Full article
27 pages, 8814 KB  
Article
A Numerical Simulation Investigation into the Impact of Proppant Embedment on Fracture Width in Coal Reservoirs
by Yi Zou, Desheng Zhou, Chen Lu, Yufei Wang, Haiyang Wang, Peng Zheng and Qingqing Wang
Processes 2025, 13(10), 3159; https://doi.org/10.3390/pr13103159 - 3 Oct 2025
Abstract
Deep coalbed methane reservoirs must utilize hydraulic fracturing technology to create high-conductivity sand-filled fractures for economical development. However, the mechanism by which proppant embedment affects fracture width in coal rock is not yet clear. In this article, using the discrete element particle flow method, we have developed a numerical simulation model that can replicate the dynamic process of proppant embedment into the fracture surface. By tracking particle positions, we have accurately characterized the dynamic changes in fracture width and proppant embedment depth. The consistency between experimental measurements of average fracture width and numerical results demonstrates the reliability of our numerical model. Using this model, we analyzed the mechanisms by which different proppant particle sizes, number of layers, and closure stresses affect fracture width. The force among particles under different proppant embedment conditions and the induced stress field around the fracture were also studied. Numerical simulation results show that stress concentration formed by proppant embedment in the fracture surface leads to the generation of numerous induced micro-fractures. As the proppant grain size and closure stress increase, the stress concentration formed by proppant embedment in the fracture surface intensifies, and the variability in fracture width along the fracture length direction also increases. With more layers of proppant placement, the particles counteract some of the closure stress, thereby reducing the degree of proppant embedment around the fracture surface. Full article
(This article belongs to the Section Chemical Processes and Systems)
26 pages, 12288 KB  
Article
An Optimal Scheduling Method for Power Grids in Extreme Scenarios Based on an Information-Fusion MADDPG Algorithm
by Xun Dou, Cheng Li, Pengyi Niu, Dongmei Sun, Quanling Zhang and Zhenlan Dou
Mathematics 2025, 13(19), 3168; https://doi.org/10.3390/math13193168 - 3 Oct 2025
Abstract
With the large-scale integration of renewable energy into distribution networks, the intermittency and uncertainty of renewable generation pose significant challenges to the voltage security of the power grid under extreme scenarios. To address this issue, this paper proposes an optimal scheduling method for power grids under extreme scenarios, based on an improved Multi-Agent Deep Deterministic Policy Gradient (MADDPG) algorithm. By simulating potential extreme scenarios in the power system and formulating targeted secure scheduling strategies, the proposed method effectively reduces trial-and-error costs. First, a time series clustering method is used to construct the extreme scenario dataset based on the principle of maximizing scenario differences. Then, a mathematical model of power grid optimal dispatching is constructed with the objective of ensuring voltage security, with explicit constraints and environmental settings. Next, an interactive scheduling model of distribution network resources is designed based on a multi-agent algorithm, including the construction of an agent state space, an action space, and a reward function. Subsequently, an improved MADDPG multi-agent algorithm based on specific information fusion is proposed, and a hybrid optimization experience sampling strategy is developed to enhance the training efficiency and stability of the model. Finally, the effectiveness of the proposed method is verified by case studies of a distribution network system. Full article
(This article belongs to the Special Issue Artificial Intelligence and Game Theory)
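MADDPG variants like this one train from a replay buffer of joint multi-agent transitions; the paper's hybrid sampling strategy builds on that standard mechanism. A minimal buffer sketch with uniform sampling only (the hybrid prioritization itself is not specified in the abstract and is not reproduced here):

```python
import random
from collections import deque

class ReplayBuffer:
    def __init__(self, capacity):
        # oldest transitions are evicted first once capacity is reached
        self.buffer = deque(maxlen=capacity)

    def push(self, obs, actions, rewards, next_obs):
        # one joint transition covering all agents' observations and actions
        self.buffer.append((obs, actions, rewards, next_obs))

    def sample(self, batch_size):
        # uniform sampling; a prioritized scheme would weight by TD error or similar
        return random.sample(self.buffer, batch_size)

buf = ReplayBuffer(capacity=1000)
for t in range(5):
    buf.push([t], [0.0], [1.0], [t + 1])
batch = buf.sample(2)
print(len(batch))  # 2
```

Each agent's critic is then trained on these joint transitions while its actor sees only local observations, which is the centralized-training, decentralized-execution pattern MADDPG follows.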
25 pages, 3675 KB  
Article
Gesture-Based Physical Stability Classification and Rehabilitation System
by Sherif Tolba, Hazem Raafat and A. S. Tolba
Sensors 2025, 25(19), 6098; https://doi.org/10.3390/s25196098 - 3 Oct 2025
Abstract
This paper introduces the Gesture-Based Physical Stability Classification and Rehabilitation System (GPSCRS), a low-cost, non-invasive solution for evaluating physical stability using an Arduino microcontroller and the DFRobot Gesture and Touch sensor. The system quantifies movement smoothness, consistency, and speed by analyzing “up” and “down” hand gestures over a fixed period, generating a Physical Stability Index (PSI) as a single metric to represent an individual’s stability. The system focuses on a temporal analysis of gesture patterns while incorporating placeholders for speed scores to demonstrate its potential for a comprehensive stability assessment. The performance of various machine learning and deep learning models for gesture-based classification is evaluated, with neural network architectures such as Transformer, CNN, and KAN achieving perfect scores in recall, accuracy, precision, and F1-score. Traditional machine learning models such as XGBoost show strong results, offering a balance between computational efficiency and accuracy. The choice of model depends on specific application requirements, including real-time constraints and available resources. The preliminary experimental results indicate that the proposed GPSCRS can effectively detect changes in stability under real-time conditions, highlighting its potential for use in remote health monitoring, fall prevention, and rehabilitation scenarios. By providing a quantitative measure of stability, the system enables early risk identification and supports tailored interventions for improved mobility and quality of life. Full article
(This article belongs to the Section Biomedical Sensors)
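The abstract does not give the exact PSI formula. A minimal sketch of how such an index could combine consistency, smoothness, and a speed score from gesture timestamps (the function name, weights, and sub-scores are all hypothetical, not taken from the paper) might look like:

```python
import statistics

def physical_stability_index(gesture_times, speed_score=1.0):
    # gesture_times: timestamps (s) of detected "up"/"down" gestures
    intervals = [b - a for a, b in zip(gesture_times, gesture_times[1:])]
    mean_iv = statistics.mean(intervals)
    # consistency: penalize variable inter-gesture intervals
    consistency = 1.0 / (1.0 + statistics.pstdev(intervals) / mean_iv)
    # smoothness proxy: penalize large jumps between successive intervals
    jumps = [abs(b - a) for a, b in zip(intervals, intervals[1:])]
    smoothness = 1.0 / (1.0 + (statistics.mean(jumps) / mean_iv if jumps else 0.0))
    # weighted combination into a single 0-1 index (weights are illustrative)
    return round(0.4 * consistency + 0.4 * smoothness + 0.2 * speed_score, 3)

# perfectly regular gestures score the maximum; erratic timing scores lower
steady = physical_stability_index([0.0, 1.0, 2.0, 3.0, 4.0])
erratic = physical_stability_index([0.0, 0.4, 2.1, 2.5, 4.0])
```

Any real implementation would also fold in the sensor's speed readings rather than a placeholder `speed_score`, mirroring the placeholders the abstract itself mentions.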
18 pages, 3114 KB  
Article
A Novel Empirical-Informed Neural Network Method for Vehicle Tire Noise Prediction
by Peisong Dai, Ruxue Dai, Yingqi Yin, Jingjing Wang, Haibo Huang and Weiping Ding
Machines 2025, 13(10), 911; https://doi.org/10.3390/machines13100911 - 2 Oct 2025
Abstract
In the evaluation of vehicle noise, vibration and harshness (NVH) performance, interior noise control is the core consideration. In the early stages of automobile research and development, accurate prediction of road-induced interior noise is essential for optimizing NVH performance and shortening the development cycle. Although data-driven machine learning methods have been widely used in automobile noise research, owing to their freedom from precise physical modeling and their capacity for learning from data and generalizing, they still struggle to capture key local features, such as spectral peaks, with sufficient accuracy in practical NVH engineering. To address this challenge, this paper introduces a prediction approach based on an empirical-informed neural network that integrates physical mechanisms with data-driven learning. By analyzing the transmission paths of interior noise in depth, the method embeds acoustic mechanism features, such as local peaks and noise correlations, into a deep neural network as physical constraints, significantly enhancing the model's predictive performance. Experimental findings indicate that, in contrast to conventional deep learning techniques, the method achieves better generalization with limited samples while maintaining prediction accuracy. In validation on specific vehicle models, the method shows clear advantages in prediction accuracy and computational efficiency, confirming its value in practical engineering. The main contributions of this study are an empirical-informed neural network that embeds vibro-acoustic mechanisms into the loss function and an adaptive weight strategy that enhances model robustness. Full article
(This article belongs to the Section Vehicle Engineering)
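The abstract does not spell out the loss function or the adaptive weight rule. A minimal sketch of the general idea, adding a penalty on local peak regions to an ordinary MSE loss with an illustrative weight update (the function names, the mask-based peak term, and the update rule are all assumptions, not the paper's formulation), could look like:

```python
import numpy as np

def empirical_informed_loss(pred, target, peak_mask, peak_weight):
    # global data-fit term: plain MSE over the whole response
    data_term = np.mean((pred - target) ** 2)
    # physics-informed term: extra MSE restricted to the local peak
    # regions identified from the vibro-acoustic transfer paths
    peak_term = np.mean((pred - target)[peak_mask] ** 2)
    return data_term + peak_weight * peak_term

def adapt_weight(w, data_term, peak_term, lr=0.1):
    # illustrative adaptive rule: raise the peak weight while the
    # peak error dominates, relax it once the two terms balance
    return max(0.0, w + lr * (peak_term - data_term))
```

In a real training loop both terms would be computed on mini-batches and the weight updated per epoch; the point of the sketch is only that the peak constraint enters the loss alongside the data term.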

24 pages, 2293 KB  
Article
The Path Towards Decarbonization: The Role of Hydropower in the Generation Mix
by Fabio Massimo Gatta, Alberto Geri, Stefano Lauria, Marco Maccioni and Ludovico Nati
Energies 2025, 18(19), 5248; https://doi.org/10.3390/en18195248 - 2 Oct 2025
Abstract
The evolution of the generation mix towards deep decarbonization poses pressing questions about the role of hydropower and its possible share in the future mix. Most technical–economic analyses of deeply decarbonized systems either rule out hydropower growth due to lack of additional hydro resources or take it into account only in terms of additional reservoir capacity. This paper analyzes a generation mix made of photovoltaic, wind, open-cycle gas turbines, electrochemical storage and hydroelectricity, focusing on how the optimal generation mix responds to different methane gas prices, hydroelectricity availabilities, pumped hydro reservoir capacities, and mean filling durations for hydro reservoirs. The key feature of the developed model is that it sizes both the optimal peak power and the reservoir energy content for hydropower. The results of the study yield two main insights. The first, rather widely accepted, is that cost-effective decarbonization requires the greatest possible amount of hydro reservoirs. The second is that, even when reservoirs are fully exploited, there is a strong case for increasing hydro peak power. Application of the model to the Italian generation mix (with 9500 MWp and 7250 MWp of non-pumped and pumped hydro fleets, respectively) suggests that it is possible to achieve methane shares of less than 10% if the operating costs of open-cycle gas turbines exceed 160 EUR/MWh and with non-pumped and pumped hydro fleets of at least 9200 MWp and 28,400 MWp, respectively. Full article
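As a rough illustration of the trade-off the model sizes, a toy hourly dispatch (a hypothetical function, greatly simplified relative to the paper's optimization, which co-optimizes peak power, reservoir energy, pumping, and storage) shows how the methane share falls as hydro peak power and reservoir energy grow:

```python
def gas_share(residual_load, hydro_peak_mw, reservoir_mwh):
    # residual_load: hourly load (MW) left after PV/wind generation
    # hydro covers it up to its peak power while reservoir energy lasts;
    # open-cycle gas turbines cover whatever remains
    stored = reservoir_mwh
    gas_energy = 0.0
    total_energy = 0.0
    for load in residual_load:  # one hour per entry, so MW == MWh
        hydro = min(load, hydro_peak_mw, stored)
        stored -= hydro
        gas_energy += load - hydro
        total_energy += load
    return gas_energy / total_energy if total_energy else 0.0

# with a 100 MW hydro plant and a 250 MWh reservoir, the 200 MW peak
# hour forces gas dispatch even though reservoir energy remains
share = gas_share([100.0, 200.0, 50.0], hydro_peak_mw=100.0, reservoir_mwh=250.0)
```

The 200 MW hour illustrates the paper's second insight: the binding constraint here is peak power, not reservoir energy, so raising hydro peak power cuts the gas share even with the same reservoirs.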
