Search Results (258)

Search Parameters:
Keywords = continuous-output neural network

31 pages, 1550 KB  
Article
Valuation of New Carbon Asset CCER
by Hua Tang, Jiayi Wang, Yue Liu, Hanxiao Li and Boyan Zou
Sustainability 2026, 18(2), 940; https://doi.org/10.3390/su18020940 - 16 Jan 2026
Viewed by 155
Abstract
As a critical carbon offset mechanism, China’s Certified Emission Reduction (CCER) plays a pivotal role in achieving the “dual carbon” targets. With the relaunch of its trading market, refining the CCER valuation framework has become imperative. This study develops a multidimensional CCER valuation methodology based on both the income and market approaches. Under the income approach, two probabilistic models—discrete and continuous emission distribution frameworks—are proposed to quantify CCER value. Under the market approach, a Geometric Brownian Motion (GBM) model and a Long Short-Term Memory (LSTM) neural network model are constructed to capture nonlinear temporal dynamics in CCER pricing. Through a systematic comparative analysis of the outputs and methodologies of these models, this study identifies optimal pricing strategies to enhance CCER valuation. Results reveal significant disparities among models in predictive accuracy, computational efficiency, and adaptability to market dynamics. Each model exhibits distinct strengths and limitations, necessitating scenario-specific selection based on data availability, application context, and timeliness requirements to strike a balance between precision and efficiency. These findings offer both theoretical and practical insights to support the development of the CCER market.
(This article belongs to the Special Issue Sustainable Development: Integrating Economy, Energy and Environment)
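The market-approach GBM model described above can be sketched in a few lines. This is a generic Geometric Brownian Motion path simulator under illustrative, uncalibrated parameters, not the authors' fitted CCER pricing model; the function name and values are hypothetical.

```python
import math
import random

def simulate_gbm(s0, mu, sigma, n_steps, dt, rng):
    """Simulate one Geometric Brownian Motion price path.

    s0: initial price, mu: drift, sigma: volatility,
    dt: time step in years, rng: a seeded random.Random instance.
    """
    path = [s0]
    for _ in range(n_steps):
        z = rng.gauss(0.0, 1.0)  # standard normal shock
        growth = (mu - 0.5 * sigma ** 2) * dt + sigma * math.sqrt(dt) * z
        path.append(path[-1] * math.exp(growth))
    return path

# illustrative parameters, not calibrated to CCER market data
prices = simulate_gbm(s0=60.0, mu=0.05, sigma=0.25, n_steps=252, dt=1 / 252,
                      rng=random.Random(42))
```

Monte Carlo valuation would average many such paths; the LSTM alternative in the paper instead learns the price dynamics from historical sequences.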

30 pages, 4344 KB  
Article
HAGEN: Unveiling Obfuscated Memory Threats via Hierarchical Attention-Gated Explainable Networks
by Mahmoud E. Farfoura, Mohammad Alia and Tee Connie
Electronics 2026, 15(2), 352; https://doi.org/10.3390/electronics15020352 - 13 Jan 2026
Viewed by 208
Abstract
Memory resident malware, particularly fileless and heavily obfuscated types, continues to pose a major problem for endpoint defense tools, as these threats often slip past traditional signature-based detection techniques. Deep learning has shown promise in identifying such malicious activity, but its use in real Security Operations Centers (SOCs) is still limited because the internal reasoning of these neural network models is difficult to interpret or verify. In response to this challenge, we present HAGEN, a hierarchical attention architecture designed to combine strong classification performance with explanations that security analysts can understand and trust. HAGEN processes memory artifacts through a series of attention layers that highlight important behavioral cues at different scales, while a gated mechanism controls how information flows through the network. This structure enables the system to expose the basis of its decisions rather than simply output a label. To further support transparency, the final classification step is guided by representative prototypes, allowing predictions to be related back to concrete examples learned during training. When evaluated on the CIC-MalMem-2022 dataset, HAGEN achieved 99.99% accuracy in distinguishing benign programs from major malware classes such as spyware, ransomware, and trojans, all with modest computational requirements suitable for live environments. Beyond accuracy, HAGEN produces clear visual and numeric explanations—such as attention maps and prototype distances—that help investigators understand which memory patterns contributed to each decision, making it a practical tool for both detection and forensic analysis.
(This article belongs to the Section Artificial Intelligence)
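The attention layers that "highlight important behavioral cues" boil down to softmax-weighted pooling. The sketch below shows the generic mechanism only, with hypothetical names; it is not HAGEN's actual architecture.

```python
import math

def attention_weights(scores):
    """Numerically stable softmax over per-feature importance scores."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def attend(features, scores):
    """Attention pooling: a weighted sum where high-scoring cues dominate.
    The returned weights are exactly the 'attention map' an analyst inspects."""
    w = attention_weights(scores)
    return sum(wi * fi for wi, fi in zip(w, features))
```

Because the weights sum to one and are non-negative, they can be read directly as each cue's relative contribution to the decision, which is the basis of attention-map explanations.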

21 pages, 30287 KB  
Article
Online Estimation of Lithium-Ion Battery State of Charge Using Multilayer Perceptron Applied to an Instrumented Robot
by Kawe Monteiro de Souza, José Rodolfo Galvão, Jorge Augusto Pessatto Mondadori, Maria Bernadete de Morais França, Paulo Broniera Junior and Fernanda Cristina Corrêa
Batteries 2026, 12(1), 25; https://doi.org/10.3390/batteries12010025 - 10 Jan 2026
Viewed by 225
Abstract
Electric vehicles (EVs) rely on a battery pack as their primary energy source, making it a critical component for their operation. To guarantee safe and correct functioning, a Battery Management System (BMS) is employed, which uses variables such as State of Charge (SOC) to set charge/discharge limits and to monitor pack health. In this article, we propose a Multilayer Perceptron (MLP) network to estimate the SOC of a 14.8 V battery pack installed in a robotic vacuum cleaner. Both offline and online (real-time) tests were conducted under continuous load and with rest intervals. The MLP’s output is compared against two commonly used approaches: NARX (Nonlinear Autoregressive Exogenous) and CNN (Convolutional Neural Network). Performance is evaluated via statistical metrics, Root Mean Squared Error (RMSE) and Mean Absolute Error (MAE), and we also assess computational cost using Operational Intensity. Finally, we map these results onto a Roofline Model to predict how the MLP would perform on an automotive-grade microcontroller unit (MCU). A generalization analysis is performed using Transfer Learning and optimization using MLP–Kalman. The best performers are the MLP–Kalman network, which achieved an RMSE of approximately 13% relative to the true SOC, and NARX, which achieved approximately 12%. The computational cost of both is very close, making it particularly suitable for use in BMS.
(This article belongs to the Section Battery Performance, Ageing, Reliability and Safety)
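The two evaluation metrics used here, RMSE and MAE, are standard and worth pinning down precisely, since RMSE penalizes large SOC errors more heavily than MAE does:

```python
import math

def rmse(y_true, y_pred):
    """Root Mean Squared Error: sqrt of the mean squared residual."""
    n = len(y_true)
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / n)

def mae(y_true, y_pred):
    """Mean Absolute Error: mean of the absolute residuals."""
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)
```

For SOC expressed in percent, an "RMSE of approximately 13%" means the residuals fed to `rmse` are SOC percentage points.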

17 pages, 3971 KB  
Article
A Hybrid LSTM-UDP Model for Real-Time Motion Prediction and Transmission of a 10,000-TEU Container Ship
by Qizhen Yu, Xiyu Liao, Jun Xu, Yicheng Lian and Zhanyang Chen
J. Mar. Sci. Eng. 2026, 14(1), 101; https://doi.org/10.3390/jmse14010101 - 4 Jan 2026
Viewed by 181
Abstract
For various specialized maritime operations, predicting the future motion responses of structures is essential. For example, ship-borne helicopter landings require a predictable time frame of 6 to 8 s, while avoiding risks during ship navigation in waves calls for a 15-s prediction window. In this work, a real-time prediction method of future ship motions using the Long Short-Term Memory Neural Network (LSTM) is introduced. A direct multi-step output approach is used to continually update with the most recent data for prediction. This method can model the nonlinear time series of ship motions leveraging LSTM’s capabilities, and User Datagram Protocol (UDP) is used between devices to achieve low-latency data transfer. The performance of this framework is demonstrated and validated through multi-degree-of-freedom motion simulations of a 10,000-TEU container ship model in random waves. The results show that all the values of R2 in the four cases are greater than 0.7, and the maximum and minimum values of R2 correspond to predictable time scales of 6 s in Case I and 10 s in Case IV, respectively. This indicates that combining LSTM neural networks with the UDP protocol allows for accurate and efficient predictions and data transmission, and the prediction accuracy of the method decreases as the predictable time scale increases.
(This article belongs to the Special Issue Intelligent Solutions for Marine Operations)
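The "direct multi-step output" approach means one model call emits the whole future window at once, rather than feeding predictions back recursively. The data preparation can be sketched as follows (the windowing only; the LSTM and UDP transport are not shown, and the function name is hypothetical):

```python
def make_windows(series, n_in, n_out):
    """Direct multi-step training pairs: each length-n_in input window is
    mapped to the following n_out values as a single multi-step target."""
    pairs = []
    for i in range(len(series) - n_in - n_out + 1):
        pairs.append((series[i:i + n_in],
                      series[i + n_in:i + n_in + n_out]))
    return pairs
```

At run time the newest window is assembled from the most recent measurements, so every prediction is "continually updated with the most recent data" as the abstract describes.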

23 pages, 3509 KB  
Article
Digital Twin-Based MPC for Industrial MIMO Automation: Intelligent Algorithms
by Batyrbek Suleimenov, Olga Shiryayeva and Dmitriy Gorbunov
Automation 2026, 7(1), 8; https://doi.org/10.3390/automation7010008 - 1 Jan 2026
Viewed by 330
Abstract
This study proposes an intelligent control algorithm for multiple-input multiple-output (MIMO) industrial processes. This algorithm is based on the integration of a digital twin (DT), model predictive control (MPC), a genetic algorithm (GA), and a neural network (NN). The developed architecture employs a hybrid MPC scheme incorporating an additional NN correction branch. The workflow includes input data pre-processing, operating point linearization and NN training, computation of the optimal control sequence over a receding horizon, closed-loop control and adaptation based on prediction error. This innovative hybrid control law uses a linear state-space model as the base predictor and a compact NN superstructure to compensate for unmodeled nonlinearities. The GA searches for the optimal sequence of control actions while respecting process constraints and ensuring stable use of the NN correction. The methodology was tested on a phosphoric acid purification process. Compared to baseline MPC, the proposed algorithm increased purification efficiency to 95.1%, reduced the integral tracking error by 11.4%, and decreased the control signal amplitude by 10–15%. Selecting the appropriate reagent supply and vacuum modes ensured stable operation despite fluctuations in the raw material. These results confirm the effectiveness of DT-based hybrid control in applications requiring precision, adaptability, and strict constraint compliance. The approach is scalable and can be applied to other continuous production systems within Industry 4.0 initiatives.
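The GA search over a constrained control sequence can be illustrated with a minimal elitist genetic algorithm. This is a generic sketch with an arbitrary toy cost function, not the paper's MPC objective; population size, mutation scale, and all names are illustrative assumptions.

```python
import random

def ga_optimize(cost, horizon, bounds, rng, pop_size=30, generations=40):
    """Search for a control sequence minimizing `cost` under box constraints."""
    lo, hi = bounds
    population = [[rng.uniform(lo, hi) for _ in range(horizon)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=cost)
        elite = population[:pop_size // 5]        # keep the best 20%
        children = [seq[:] for seq in elite]      # elitism
        while len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            cut = rng.randrange(1, horizon)       # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < 0.3:                # bounded mutation
                i = rng.randrange(horizon)
                child[i] = min(hi, max(lo, child[i] + rng.gauss(0.0, 0.1)))
            children.append(child)
        population = children
    return min(population, key=cost)
```

In a receding-horizon loop only the first element of the returned sequence is applied, and the search is repeated at the next sampling instant.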

17 pages, 3138 KB  
Article
Optimization of the Z-Profile Feature Structure of a Recirculation Combustion Chamber Based on Machine Learning
by Jiaxiao Yi, Yuang Liu, Yilin Ye and Weihua Yang
Aerospace 2026, 13(1), 45; https://doi.org/10.3390/aerospace13010045 - 31 Dec 2025
Viewed by 198
Abstract
With the increasing power output of aero-engines, combustor hot-gas mass flow rate and temperature continue to rise, posing more severe challenges to combustor structural cooling design. To enhance the film-cooling performance of the Z-profile feature in a reverse-flow combustor, this study performs a multi-parameter numerical optimization by integrating computational fluid dynamics (CFD), a radial basis function neural network (RBFNN), and a genetic algorithm (GA). The hole inclination angle, hole pitch, row spacing, and the distance between the first-row holes and the hot-side wall are selected as design variables, and the area-averaged adiabatic film-cooling effectiveness over a critical downstream region is adopted as the optimization objective. The RBFNN surrogate model trained on 750 CFD samples exhibits high predictive accuracy (correlation coefficient R > 0.999). The GA converges after approximately 50 generations and identifies an optimal configuration (Opt C). Numerical results indicate that Opt C produces more favorable vortex organization and near-wall flow characteristics, thereby achieving superior cooling performance in the target region; its average adiabatic film-cooling effectiveness is improved by 7.01% and 9.64% relative to the reference configurations Ref D and Ref E, respectively.
(This article belongs to the Section Aeronautics)
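An RBF surrogate like the one trained here interpolates expensive CFD samples with a weighted sum of Gaussian kernels. The 1-D sketch below shows the idea on a toy dataset, solving the kernel system by Gaussian elimination; it is not the authors' 750-sample, multi-variable model, and `gamma` and all names are illustrative.

```python
import math

def rbf_fit(xs, ys, gamma):
    """Fit a Gaussian RBF interpolant: solve Phi w = y (Gauss-Jordan)."""
    n = len(xs)
    a = [[math.exp(-gamma * (xs[i] - xs[j]) ** 2) for j in range(n)] + [ys[i]]
         for i in range(n)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(a[r][col]))
        a[col], a[piv] = a[piv], a[col]
        for r in range(n):
            if r != col:
                f = a[r][col] / a[col][col]
                a[r] = [x - f * y for x, y in zip(a[r], a[col])]
    return [a[i][n] / a[i][i] for i in range(n)]

def rbf_predict(x, xs, weights, gamma):
    """Evaluate the surrogate at a new design point x."""
    return sum(w * math.exp(-gamma * (x - c) ** 2) for w, c in zip(weights, xs))
```

Because the surrogate is cheap to evaluate, the GA can query it thousands of times per generation instead of launching a CFD run for every candidate.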

28 pages, 6148 KB  
Article
A Fault Diagnosis Method for Pump Station Units Based on CWT-MHA-CNN Model for Sustainable Operation of Inter-Basin Water Transfer Projects
by Hongkui Ren, Tao Zhang, Qingqing Tian, Hongyu Yang, Yu Tian, Lei Guo and Kun Ren
Sustainability 2025, 17(24), 11383; https://doi.org/10.3390/su172411383 - 18 Dec 2025
Viewed by 344
Abstract
Inter-basin water transfer projects are core infrastructure for achieving sustainable water resource allocation and addressing regional water scarcity, and pumping station units, as their critical energy-consuming and operation-controlling components, are vital to the projects’ sustainable performance. With the growing complexity and scale of these projects, pumping station units have become more intricate, leading to a gradual rise in failure rates. However, existing fault diagnosis methods lag behind these demands, failing to promptly detect potential faults—this not only threatens operational safety but also undermines sustainable development goals: equipment failures cause excessive energy consumption (violating energy efficiency requirements for sustainability), unplanned downtime disrupts stable water supply (impairing reliable water resource access), and even leads to water waste or environmental risks. To address this sustainability-oriented challenge, this paper focuses on the fault characteristics of pumping station units and proposes a comprehensive and accurate fault diagnosis model, aiming to enhance the sustainability of water transfer projects through technical optimization. The model utilizes advanced algorithms and data processing technologies to accurately identify fault types, thereby laying a technical foundation for the low-energy, reliable, and sustainable operation of pumping stations. Firstly, continuous wavelet transform (CWT) converts one-dimensional time-domain signals into two-dimensional time-frequency graphs, visually displaying dynamic signal characteristics to capture early fault features that may cause energy waste. Next, the multi-head attention mechanism (MHA) segments the time-frequency graphs and correlates feature-location information via independent self-attention layers, accurately capturing the temporal correlation of fault evolution—this enables early fault warning to avoid prolonged inefficient operation and energy loss. Finally, the improved convolutional neural network (CNN) layer integrates feature information and temporal correlation, outputting predefined fault probabilities for accurate fault determination. Experimental results show the model effectively solves the difficulty of feature extraction in pumping station fault diagnosis, considers fault evolution timeliness, and significantly improves prediction accuracy and anti-noise performance. Comparative experiments with three existing methods verify its superiority. Critically, this model strengthens sustainability in three key ways: (1) early fault detection reduces unplanned downtime, ensuring stable water supply (a core sustainable water resource goal); (2) accurate fault localization cuts unnecessary maintenance energy consumption, aligning with energy-saving requirements; (3) reduced equipment failure risks minimize water waste and environmental impacts. Thus, it not only provides a new method for pumping station fault diagnosis but also offers technical support for the sustainable operation of water conservancy infrastructure, contributing to global sustainable development goals (SDGs) related to water and energy.
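The CWT step that turns a one-dimensional vibration signal into a time-frequency map can be sketched with a simplified real-valued Morlet wavelet. This is a didactic, non-normalized version computing one row (one scale) of the map by brute-force correlation, not the implementation used in the paper.

```python
import math

def morlet(t, scale, w0=6.0):
    """Real part of a simplified, non-normalized Morlet wavelet."""
    u = t / scale
    return math.exp(-0.5 * u * u) * math.cos(w0 * u) / math.sqrt(scale)

def cwt_row(signal, scale):
    """One row of a time-frequency map: correlate the signal with the
    wavelet centred at every sample position b."""
    n = len(signal)
    return [sum(signal[k] * morlet(k - b, scale) for k in range(n))
            for b in range(n)]
```

Stacking `cwt_row` over a range of scales yields the two-dimensional time-frequency image that the MHA and CNN stages then consume; a scale whose wavelet frequency matches an oscillation in the signal produces a bright band in that row.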

21 pages, 2676 KB  
Article
Digital Twin-Enabled Distributed Robust Scheduling for Park-Level Integrated Energy Systems
by Xiao Chang, Shengwen Li, Qiang Wang, Liang Ji and Bitian Huang
Energies 2025, 18(24), 6471; https://doi.org/10.3390/en18246471 - 10 Dec 2025
Viewed by 251
Abstract
With the deepening of multi-energy coupling and the integration of high proportions of renewable energy, the Park Integrated Energy System (PIES) demonstrates enhanced energy utilization flexibility. However, the random fluctuations in photovoltaic (PV) output also pose new challenges for system dispatch. Existing distributed robust scheduling approaches largely rely on offline predictive models and therefore lack dynamic correction mechanisms that incorporate real-time operational data. Moreover, the initial probability distribution of PV output is often difficult to obtain accurately, which further degrades scheduling performance. To address these limitations, this paper develops a PV digital twin model capable of providing more accurate and continuously updated initial probability distributions of PV output for distributed robust scheduling in PIESs. Building upon this foundation, this paper proposes a distributed robust scheduling method for the PIES based on digital twins. This approach aims to maximize the flexibility of energy utilization in PIESs and overcome the challenges posed by random fluctuations in PV output to PIES operational scheduling. First, a PIES model is established after investigating a park-level practical integrated energy system. To describe the uncertainty of PV output, a PV digital twin model that incorporates historical data and temporal features is developed. The long short-term memory (LSTM) neural network is employed for output prediction, and real-time data are integrated for dynamic correction. On this basis, error perturbations are introduced, and PV scenario generation and reduction are carried out using Latin hypercube sampling and k-means clustering. To achieve multi-energy cascade utilization, the objective of optimization is defined as the minimization of the sum of system operating cost and curtailment cost. To this end, a two-stage distributed robust optimization model is constructed. The optimal scheduling scheme was obtained by solving the problem using the column-and-constraint generation (CCG) algorithm. The proposed method was finally validated through a case study involving an actual industrial park. The findings indicate that the constructed digital twin model achieves a significant improvement in prediction accuracy compared to traditional models, with the root mean square error and mean absolute error reduced by 13.3% and 10.81%, respectively. Furthermore, the proposed distributed robust scheduling strategy significantly enhances the operational economics of PIESs while maintaining system robustness, compared to conventional methods, thereby demonstrating its practical application value in PIES scheduling.
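Latin hypercube sampling, used above to generate PV scenarios, stratifies each dimension so that every sample falls in a distinct stratum. A minimal sketch on the unit hypercube (the scaling to actual PV error distributions, and the subsequent k-means reduction, are omitted):

```python
import random

def latin_hypercube(n, dims, rng):
    """n stratified samples in [0,1)^dims: in every dimension, exactly one
    sample lands in each of the n equal-width strata."""
    samples = [[0.0] * dims for _ in range(n)]
    for d in range(dims):
        perm = list(range(n))
        rng.shuffle(perm)           # random stratum assignment per dimension
        for i in range(n):
            samples[i][d] = (perm[i] + rng.random()) / n
    return samples
```

Compared with plain Monte Carlo, this guarantees coverage of the whole range with far fewer scenarios, which is why it pairs well with a scenario-reduction step.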

25 pages, 2845 KB  
Article
Power Quality Data Augmentation and Processing Method for Distribution Terminals Considering High-Frequency Sampling
by Ruijiang Zeng, Zhiyong Li, Haodong Liu, Wenxuan Che, Jiamu Yang, Sifeng Li and Zhongwei Sun
Energies 2025, 18(24), 6426; https://doi.org/10.3390/en18246426 - 9 Dec 2025
Viewed by 231
Abstract
The safe and stable operation of distribution networks relies on the real-time monitoring, analysis, and feedback of power quality data. However, with the continuous advancement of distribution network construction, the number of distributed power electronic devices has increased significantly, leading to frequent power quality issues such as voltage fluctuations, harmonic pollution, and three-phase unbalance in distribution terminals. Therefore, the augmentation and processing of power quality data have become crucial for ensuring the stable operation of distribution networks. Traditional methods for augmenting and processing power quality data fail to consider the differentiated characteristics of burrs in signal sequences and neglect the comprehensive consideration of both time-domain and frequency-domain features in disturbance identification. This results in the distortion of high-frequency fault information, and insufficient robustness and accuracy in identifying Power Quality Disturbance (PQD) against the complex noise background of distribution networks. In response to these issues, we propose a power quality data augmentation and processing method for distribution terminals considering high-frequency sampling. Firstly, a burr removal method for the sampled waveform based on a high-frequency filter operator is proposed. By comprehensively considering the characteristics of concavity and convexity in both burr and normal waveforms, a high-frequency filtering operator is introduced. Additional constraints and parameters are applied to suppress sequences with burr characteristics, thereby accurately eliminating burrs while preserving the key features of valid information. This approach avoids distortion of high-frequency fault information after filtering, which supports subsequent PQD identification. Secondly, a PQD identification method based on a dual-channel time–frequency feature fusion network is proposed. The PQD signals undergo an S-transform and period reconfiguration to construct matrix image features in the time–frequency domain. Finally, these features are input into a Convolutional Neural Network (CNN) and a Transformer encoder to extract highly coupled global features, which are then fused through a cross-attention mechanism. The identification results of PQD are output through a classification layer, thereby enhancing the robustness and accuracy of disturbance identification against the complex noise background of distribution networks. Simulation results demonstrate that the proposed algorithm achieves optimal burr removal and disturbance identification accuracy.
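To make the burr-removal idea concrete, here is a deliberately simple median-based despiker: it replaces only samples that deviate strongly from their local median, leaving smooth waveform features untouched. This is a generic illustration of spike suppression, not the paper's concavity/convexity-based high-frequency filter operator.

```python
def despike(signal, window=3, threshold=2.0):
    """Replace samples that deviate from the local median by more than
    `threshold` (burrs), preserving normal waveform samples."""
    half = window // 2
    out = list(signal)
    for i in range(half, len(signal) - half):
        neighborhood = sorted(signal[i - half:i + half + 1])
        median = neighborhood[len(neighborhood) // 2]
        if abs(signal[i] - median) > threshold:
            out[i] = median
    return out
```

The design trade-off the abstract highlights is visible even here: a too-aggressive threshold would also flatten genuine high-frequency fault transients, which is exactly what the proposed operator is built to avoid.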

27 pages, 56691 KB  
Article
MalVis: Large-Scale Bytecode Visualization Framework for Explainable Android Malware Detection
by Saleh J. Makkawy, Michael J. De Lucia and Kenneth E. Barner
J. Cybersecur. Priv. 2025, 5(4), 109; https://doi.org/10.3390/jcp5040109 - 4 Dec 2025
Viewed by 641
Abstract
As technology advances, developers continually create innovative solutions to enhance smartphone security. However, the rapid spread of Android malware poses significant threats to devices and sensitive data. The Android Operating System (OS)’s open-source nature and Software Development Kit (SDK) availability mainly contribute to this alarming growth. Conventional malware detection methods, such as signature-based, static, and dynamic analysis, face challenges in detecting obfuscated techniques, including encryption, packing, and compression, in malware. Although developers have created several visualization techniques for malware detection using deep learning (DL), they often fail to accurately identify the critical malicious features of malware. This research introduces MalVis, a unified visualization framework that integrates entropy and N-gram analysis to emphasize meaningful structural and anomalous operational patterns within the malware bytecode. By addressing significant limitations of existing visualization methods, such as insufficient feature representation, limited interpretability, small dataset sizes, and restricted data access, MalVis delivers enhanced detection capabilities, particularly for obfuscated and previously unseen (zero-day) malware. The framework leverages the MalVis dataset introduced in this work, a publicly available large-scale dataset comprising more than 1.3 million visual representations in nine malware classes and one benign class. A comprehensive comparative evaluation was performed against existing state-of-the-art visualization techniques using leading convolutional neural network (CNN) architectures, MobileNet-V2, DenseNet201, ResNet50, VGG16, and Inception-V3. To further boost classification performance and mitigate overfitting, the outputs of these models were combined using eight distinct ensemble strategies. To address the issue of imbalanced class distribution in the multiclass dataset, we employed an undersampling technique to ensure balanced learning across all types of malware. MalVis achieved superior results, with 95% accuracy, 90% F1-score, 92% precision, 89% recall, 87% Matthews Correlation Coefficient (MCC), and 98% Receiver Operating Characteristic Area Under Curve (ROC-AUC). These findings highlight the effectiveness of MalVis in providing interpretable and accurate representation features for malware detection and classification, making it valuable for research and real-world security applications.
(This article belongs to the Section Security Engineering & Applications)
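Entropy analysis over bytecode, one of the two signals MalVis visualizes, measures how "random" each region of a binary looks; packed or encrypted sections approach 8 bits/byte while plain code sits lower. A generic sliding-window Shannon entropy sketch (window size and names are illustrative, not the MalVis pipeline):

```python
import math
from collections import Counter

def window_entropy(data, window):
    """Shannon entropy (bits per byte) of consecutive non-overlapping
    windows of a byte string; high values flag packed/encrypted regions."""
    out = []
    for i in range(0, len(data) - window + 1, window):
        counts = Counter(data[i:i + window])
        out.append(-sum(c / window * math.log2(c / window)
                        for c in counts.values()))
    return out
```

Mapping each window's entropy to a pixel intensity yields one channel of an entropy image; the N-gram channel is built analogously from opcode sequence frequencies.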

17 pages, 3389 KB  
Article
Dynamic Monitoring Method of Polymer Injection Molding Product Quality Based on Operating Condition Drift Detection and Incremental Learning
by Guancheng Shen, Sihong Li, Yun Zhang, Huamin Zhou and Maoyuan Li
Polymers 2025, 17(22), 3025; https://doi.org/10.3390/polym17223025 - 14 Nov 2025
Viewed by 661
Abstract
Prediction models for polymer injection molding quality often degrade due to shifts in operating conditions caused by variations in melting temperature, cooling efficiency, or machine conditions. To address this challenge, this study proposes a drift-aware dynamic quality-monitoring framework that integrates hybrid-feature autoencoder (HFAE) drift detection, sliding-window reconstruction error analysis, and a mixed-feature artificial neural network (ANN) for online quality prediction. First, shifts in processing parameters are rigorously quantified to uncover continuous drifts in both input and conditional output distributions. A HFAE monitors reconstruction errors within a sliding window to promptly detect anomalous deviations. Once the drift index exceeds a predefined threshold, the system automatically triggers a drift-event response, including the collection and labeling of a small batch of new samples. In benchmark tests, this adaptive scheme outperforms static models, achieving a 35.4% increase in overall accuracy. After two incremental updates, the root-mean-squared error decreases by 42.3% across different production intervals. The anomaly detection rate falls from 0.86 to 0.09, effectively narrowing the distribution gap between training and testing sets. By tightly coupling drift detection with online model adaptation, the proposed method not only maintains high-fidelity quality predictions under dynamically evolving injection molding conditions but also demonstrates practical relevance for large-scale industrial production, enabling reduced rework, improved process stability, and lower sampling frequency.
(This article belongs to the Section Polymer Processing and Engineering)
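The sliding-window drift trigger described above reduces to a simple test: compare the recent mean reconstruction error against a healthy baseline and fire when the ratio crosses a threshold. A minimal sketch, assuming a scalar per-sample error stream (the HFAE itself and the definition of the paper's drift index are not reproduced here):

```python
def drift_index(errors, window, baseline):
    """Ratio of the mean reconstruction error over the last `window`
    samples to a healthy-operation baseline error."""
    recent = errors[-window:]
    return (sum(recent) / len(recent)) / baseline

def drift_detected(errors, window, baseline, threshold=1.5):
    """Trigger a drift-event response when the index exceeds the threshold."""
    return drift_index(errors, window, baseline) > threshold
```

On detection, the framework collects and labels a small batch of fresh samples and performs an incremental model update, closing the adaptation loop.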

17 pages, 2138 KB  
Article
Surface Electromyography-Based Wrist Angle Estimation and Robotic Arm Control with Echo State Networks
by Toshihiro Kawase and Hiroki Ikeda
Actuators 2025, 14(11), 548; https://doi.org/10.3390/act14110548 - 9 Nov 2025
Cited by 1 | Viewed by 857
Abstract
Continuous estimation of joint angles based on surface electromyography (sEMG) signals is a promising method for naturally controlling prosthetic limbs and assistive devices. However, conventional methods based on neural networks have limitations such as long training times and calibration burdens. This study investigates the use of an echo state network (ESN), which enables fast training, to estimate wrist joint angles from sEMG. Five participants mimicked the motion of a 1-degree-of-freedom robotic arm by flexing and extending their wrist, while sEMG signals from the wrist flexor and extensor muscles and the robotic arm’s angle were recorded. The ESN was trained to take two sEMG channels as input and the robotic joint angle as output. High-accuracy estimation with a median coefficient of determination R2 = 0.835 was achieved for representative ESN parameters. Additionally, the effects of the reservoir size, spectral radius, and time constant on estimation accuracy were evaluated using data from a single participant. Furthermore, online estimation of joint angles based on sEMG signals enabled successful control of the robotic arm. These results suggest that sEMG-based ESN estimation offers fast, accurate joint control and could be useful for prosthetics and fundamental studies on body perception.
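ESNs train quickly because only a linear readout is fitted; the recurrent reservoir stays fixed. The core of the method is the leaky-integrator state update sketched below with a tiny, randomly wired reservoir driven by two sEMG-like inputs (sizes, weight scales, and the leak rate are illustrative, not the paper's parameters, and the trained readout is omitted):

```python
import math
import random

def reservoir_step(x, u, w_res, w_in, leak=0.3):
    """Leaky-integrator ESN state update:
    x' = (1 - a) * x + a * tanh(W_res x + W_in u), with leak rate a."""
    new_state = []
    for i in range(len(x)):
        pre = (sum(w_res[i][j] * x[j] for j in range(len(x)))
               + sum(w_in[i][k] * u[k] for k in range(len(u))))
        new_state.append((1 - leak) * x[i] + leak * math.tanh(pre))
    return new_state

# illustrative tiny reservoir with small random weights
rng = random.Random(0)
n = 10
w_res = [[rng.uniform(-0.1, 0.1) for _ in range(n)] for _ in range(n)]
w_in = [[rng.uniform(-0.5, 0.5) for _ in range(2)] for _ in range(n)]
state = [0.0] * n
for _ in range(20):
    state = reservoir_step(state, [0.5, 0.3], w_res, w_in)
```

Training then amounts to ridge regression from collected reservoir states to recorded joint angles, which is why calibration takes seconds rather than the long training times of conventional networks.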

17 pages, 4362 KB  
Article
Developing Statistical and Multilayer Perceptron Neural Network Models for a Concrete Dam Dynamic Behaviour Interpretation
by Andrés Mauricio Guzmán Sejas, Sérgio Pereira, Juan Mata and Álvaro Cunha
Infrastructures 2025, 10(11), 301; https://doi.org/10.3390/infrastructures10110301 - 9 Nov 2025
Viewed by 1392
Abstract
This work focuses on monitoring the dynamic behaviour of concrete dams, with a specific emphasis on the Baixo Sabor dam as a case study. The main objective of dynamic monitoring is to continuously observe the dam's behaviour, ensuring it remains within expected patterns and issuing alerts if deviations occur. The monitoring process relies on on-site instruments and behaviour models based on pattern recognition, thereby avoiding explicit dependence on mechanical principles. The work aimed to develop, calibrate, and compare statistical and machine learning models to aid in interpreting the observed dynamic behaviour of a concrete dam. The methodology included several key steps: operational modal analysis of acceleration time series, characterisation of the temporal evolution of observed magnitudes and influential environmental and operational variables, construction and calibration of predictive models using both statistical and machine learning methods, and comparison of their effectiveness. Both Multiple Linear Regression (MLR) and Multilayer Perceptron Neural Network (MLP-NN) models were developed and tested. This work emphasised the development of several MLP-NN architectures: models with one or two hidden layers, and with one or more outputs in the output layer, were implemented. The aim was to assess the performance of MLP-NN models with different numbers of units in the output layer, in order to understand the advantages and disadvantages of having multiple models, each characterising the observed behaviour of a single quantity, versus a single MLP-NN model that simultaneously learns and characterises the observed behaviour of multiple quantities. The results showed that while both MLR and MLP-NN models effectively captured and predicted the dam's behaviour, the neural network slightly outperformed the regression model in prediction accuracy. However, the linear regression model is easier to interpret. 
In conclusion, both linear regression and neural network models are suitable for analysing and interpreting monitored dynamic behaviour, but there are advantages in adopting a single model that considers all quantities simultaneously. For large-scale projects like the Baixo Sabor dam, Multilayer Perceptron Neural Networks offer significant advantages in handling intricate data relationships, thus providing better insights into the dam's dynamic behaviour. Full article
(This article belongs to the Special Issue Preserving Life Through Dams)
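The single-output versus multi-output comparison described above can be illustrated with a minimal one-hidden-layer MLP in NumPy. The architecture sizes, training schedule, and synthetic inputs and monitored quantities below are illustrative assumptions, not the calibrated models from the study:

```python
import numpy as np

rng = np.random.default_rng(1)

def train_mlp(X, Y, hidden=16, lr=0.05, epochs=2000):
    """One-hidden-layer MLP (tanh) trained by full-batch gradient descent on MSE.
    Y may have one column (single-quantity model) or several columns
    (a single model predicting multiple monitored quantities at once)."""
    n_in, n_out = X.shape[1], Y.shape[1]
    W1 = rng.normal(0, 0.5, (n_in, hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0, 0.5, (hidden, n_out)); b2 = np.zeros(n_out)
    for _ in range(epochs):
        H = np.tanh(X @ W1 + b1)          # hidden-layer activations
        P = H @ W2 + b2                   # linear output layer
        G = 2 * (P - Y) / len(X)          # d(MSE)/dP
        GH = (G @ W2.T) * (1 - H**2)      # backprop through tanh
        W2 -= lr * H.T @ G;  b2 -= lr * G.sum(0)
        W1 -= lr * X.T @ GH; b1 -= lr * GH.sum(0)
    return lambda Xn: np.tanh(Xn @ W1 + b1) @ W2 + b2

# Synthetic stand-ins for environmental/operational inputs and for two
# monitored quantities (e.g. two natural frequencies)
X = rng.uniform(-1, 1, (200, 3))
Y = np.column_stack([np.sin(X[:, 0]) + 0.1 * X[:, 1],
                     np.cos(X[:, 1]) - 0.1 * X[:, 2]])
multi = train_mlp(X, Y)            # one model, two output units
single = train_mlp(X, Y[:, :1])    # one model per quantity
```

The multi-output model shares its hidden layer across quantities, which is what lets a single network learn common environmental effects, while per-quantity models keep each fit independent.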

26 pages, 7058 KB  
Article
Geo-PhysNet: A Geometry-Aware and Physics-Constrained Graph Neural Network for Aerodynamic Pressure Prediction on Vehicle Fluid–Solid Surfaces
by Bowen Liu, Hao Wang, Liheng Xue and Yin Long
Appl. Sci. 2025, 15(21), 11645; https://doi.org/10.3390/app152111645 - 31 Oct 2025
Viewed by 781
Abstract
The aerodynamic pressure of a car is crucial for its shape design. To overcome the time-consuming and costly bottleneck of wind tunnel tests and computational fluid dynamics (CFD) simulations, deep learning-based surrogate models have emerged as highly promising alternatives. However, existing methods that predict only on object surfaces learn just the pointwise mapping of pressure, whereas a physically realistic field has values and gradients that are structurally unified and self-consistent. Existing methods thus ignore the crucial differential structure and intrinsic continuity of the physical field as a whole. As a result, their predictions, even when locally close in value, often show unrealistic gradient distributions and high-frequency oscillations at the macroscopic scale, greatly limiting their reliability and practicality in engineering decisions. To address this, this study proposes the Geo-PhysNet model, a graph neural network framework with strong physical constraints, specifically designed for complex surface manifolds. The framework learns a differential representation, and its network architecture simultaneously predicts the pressure scalar field and its tangential gradient vector field on the surface manifold within a unified framework. By making the gradient an explicit learning target, we force the network to understand the local mechanical causes of pressure changes, thereby mathematically ensuring the self-consistency of the field's intrinsic structure, rather than merely learning the numerical mapping of pressure. Finally, to address the noise commonly seen in the predictions of existing methods, we introduce a physical regularization term based on the surface Laplacian operator to penalize non-smooth solutions, ensuring the physical rationality of the final output field. 
Experimental verification shows that Geo-PhysNet not only outperforms existing benchmark models in numerical accuracy but, more importantly, offers clear advantages in the physical authenticity, field continuity, and gradient smoothness of the generated pressure fields. Full article
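The Laplacian-based smoothness penalty can be illustrated on a toy graph: with the combinatorial Laplacian L = D - A (a discrete stand-in for the surface Laplace-Beltrami operator used by the regularizer), the quadratic form p^T L p sums squared pressure differences across mesh edges, so oscillatory fields are penalized and constant fields are not. The edge list and values below are illustrative assumptions, not the vehicle mesh:

```python
import numpy as np

# Toy surface mesh given as an edge list over 5 vertices; a real model would
# use the car-body mesh exported from the CFD simulation.
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (0, 2), (1, 3)]
n = 5

# Combinatorial graph Laplacian L = D - A
A = np.zeros((n, n))
for i, j in edges:
    A[i, j] = A[j, i] = 1.0
L = np.diag(A.sum(1)) - A

def laplacian_penalty(p):
    """Penalize non-smooth pressure fields: p^T L p equals the sum over edges
    of (p_i - p_j)^2, and is zero iff p is constant on each connected component."""
    return float(p @ L @ p)

p_smooth = np.ones(n)                        # constant field: zero penalty
p_noisy = np.array([1., -1., 1., -1., 1.])   # oscillatory field: large penalty
```

Adding such a term to the training loss pushes the network toward fields without the high-frequency oscillations the abstract describes, at the cost of a weighting hyperparameter that trades smoothness against pointwise accuracy.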

22 pages, 4342 KB  
Article
Cloud-Based Personalized sEMG Classification Using Lightweight CNNs for Long-Term Haptic Communication in Deaf-Blind Individuals
by Kaavya Tatavarty, Maxwell Johnson and Boris Rubinsky
Bioengineering 2025, 12(11), 1167; https://doi.org/10.3390/bioengineering12111167 - 27 Oct 2025
Viewed by 899
Abstract
Deaf-blindness, particularly in progressive conditions such as Usher syndrome, presents profound challenges to communication, independence, and access to information. Existing tactile communication technologies for individuals with Usher syndrome are often limited by the need for close physical proximity to trained interpreters, typically requiring hand-to-hand contact. In this study, we introduce a novel, cloud-based, AI-assisted gesture recognition and haptic communication system designed for long-term use by individuals with Usher syndrome, whose auditory and visual abilities deteriorate with age. Central to our approach is a wearable haptic interface that relocates tactile input and output from the hands to an arm-mounted sleeve, thereby preserving manual dexterity and enabling continuous, bidirectional tactile interaction. The system uses surface electromyography (sEMG) to capture user-specific muscle activations in the hand and forearm and employs lightweight, personalized convolutional neural networks (CNNs), hosted on a centralized server, to perform real-time gesture classification. A key innovation of the system is its ability to adapt over time to each user’s evolving physiological condition, including the progressive loss of vision and hearing. Experimental validation using a public dataset, along with real-time testing involving seven participants, demonstrates that personalized models consistently outperform cross-user models in terms of accuracy, adaptability, and usability. This platform offers a scalable, longitudinally adaptable solution for non-visual communication and holds significant promise for advancing assistive technologies for the deaf-blind community. Full article
(This article belongs to the Section Biosignal Processing)
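A "lightweight CNN" of the kind described, small enough to train per user on a central server, might look like the following forward pass: one 1-D convolution over the sEMG channels, ReLU, global average pooling, and a softmax head. The channel count, filter sizes, and number of gesture classes are illustrative assumptions, not the authors' architecture:

```python
import numpy as np

rng = np.random.default_rng(2)

def conv1d(x, kernels):
    """Valid 1-D convolution: x is (channels, T), kernels is (filters, channels, k)."""
    f, c, k = kernels.shape
    T = x.shape[1] - k + 1
    out = np.empty((f, T))
    for i in range(T):
        out[:, i] = np.tensordot(kernels, x[:, i:i + k], axes=([1, 2], [0, 1]))
    return out

def lightweight_cnn(x, params):
    """Tiny sEMG gesture classifier: conv + ReLU, global average pooling,
    then a linear softmax head. Per-user personalization would mean fitting
    one such params dict per user."""
    h = np.maximum(conv1d(x, params["conv"]), 0.0)   # conv + ReLU
    h = h.mean(axis=1)                               # global average pooling
    logits = params["W"] @ h + params["b"]
    e = np.exp(logits - logits.max())
    return e / e.sum()                               # class probabilities

# 8 sEMG channels, 200-sample window, 4 filters of width 9, 5 gesture classes
params = {"conv": rng.normal(0, 0.1, (4, 8, 9)),
          "W": rng.normal(0, 0.1, (5, 4)),
          "b": np.zeros(5)}
probs = lightweight_cnn(rng.normal(size=(8, 200)), params)
```

Global average pooling keeps the parameter count independent of window length, which is one common way to keep such per-user models small enough for frequent server-side retraining.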
