Search Results (3,996)

Search Parameters:
Keywords = recurrent neural network

25 pages, 2769 KB  
Article
Spec-RWKV: A Spectrum-Guided Multi-Scale Recurrent Modeling Framework for Multi-Center Resting-State fMRI-Assisted Diagnosis
by Sihang Peng and Qi Xu
Brain Sci. 2026, 16(5), 455; https://doi.org/10.3390/brainsci16050455 - 24 Apr 2026
Abstract
Background: Multi-center resting-state functional magnetic resonance imaging (rs-fMRI) is important for neurodevelopmental disorder diagnosis, but cross-site differences in repetition time (TR) can cause temporal feature misalignment. In addition, blood-oxygen-level-dependent (BOLD) signals are non-stationary, so disease-related information may be distributed across multiple time scales. Existing methods usually do not explicitly model physical sampling intervals or coordinate temporal and spectral information across scales, which may limit cross-site generalization in heterogeneous multi-center settings. Methods: We propose Spec-RWKV, a spectrum-guided linear recurrent framework for multi-site rs-fMRI diagnosis. It includes three components: PrismTimeMix, which models temporal dynamics using decay rates derived from physical half-lives and converts them adaptively across TRs; a TR-adaptive continuous wavelet transform, which aligns spectral representations across sites by adjusting frequency coverage; and spectrum-guided adaptive temporal aggregation, which uses spectral context to weight temporal features. Results: On ABIDE-I and ADHD-200, Spec-RWKV achieved AUCs of 75.86% and 76.31%, respectively. Under leave-one-site-out validation, it achieved the best mean AUC on ABIDE-I and the best mean accuracy and AUC on ADHD-200. Conclusions: Spec-RWKV explicitly models sampling-rate differences and multi-scale spectral structure, with results supporting strong cross-site generalizability. Full article
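The TR-conversion idea above can be sketched in a few lines: if temporal decay is anchored to a physical half-life measured in seconds, each site's per-volume decay factor follows from its own TR, so the effective decay per second of scan time is identical across sites. This is an illustrative sketch, not the authors' implementation; `tr_adaptive_decay` and the 10 s half-life are hypothetical names and values.

```python
import numpy as np

def tr_adaptive_decay(half_life_s, tr_s):
    # Per-volume decay factor chosen so that a feature's influence halves
    # after half_life_s seconds of scan time, regardless of the site's TR.
    return 0.5 ** (tr_s / half_life_s)

# The same physical half-life yields different per-step decays at different TRs
d_fast = tr_adaptive_decay(10.0, tr_s=2.0)   # site scanning at TR = 2.0 s
d_slow = tr_adaptive_decay(10.0, tr_s=2.5)   # site scanning at TR = 2.5 s

# After 10 s of scan time, both sites have decayed by exactly one half-life
assert np.isclose(d_fast ** (10.0 / 2.0), 0.5)
assert np.isclose(d_slow ** (10.0 / 2.5), 0.5)
```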
26 pages, 11449 KB  
Article
Signal Intelligence: Vibration-Driven Deep Learning for Anomaly Detection of Rotary-Wing UAVs
by Alican Yilmaz, Erkan Caner Ozkat and Fatih Gul
Drones 2026, 10(5), 321; https://doi.org/10.3390/drones10050321 - 24 Apr 2026
Abstract
Unmanned aerial vehicles (UAVs) operating in safety-critical missions require effective anomaly detection methods to identify propulsion-system faults before they cause catastrophic failures. However, current vibration-based diagnostic models typically rely on datasets representing only discrete, isolated fault states, and do not capture the continuous structural degradation that occurs during real flight operations. To address this gap, this study proposes a severity-ordered vibration data augmentation framework for anomaly detection in rotary-wing UAV propulsion systems. Controlled experiments were conducted under healthy, tape-induced imbalance, scratch, and cut propeller conditions using stepped throttle excitation from 10% to 100% in 10% increments, with 40 s per level. A severity-ordered arrangement strategy based on throttle level and a robust peak-to-peak severity metric generated approximately 7.5 h of augmented vibration data per axis, representing a continuous degradation trajectory. Three-axis continuous wavelet transform (CWT) scalograms of size 48×96×3 were used to train an unsupervised anomaly detection framework. Comparative experiments with Isolation Forest, One-Class SVM, and LSTM–AE demonstrated that the proposed Convolutional Neural Network (CNN)–Bidirectional Gated Recurrent Unit (BiGRU)–State-Space Model (SSM)–Autoencoder (AE) architecture achieved the best performance, reaching 0.9959 precision, 0.4428 recall, 0.6131 F1-score, and 0.9284 Area Under the Receiver Operating Characteristic Curve (AUROC). The ablation study further showed that incorporating temporal modeling and state-space dynamics improves detection robustness compared with CNN–AE and CNN–BiGRU–AE baselines. These results show that combining severity-ordered augmentation with deep temporal learning improves progressive propulsion anomaly detection in UAV vibration monitoring. 
This work introduces a methodology that connects rotor dynamics principles with deep learning, providing a continuous degradation manifold that improves early-stage detection and condition monitoring of UAV propulsion systems. Full article
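A minimal sketch of the robust peak-to-peak severity ordering described above. The percentile-based clipping (0.5%/99.5%) and the simulated sine-plus-noise segments are assumptions for illustration; the paper's exact formulation may differ.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated single-axis vibration segments at increasing throttle levels:
# higher throttle -> larger oscillation amplitude, plus measurement noise.
segments = [a * np.sin(np.linspace(0, 40 * np.pi, 2000))
            + 0.05 * rng.standard_normal(2000)
            for a in (0.2, 0.5, 1.0, 1.5)]

# Robust peak-to-peak severity per segment (percentiles resist outliers
# better than a raw max-minus-min)
severity = [np.percentile(s, 99.5) - np.percentile(s, 0.5) for s in segments]

# Severity-ordered arrangement: index segments from least to most severe
order = np.argsort(severity)
assert list(order) == [0, 1, 2, 3]
```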

17 pages, 11454 KB  
Article
Informer-Based Precipitation Forecasting Using Ground Station Data in Guangxi, China
by Ting Zhang, Donghong Qin, Deyi Wang, Soung-Yue Liew and Huasheng Zhao
Atmosphere 2026, 17(5), 429; https://doi.org/10.3390/atmos17050429 - 22 Apr 2026
Abstract
Precipitation forecasting is essential for disaster prevention, water resource management, and socio-economic resilience. The field has evolved from numerical weather prediction (NWP) and optical-flow-based methods toward data-driven deep learning approaches that can exploit larger observational datasets and model complex nonlinear relationships. Against this background, this study evaluates multi-station temporal forecasting models within a single-year, station-based proof-of-concept benchmark under unified data conditions. We adapt the Transformer and Informer architectures to this meteorological setting, rigorously preprocess the AWS dataset to avoid data leakage, and select predictive variables using complementary linear and nonlinear relevance criteria. Model performance is assessed using continuous and categorical precipitation metrics, including the Critical Success Index (CSI). The results show that the Informer outperforms the recurrent neural network (RNN) baselines and achieves the lowest mean MAE and RMSE together with the highest mean CSI among the evaluated models while using substantially fewer parameters than the standard Transformer. However, its sample-wise absolute error distribution remains statistically comparable to that of the standard Transformer. Overall, this study establishes a single-year, station-based proof-of-concept benchmark for comparing architectures in very-short-term (1–5 h ahead) precipitation forecasting. Full article
(This article belongs to the Special Issue Atmospheric Modeling with Artificial Intelligence Technologies)
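The Critical Success Index used for categorical evaluation above has a standard definition: hits over hits plus misses plus false alarms, after thresholding both observation and forecast into rain/no-rain. A small sketch (the 0.1 mm threshold and the toy arrays are illustrative):

```python
import numpy as np

def critical_success_index(obs_mm, pred_mm, threshold=0.1):
    # CSI = hits / (hits + misses + false alarms) at a rain/no-rain threshold
    obs = obs_mm >= threshold
    pred = pred_mm >= threshold
    hits = np.sum(obs & pred)
    misses = np.sum(obs & ~pred)
    false_alarms = np.sum(~obs & pred)
    return hits / (hits + misses + false_alarms)

obs  = np.array([0.0, 0.5, 1.2, 0.0, 0.3])
pred = np.array([0.2, 0.4, 0.9, 0.0, 0.0])
# hits = 2, misses = 1, false alarms = 1 -> CSI = 2 / 4 = 0.5
assert np.isclose(critical_success_index(obs, pred), 0.5)
```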

6 pages, 897 KB  
Proceeding Paper
Implementation of Deep Belief Network with Sensor Correction Algorithm to Predict Weather on a Raspberry Pi
by Alaric S. Espiña, Franchesca Shieville F. Castro and Rosemarie V. Pellegrino
Eng. Proc. 2026, 134(1), 77; https://doi.org/10.3390/engproc2026134077 - 21 Apr 2026
Abstract
Weather is an essential part of life that affects livelihoods such as agriculture and aviation. Existing systems for weather prediction use deep learning frameworks such as Recurrent Neural Networks and Long Short-Term Memory. These models, however, suffer from vanishing gradients that reduce prediction accuracy. We developed a model based on Deep Belief Networks to address this. Historical weather data were obtained from the Philippine Atmospheric, Geophysical and Astronomical Services Administration for model training, and ground-level sensor data were used to normalize the model inputs. The resulting multiclass accuracy is 80%; a larger dataset is recommended for better performance. Full article

28 pages, 1008 KB  
Review
Deep Learning for Credit Risk Prediction: A Survey of Methods, Applications, and Challenges
by Ibomoiye Domor Mienye, Ebenezer Esenogho and Cameron Modisane
Information 2026, 17(4), 395; https://doi.org/10.3390/info17040395 - 21 Apr 2026
Abstract
Credit risk prediction is central to financial stability and regulatory compliance, guiding lending decisions and portfolio risk management. While traditional approaches such as logistic regression and tree-based models have long been the industry standard, recent advances in deep learning (DL) have introduced architectures capable of capturing complex nonlinearities, temporal dynamics, and relational dependencies in borrower data. This study provides a comprehensive review of DL methods applied to credit risk prediction, covering multi-layer perceptrons, recurrent and convolutional neural networks, transformers, and graph neural networks. We examine benchmark and large-scale datasets, highlight peer-reviewed applications across corporate, consumer, and peer-to-peer lending, and evaluate the benefits of DL relative to classical machine learning. In addition, we critically assess key challenges and identify emerging opportunities. By synthesising methods, applications, and open challenges, this paper offers a roadmap for advancing trustworthy deep learning in credit risk modelling and bridging the gap between academic research and industry deployment. Full article
(This article belongs to the Special Issue Predictive Analytics and Data Science, 3rd Edition)

16 pages, 880 KB  
Article
Integer-State Dynamics in Quantized Spiking Neural Networks: Implications for Hardware-Oriented Design
by Lei Zhang
Electronics 2026, 15(8), 1756; https://doi.org/10.3390/electronics15081756 - 21 Apr 2026
Abstract
Spiking neural networks (SNNs) support energy-efficient machine intelligence because event-driven computation and sparse activity map naturally to low-power digital hardware. In practical implementations, however, membrane states, synaptic weights, and thresholds are represented with finite-precision integer arithmetic. Quantization, clipping, and overflow can therefore alter network dynamics rather than merely approximate a higher-precision model. This paper adopts an integer-state dynamical perspective, modeling a quantized SNN with a hardware-relevant update rule as a deterministic map on a bounded integer lattice. Rather than claiming recurrence itself as a new property, we focus on how finite-precision representation and implementation semantics shape observed recurrent regimes and activity patterns. We introduce a shift-based update rule with integer-valued states and investigate its behaviour through simulation-based analysis with network sizes N=30–130, connection densities 0.1–0.9, and bit widths 1 to 16 over T = 1000 steps. The results show bounded and recurrent temporal structure with strong quantization sensitivity. The observed regimes depend heavily on the semantics of representation and the scaling choices. These findings suggest that numerical precision can act as a dynamical design variable and provide useful implications for hardware-oriented SNN design, while motivating future work on attractor analysis and FPGA/ASIC validation. Full article
(This article belongs to the Special Issue Hardware Acceleration for Machine Learning)
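A toy version of the shift-based integer update rule described above, viewed as a deterministic map on a bounded integer lattice: leak by arithmetic right shift, add integer synaptic input, saturate to the representable range, threshold and reset. The threshold, clip range, and weight range here are illustrative, not the paper's settings.

```python
import numpy as np

def step(v, w, spikes, theta=16, vmax=127):
    # Shift-based leak (divide by 2 via right shift), integer synaptic input,
    # saturating clip to the bounded lattice, then threshold-and-reset.
    v = (v >> 1) + w @ spikes
    v = np.clip(v, 0, vmax)                  # finite-precision bounded state
    new_spikes = (v >= theta).astype(np.int64)
    v = np.where(new_spikes == 1, 0, v)      # reset fired neurons
    return v, new_spikes

rng = np.random.default_rng(1)
n = 30
w = rng.integers(0, 4, size=(n, n))          # small integer weights
v = np.zeros(n, dtype=np.int64)
spikes = rng.integers(0, 2, size=n)
for _ in range(1000):
    v, spikes = step(v, w, spikes)

# The state stays on the bounded integer lattice for all 1000 steps
assert v.dtype.kind == 'i' and v.min() >= 0 and v.max() <= 127
```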

25 pages, 4170 KB  
Article
Neuroevolution of Liquid State Machine Based on Neural Configurations and Positions
by Carlos-Alberto López-Herrera, Héctor-Gabriel Acosta-Mesa, Efrén Mezura-Montes and Jesús-Arnulfo Barradas-Palmeros
Math. Comput. Appl. 2026, 31(2), 65; https://doi.org/10.3390/mca31020065 - 21 Apr 2026
Abstract
Liquid State Machines (LSMs), a reservoir computing model based on recurrent spiking neural networks, provide a powerful framework for solving spatiotemporal classification tasks by leveraging rich temporal dynamics and event-driven processing. Although the traditional LSM formulation assumes a fixed, randomly generated reservoir, recent research has explored optimization strategies to improve liquid dynamics. However, most existing approaches focus primarily on optimizing synaptic connectivity or reservoir structure, while the role of neuron-level parameters remains largely underexplored. This work proposes a neuroevolutionary strategy based on a Genetic Algorithm (GA) that encodes both neuron configurations and their spatial positions, explicitly treating neuron-level parameters as optimization targets. By evolving neuron-specific parameters and spatial positions, the method induces diverse reservoir dynamics. Unlike approaches that directly optimize synaptic weights, the proposed representation maintains an encoding whose dimensionality scales linearly with the number of neurons. The approach was evaluated on four synthetic benchmark tasks, including one Frequency Recognition task and three Pattern Recognition tasks, using compact reservoirs composed of only 20 Leaky Integrate-and-Fire neurons. Despite the small reservoir size, the method achieved state-of-the-art or highly competitive performance, reaching mean accuracies of up to 99.71%. In the most challenging case (PR12), performance improved when the reservoir size was increased to 64 neurons. The method was further evaluated on two real-world datasets, N-MNIST and the Free Spoken Digit Dataset (FSDD), using reservoirs of 300 neurons, achieving 90.65% and 81.47% accuracy, respectively, while using substantially fewer neurons than many existing LSM-based approaches. 
These results highlight the potential of evolving neuron configurations and spatial organization to produce compact and effective liquid reservoirs. Full article
(This article belongs to the Special Issue New Trends in Computational Intelligence and Applications 2025)
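The Leaky Integrate-and-Fire dynamics underlying such reservoirs can be sketched compactly: exponential leak, input accumulation, spike-and-reset at threshold. The time constant, threshold, and constant drive below are illustrative values, not the paper's configuration.

```python
import numpy as np

def lif_step(v, i_in, tau=20.0, v_th=1.0, dt=1.0):
    # Leaky Integrate-and-Fire: exponential leak toward 0, add input,
    # emit a spike and reset when the membrane crosses threshold.
    v = v * np.exp(-dt / tau) + i_in
    spikes = v >= v_th
    v = np.where(spikes, 0.0, v)
    return v, spikes

v = np.zeros(20)                 # a compact 20-neuron pool, as in the paper
total_spikes = 0
for t in range(100):
    v, s = lif_step(v, i_in=np.full(20, 0.06))
    total_spikes += s.sum()

assert total_spikes > 0          # constant drive eventually makes neurons fire
assert np.all(v < 1.0)           # membrane stays below threshold after reset
```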

26 pages, 3271 KB  
Article
Comparative Evaluation of Deep-Learning and SARIMA Models for Short-Term Residential PV Power Forecasting
by Kalsoom Bano, Vishnu Suresh, Francesco Montana and Przemyslaw Janik
Energies 2026, 19(8), 1991; https://doi.org/10.3390/en19081991 - 20 Apr 2026
Abstract
Accurate photovoltaic (PV) power forecasting is essential for the efficient operation of residential energy systems and microgrids, as reliable short-term predictions enable improved energy scheduling, demand management, and operational planning in distributed energy environments. In this study, one-hour-ahead forecasting of residential PV power generation is investigated using real-world data collected from multiple households within an Irish energy community. Several deep-learning architectures, including long short-term memory (LSTM), gated recurrent unit (GRU), convolutional neural networks (CNN), CNN–LSTM hybrid networks, and attention-based LSTM models, are evaluated and compared with a seasonal autoregressive integrated moving average (SARIMA) statistical model. A sliding-window approach is employed to transform the PV time series into a supervised learning problem. To ensure statistical robustness, deep-learning models are evaluated using a multi-run framework, and results are reported as mean ± standard deviation based on MAE, RMSE, MAPE, and R2 metrics across multiple households. The results indicate that deep-learning models achieve consistently strong forecasting performance, with GRU frequently providing the most reliable predictions across several households. For instance, in House 5, GRU achieved an RMSE of 142.02 ± 1.87 W and an R2 of 0.694 ± 0.008, while in Houses 11 and 13 it attained R2 values of 0.837 ± 0.002 and 0.835 ± 0.08, respectively. However, performance varied across households, reflecting the influence of data variability and generation patterns on model effectiveness. In comparison, the SARIMA model demonstrated competitive performance and, in certain cases, outperformed deep-learning models. For example, in House 4, it achieved the lowest RMSE of 90.68 W and the highest R2 of 0.709.
Overall, these findings highlight that while deep-learning models offer greater adaptability and stability, statistical models remain effective for more regular PV generation patterns. Consequently, the study emphasizes the importance of evaluating forecasting models under realistic household-level conditions and demonstrates that both deep-learning and statistical approaches can deliver useful short-term PV forecasts. Full article
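The sliding-window transformation mentioned above, which recasts a PV time series as a supervised learning problem, can be sketched generically; the window lengths and the stand-in series are illustrative.

```python
import numpy as np

def sliding_window(series, lookback, horizon=1):
    # Turn a univariate series into (X, y) pairs: `lookback` past values
    # predict the value `horizon` steps ahead.
    X, y = [], []
    for i in range(len(series) - lookback - horizon + 1):
        X.append(series[i:i + lookback])
        y.append(series[i + lookback + horizon - 1])
    return np.array(X), np.array(y)

pv = np.arange(10.0)                     # stand-in for an hourly PV power series
X, y = sliding_window(pv, lookback=3)    # one-hour-ahead targets
assert X.shape == (7, 3) and y.shape == (7,)
assert np.allclose(X[0], [0, 1, 2]) and y[0] == 3.0
```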

21 pages, 3042 KB  
Article
Prediction of Rice and Wheat Cultivation Regions of Chongming Island Using Time-Series Sentinel-1A SAR Images
by Hanlin Zhang, Bo Zheng, Jieqiu Wang and Shaoming Zhang
Remote Sens. 2026, 18(8), 1248; https://doi.org/10.3390/rs18081248 - 20 Apr 2026
Abstract
Accurate identification of cultivated land planting types is essential for agricultural resource management and national food security. Traditional optical remote sensing approaches are susceptible to weather interference in cloudy regions, making continuous crop growth monitoring challenging to achieve. To address this limitation, this study proposes a crop classification framework based on time-series Sentinel-1A SAR imagery combined with Recurrent Neural Networks (RNN), using Chongming Island, Shanghai as the experimental area. The framework integrates backscattering coefficients (VV, VH, VV/VH ratio) with polarimetric decomposition parameters (entropy H, scattering angle alpha, anisotropy A) as multi-dimensional temporal input features, and employs decision-level voting to obtain plot-level classification results. Experiments on three classification tasks (Rice versus Non-Rice, Wheat versus Non-Wheat, and multi-class rotation patterns) demonstrate that the proposed method achieves pixel-level accuracies of 99.72%, 99.60%, and 98.39% respectively using the six-dimensional BSPD model, with plot-level F1 scores exceeding 0.990 across all tasks. The fusion of polarimetric decomposition features reduces classification errors by up to 70% compared with backscattering-only features, particularly improving discrimination of phenologically overlapping crop categories. These results confirm that multi-dimensional temporal features extracted from dense time-series SAR imagery significantly enhance crop classification accuracy in all-weather conditions. Full article
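The decision-level voting step described above, which aggregates per-pixel class predictions into one label per plot, amounts to a per-plot majority vote; a generic sketch with made-up plot IDs and labels:

```python
import numpy as np

def plot_level_vote(pixel_preds, plot_ids):
    # Aggregate per-pixel class predictions to one label per plot
    # by majority vote within each plot.
    labels = {}
    for plot in np.unique(plot_ids):
        votes = pixel_preds[plot_ids == plot]
        labels[plot] = np.bincount(votes).argmax()
    return labels

pixel_preds = np.array([1, 1, 0, 1, 0, 0, 0])   # 1 = rice, 0 = non-rice
plot_ids    = np.array([7, 7, 7, 7, 9, 9, 9])   # which plot each pixel falls in
labels = plot_level_vote(pixel_preds, plot_ids)
assert labels == {7: 1, 9: 0}
```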

31 pages, 5699 KB  
Article
Evaluating Neural Networks Architectures for Competency Prediction from Process Data Using PISA Computer-Based Mathematics Assessment
by Huan Kuang
J. Intell. 2026, 14(4), 70; https://doi.org/10.3390/jintelligence14040070 - 20 Apr 2026
Abstract
Computer-based assessments generate rich process data that captures examinees’ interactions with test items. Using process data from the U.S. PISA 2012 computer-based mathematics assessment sample, this study applied recurrent neural networks to predict item-level correctness and assessment-level latent proficiency. The analysis also examines the impact of expert-engineered features, levels of architectural complexity, action variability, and score variability on model performance. At the item level, most models achieved AUC values around 0.80, indicating good predictive performance. Moderate correlations were observed between latent proficiency from 30 items and predictions based on process data from a subset of items (n = 10). For item-level models, adding expert-engineered features reduces training time and may improve predictive performance with low action variability. For the assessment-level models, adding expert-engineered features improved performance. Model complexity, including model type (i.e., standard RNN, GRU, and LSTM), number of nodes, and number of layers, had little effect on accuracy and efficiency. Moreover, items with greater action variability were associated with better model performance. The findings suggest that simple neural network architectures are sufficient for modeling process data with limited action variability and that combining action sequences with expert-engineered features improves accuracy, efficiency, and interpretability. Full article
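The AUC values reported above have a direct probabilistic reading: the probability that a randomly chosen positive case outscores a randomly chosen negative one (ties counting half). A pairwise sketch on toy scores:

```python
import numpy as np

def auc(scores, labels):
    # Probability that a random positive outscores a random negative,
    # with ties counted as half a win.
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    wins = (pos[:, None] > neg[None, :]).sum() \
         + 0.5 * (pos[:, None] == neg[None, :]).sum()
    return wins / (len(pos) * len(neg))

scores = np.array([0.9, 0.8, 0.4, 0.3])
labels = np.array([1, 0, 1, 0])
# pairs: (0.9,0.8) win, (0.9,0.3) win, (0.4,0.8) loss, (0.4,0.3) win -> 3/4
assert np.isclose(auc(scores, labels), 0.75)
```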

24 pages, 4858 KB  
Article
Reconstructing Shallow River Bathymetry Through Sequence-Based Modeling Approach
by Modestas Butnorius, Timas Akelis, Matas Vaitkevičius, Dominykas Matulis, Andrius Kriščiūnas, Vytautas Akstinas and Rimantas Barauskas
Water 2026, 18(8), 975; https://doi.org/10.3390/w18080975 - 20 Apr 2026
Abstract
Hydrological monitoring is crucial for protecting aquatic ecosystems, especially downstream of hydropower plants where water levels can change suddenly and cause the degradation of instream habitats. Many traditional methods are used to monitor water levels and river bathymetry, but most of them rely on in situ measurements. Drone-based remote sensing has received more attention in recent years, with the data in turn processed using CNNs. In this paper, we propose a new sequence-based method that uses multiple frames to expand the available context and compare it to already existing methods, such as Lyzenga, Stumpf, CNN, and SfM. The best-performing models in this study turn out to be SfM and CNN, with the former more accurate on rivers with clean riverbeds and the latter the most consistent. The sequence-based model shows promise, and even outperforms CNN in terms of MAE on rivers where the same location is mapped across multiple views, achieving the most accurate results across different images. This shows that utilizing multiple views to increase the available context can improve the accuracy of riverine depth estimation based on multispectral visual information. Full article

27 pages, 3995 KB  
Article
Video-Based Arabic Sign Language Recognition with Mediapipe and Deep Learning Techniques
by Dana El-Rushaidat, Nour Almohammad, Raine Yeh and Kinda Fayyad
J. Imaging 2026, 12(4), 177; https://doi.org/10.3390/jimaging12040177 - 20 Apr 2026
Abstract
This paper addresses the critical communication barrier experienced by deaf and hearing-impaired individuals in the Arab world through the development of an affordable, video-based Arabic Sign Language (ArSL) recognition system. Designed for broad accessibility, the system eliminates specialized hardware by leveraging standard mobile or laptop cameras. Our methodology employs Mediapipe for real-time extraction of hand, face, and pose landmarks from video streams. These anatomical features are then processed by a hybrid deep learning model integrating Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs), specifically Bidirectional Long Short-Term Memory (BiLSTM) layers. The CNN component captures spatial features, such as intricate hand shapes and body movements, within individual frames. Concurrently, BiLSTMs model long-term temporal dependencies and motion trajectories across consecutive frames. This integrated CNN-BiLSTM architecture is critical for generating a comprehensive spatiotemporal representation, enabling accurate differentiation of complex signs where meaning relies on both static gestures and dynamic transitions, thus preventing misclassification that CNN-only or RNN-only models would incur. Rigorously evaluated on the author-created JUST-SL dataset and the publicly available KArSL dataset, the system achieved 96% overall accuracy for JUST-SL and an impressive 99% for KArSL. These results demonstrate the system’s superior accuracy compared to previous research, particularly for recognizing full Arabic words, thereby significantly enhancing communication accessibility for the deaf and hearing-impaired community. Full article
(This article belongs to the Section Computer Vision and Pattern Recognition)

15 pages, 747 KB  
Article
Multi-Domain Fake News Detection Based on Multi-View Fusion Attention
by Guoning Gan, Zhisong Qin, Jiaqi Qin and Ke Lin
Electronics 2026, 15(8), 1733; https://doi.org/10.3390/electronics15081733 - 20 Apr 2026
Abstract
The widespread dissemination of fake news across different domains exerts a negative impact on social order. Current fake news detection models face two major challenges. First, the issue of domain shift constrains the generalization capability of models in cross-domain scenarios. Second, general neural networks struggle to extract features between distant words in text, resulting in poor quality of original features and adversely affecting the final detection outcomes. In response to the aforementioned issues, this paper proposes a multi-domain fake news detection framework based on multi-view hybrid attention enhancement. Firstly, superior original feature extraction is achieved through Recurrent Convolutional Neural Networks (RCNN) and Bidirectional Long Short-Term Memory (BiLSTM). Secondly, a hybrid attention mechanism is established between features and domains across multiple views—including news semantics, sentiment, and style—thereby forming domain-specific memory. This enables the model to achieve more precise classification of news within specific, subdivided domains. Finally, experiments conducted on the public dataset Weibo21 demonstrate that the proposed method attains F1 scores of 93.26% and 85.31% on Chinese and English datasets. Full article
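The view-level attention idea above, scoring each view (e.g. semantics, sentiment, style) against a query and fusing by softmax weights, can be sketched generically; the vectors and query here are toy values, not the paper's learned features:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())      # shift for numerical stability
    return e / e.sum()

def fuse_views(views, query):
    # Attention over view embeddings: score each view against a query,
    # softmax the scores, and return the weighted sum of views.
    V = np.stack(views)          # (n_views, d)
    weights = softmax(V @ query) # (n_views,)
    return weights @ V, weights

semantics = np.array([1.0, 0.0])
sentiment = np.array([0.0, 1.0])
style     = np.array([0.5, 0.5])
fused, w = fuse_views([semantics, sentiment, style], query=np.array([2.0, 0.0]))

assert np.isclose(w.sum(), 1.0)
assert w[0] > w[1]   # the view most aligned with the query gets the most weight
```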

33 pages, 503 KB  
Review
Kolmogorov–Arnold Networks for Sensor Data Processing: A Comprehensive Survey of Architectures, Applications, and Open Challenges
by Antonio M. Martínez-Heredia and Andrés Ortiz
Sensors 2026, 26(8), 2515; https://doi.org/10.3390/s26082515 - 19 Apr 2026
Abstract
Kolmogorov–Arnold Networks (KANs) have recently gained increasing attention as an alternative to conventional neural architectures, mainly because they replace fixed activation functions with learnable univariate mappings defined along network edges. This design not only increases modeling flexibility but also makes it easier to interpret how inputs are transformed within the network while maintaining parameter efficiency. KANs are particularly well suited for sensor-driven systems where transparency, robustness, and computational constraints are critical. This study provides a survey of KAN-based approaches for processing sensor data. A literature review conducted from 2024 to 2026 examined the deployment of KAN models in industrial and mechanical sensing, medical and biomedical sensing, and remote sensing and environmental monitoring, utilizing a Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA)-based methodology. We first revisit the theoretical foundations of KANs and their main architectural variants, including spline-based, polynomial-based, monotonic, and hybrid formulations, to structure the discussion. From a practical standpoint, we then examine how KAN modules are integrated into modern deep learning pipelines, such as convolutional, recurrent, transformer-based, graph-based, and physics-informed architectures. KAN-based models demonstrate predictive performance comparable to that of conventional machine learning models, while having fewer parameters and more interpretable representations. Several limitations persist, including computational overhead, sensitivity to noisy signals, and resource-constrained device deployment challenges.
Real-world sensor systems encounter significant challenges in adopting KAN-based models, including scalability in large-scale sensor networks, integration with hardware architectures, automated model development, resilience to out-of-distribution conditions, and the need for standardized evaluation metrics. Collectively, these observations provide a clearer understanding of the current and potential limitations of KAN-based models, offering practical guidance on the development of interpretable and efficient learning systems for future sensor equipment applications. Full article
(This article belongs to the Section Intelligent Sensors)
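The defining ingredient of a KAN, a learnable univariate function on each edge, can be illustrated with the simplest basis: here a piecewise-linear interpolant stands in for the spline parameterizations the survey discusses, with the knot values as the trainable parameters.

```python
import numpy as np

def edge_fn(x, coeffs, grid):
    # A KAN edge: a learnable univariate function. In this sketch it is a
    # piecewise-linear interpolant whose knot values `coeffs` would be the
    # trainable parameters (a spline basis plays this role in practice).
    return np.interp(x, grid, coeffs)

grid = np.linspace(-1, 1, 5)       # fixed knot positions
coeffs = grid ** 2                 # initialise the edge to approximate x^2

x = np.array([-0.5, 0.0, 0.5])     # evaluation at knot points is exact
y = edge_fn(x, coeffs, grid)
assert np.allclose(y, [0.25, 0.0, 0.25])
```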

19 pages, 3086 KB  
Article
Enhanced Neural Real-Time Digital Twin for Electrical Drives
by Marco di Benedetto, Vincenzo Randazzo, Alessandro Lidozzi, Angelo Accetta, Giorgia Ghione, Luca Solero, Giansalvo Cirrincione and Eros Gian Alessandro Pasero
Appl. Sci. 2026, 16(8), 3955; https://doi.org/10.3390/app16083955 - 18 Apr 2026
Abstract
This paper presents a real-time digital twin (DT) of the power conversion system used in offshore wind applications. The proposed DT is exploited to identify key electrical parameters of both the permanent magnet synchronous generator (PMSG) and the three-phase boost rectifier and has been developed with a Condition Monitoring (CM)-oriented approach. A Gated Recurrent Unit (GRU) neural network is adopted as a real-time digital model (RTDM) to estimate online the PMSG phase resistance and synchronous inductance, as well as the DC-link capacitance at the rectifier output. The network is trained in MATLAB using data generated by a Typhoon HIL 606 emulator, covering both balanced and unbalanced operating conditions and a wide range of parameter variations. The trained GRU is then deployed on the control board and implemented in LabVIEW Real-Time for embedded execution. Experimental tests on a PMSG-based generating unit confirm the effectiveness of the proposed RTDM, achieving low root-mean-square and mean percentage errors in parameter estimation. The results demonstrate that the enhanced neural real-time DT is a promising tool for condition monitoring and predictive maintenance of power conversion systems in offshore wind applications. Full article
(This article belongs to the Special Issue Digital Twin and IoT, 2nd Edition)
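The GRU cell at the core of such a real-time digital model has a standard forward pass (reset gate, update gate, candidate state); a NumPy sketch with random weights, not the trained network:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_cell(x, h, params):
    # Standard GRU update: update gate z, reset gate r, candidate state h_tilde.
    Wz, Uz, Wr, Ur, Wh, Uh = params
    z = sigmoid(Wz @ x + Uz @ h)
    r = sigmoid(Wr @ x + Ur @ h)
    h_tilde = np.tanh(Wh @ x + Uh @ (r * h))
    return (1 - z) * h + z * h_tilde

rng = np.random.default_rng(0)
d_in, d_h = 3, 4
# Alternate input-to-hidden (d_h, d_in) and hidden-to-hidden (d_h, d_h) weights
params = [rng.standard_normal((d_h, d_in)) if i % 2 == 0
          else rng.standard_normal((d_h, d_h))
          for i in range(6)]

h = np.zeros(d_h)
for t in range(5):                     # unroll over a short input sequence
    h = gru_cell(rng.standard_normal(d_in), h, params)

assert h.shape == (d_h,)
assert np.all(np.abs(h) < 1.0)         # state stays bounded in (-1, 1)
```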
