Search Results (2,053)

Search Parameters:
Keywords = RNN

17 pages, 2386 KB  
Article
Comparative Evaluation of Deep Learning Models for Respiratory Rate Estimation Using PPG-Derived Numerical Features
by Syed Mahedi Hasan, Mercy Golda Sam Raj and Kunal Mitra
Electronics 2026, 15(5), 1108; https://doi.org/10.3390/electronics15051108 (registering DOI) - 7 Mar 2026
Abstract
Respiratory rate (RR) is a critical vital sign for the early detection of hypoxia and respiratory deterioration, yet its continuous monitoring remains challenging in clinical environments. Photoplethysmography (PPG) provides a non-invasive source of physiological information from which respiratory dynamics can be inferred. In this study, numerical physiological features derived from PPG data were used to comparatively evaluate multiple deep learning models for respiratory rate estimation. Fixed-length sliding windows were constructed from the dataset and used to train five neural network architectures: a Deep Feedforward Neural Network (DFNN), unidirectional and bidirectional Recurrent Neural Networks (RNN, Bi-RNN), and unidirectional and bidirectional Long Short-Term Memory networks (LSTM, Bi-LSTM). Model performance was assessed using mean absolute error (MAE), root mean squared error (RMSE), coefficient of determination (R2), and computational runtime. Results indicate that models incorporating temporal dependencies outperform the static feedforward baseline, achieving MAE values as low as 0.521 breaths/min, making them competitive with or lower than previously reported PPG-based approaches. These findings highlight the effectiveness of temporal deep learning models for respiratory rate estimation from PPG-derived numerical features and provide insight into accuracy–efficiency trade-offs relevant to real-time monitoring applications. Full article
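The fixed-length sliding windows used to train the sequence models above can be sketched as follows; the function name `make_windows`, the window length, and pairing each window with the reference respiratory rate at its final time step are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def make_windows(features, targets, win_len=32, stride=1):
    """Build fixed-length sliding windows over a feature time series.

    features: (T, F) array of per-sample PPG-derived features
    targets:  (T,) array of reference respiratory rates (breaths/min)
    Returns X of shape (N, win_len, F) and y of shape (N,), where each
    window is paired with the target at its final time step.
    """
    X, y = [], []
    for start in range(0, len(features) - win_len + 1, stride):
        end = start + win_len
        X.append(features[start:end])
        y.append(targets[end - 1])
    return np.asarray(X), np.asarray(y)
```

The resulting (N, win_len, F) tensor is the shape the RNN/LSTM families consume directly, while the feedforward baseline would see each window flattened.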
Show Figures

Figure 1

19 pages, 2002 KB  
Article
Application of Machine Learning Approach to Classify Human Activity Level Based on Lifelog Data
by Si-Hwa Jeong, Woomin Nam and Keon Chul Park
Sensors 2026, 26(5), 1612; https://doi.org/10.3390/s26051612 - 4 Mar 2026
Viewed by 146
Abstract
This paper presents a human activity-level classification model based on patient lifelogs collected from wearable devices. Over a period of about two months, heart rate, step count, and calorie consumption were collected from wearable devices for a total of 182 patients. Using the lifelog data, machine learning models were developed to classify the physical activity status of patients into five levels. The three types of wearable data (heart rate, step count, and calorie consumption) were pre-processed into an integrated time series. 80% of the integrated data was used as the training dataset, and the remaining 20% as the test dataset. Sixteen algorithms were evaluated, including 12 traditional machine learning models (SVM, KNN, RF, etc.) and 4 deep learning models (CNN, RNN, etc.), with cross-validation performed by dividing the training dataset into 5 folds. By tuning the training parameters, models with optimal parameters were derived. The final models were evaluated on new patient lifelog data, showing that human activity level can be classified with high accuracy based on heart rate and step count. Full article
(This article belongs to the Special Issue Sensors for Human Activity Recognition: 3rd Edition)
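The evaluation protocol described above (an 80/20 train/test split with 5-fold cross-validation on the training portion) can be sketched as follows; the function names and the unstratified splitting are simplifying assumptions for illustration.

```python
import numpy as np

def split_train_test(n_samples, train_frac=0.8, seed=0):
    """Shuffle sample indices and split them into train/test partitions."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_samples)
    cut = int(n_samples * train_frac)
    return idx[:cut], idx[cut:]

def kfold_indices(train_idx, k=5):
    """Yield (fit, val) index pairs for k-fold cross-validation."""
    folds = np.array_split(train_idx, k)
    for i in range(k):
        val = folds[i]
        fit = np.concatenate([folds[j] for j in range(k) if j != i])
        yield fit, val
```

Each of the sixteen candidate algorithms would be fit on the `fit` indices and scored on `val` across the five folds, with the held-out 20% reserved for the final comparison.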

23 pages, 760 KB  
Article
Trajectory Data Publishing Scheme Based on Transformer Decoder and Differential Privacy
by Haiyong Wang and Wei Huang
ISPRS Int. J. Geo-Inf. 2026, 15(3), 106; https://doi.org/10.3390/ijgi15030106 - 3 Mar 2026
Viewed by 111
Abstract
The proliferation of Location-Based Services (LBSs) has generated vast trajectory datasets that offer immense analytical value but pose critical privacy risks. Achieving an optimal balance between data utility and privacy preservation remains a challenge, a difficulty compounded by the limitations of existing methods in modeling complex, long-term spatiotemporal dependencies. To address this, this paper proposes a trajectory data publishing scheme combining a Transformer decoder with differential privacy. Unlike traditional single-layer approaches, the proposed method establishes a systematic generation–generalization framework. First, a Transformer decoder is integrated into a Generative Adversarial Network (GAN). This architecture mitigates the gradient vanishing issues common in RNN-based models, generating high-fidelity synthetic trajectories that capture long-range correlations while decoupling them from sensitive source data. Second, to provide rigorous privacy guarantees, a clustering-based generalization strategy is implemented, utilizing Exponential and Laplace mechanisms to ensure ϵ-differential privacy. Experiments on the Geolife and Foursquare NYC datasets demonstrate that the scheme significantly outperforms leading baselines, achieving a superior trade-off between privacy protection and data utility. Full article
(This article belongs to the Topic Recent Advances in Security, Privacy, and Trust)
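The Laplace mechanism behind the ϵ-differential-privacy guarantee above releases a value perturbed by noise scaled to sensitivity/ϵ; a minimal sketch (the function name and the seeding parameter are illustrative):

```python
import numpy as np

def laplace_mechanism(value, sensitivity, epsilon, seed=None):
    """Release a noisy value satisfying epsilon-differential privacy.

    Noise is drawn from Laplace(0, sensitivity / epsilon): smaller epsilon
    (stronger privacy) means a larger noise scale.
    """
    rng = np.random.default_rng(seed)
    scale = sensitivity / epsilon
    return value + rng.laplace(0.0, scale)
```

In the paper's scheme this is paired with the Exponential mechanism inside a clustering-based generalization step; the sketch shows only the numeric-release primitive.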

23 pages, 13416 KB  
Article
An Adaptive Ensemble Model Based on Deep Reinforcement Learning for the Prediction of Step-like Landslide Displacement
by Tengfei Gu, Lei Huang, Shunyao Tian, Zhichao Zhang, Huan Zhang and Yanke Zhang
Remote Sens. 2026, 18(5), 761; https://doi.org/10.3390/rs18050761 - 3 Mar 2026
Viewed by 165
Abstract
Accurate prediction of landslide displacement is crucial for hazard prevention. However, recurrent neural network (RNN) models have limitations in simultaneously capturing lag time and feature importance, and their black-box nature limits their interpretability. Moreover, the performance of single models varies across different deformation stages, especially during acceleration. To address these challenges, we propose an interpretable deep reinforcement learning-based adaptive ensemble (DRL-AE) framework. The method employs Seasonal and Trend decomposition using Loess to separate cumulative displacement into trend and periodic components. Trend and periodic sequences are predicted using double exponential smoothing and three RNN variants, respectively. An improved Convolutional Block Attention Module (ICBAM) enhances periodic feature extraction and provides temporal–spatial interpretability. The Deep Deterministic Policy Gradient algorithm adaptively integrates multi-model predictions in response to evolving environmental conditions. To validate the DRL-AE, a case study is conducted on the Baijiabao landslide in Zigui County, China. The results indicate that the DRL-AE substantially enhances prediction accuracy. For periodic displacement, it reduces MAE by 10.02% and RMSE by 6.65%, and increases R2 by 4.27% compared with the ICBAM-GRU model. The results also confirm the effectiveness of ICBAM in feature extraction, and the generated heatmaps provide intuitive interpretability of the relevant triggering factors. Full article
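The trend component above is predicted with double exponential smoothing (Holt's linear method), which maintains a level and a trend estimate; a minimal sketch, with the smoothing coefficients chosen arbitrarily for illustration:

```python
import numpy as np

def holt_forecast(series, alpha=0.5, beta=0.3, horizon=1):
    """Double exponential smoothing (Holt's linear method).

    Tracks a level and a trend through the series, then extrapolates
    `horizon` steps ahead. For a perfectly linear series the forecast
    is exact regardless of alpha and beta.
    """
    level, trend = series[0], series[1] - series[0]
    for x in series[1:]:
        prev_level = level
        level = alpha * x + (1 - alpha) * (level + trend)
        trend = beta * (level - prev_level) + (1 - beta) * trend
    return level + horizon * trend
```

In the DRL-AE pipeline this handles the slowly varying trend after STL decomposition, while the RNN variants model the periodic component.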

18 pages, 1168 KB  
Article
A Hybrid Deep Learning Model for Predicting Tuna Distribution Around Drifting Fish Aggregating Devices
by Bo Song, Jian Liu, Tianjiao Zhang and Quanjin Chen
Sustainability 2026, 18(5), 2406; https://doi.org/10.3390/su18052406 - 2 Mar 2026
Viewed by 105
Abstract
Accurate prediction of tuna distribution is essential for sustainable fisheries management. This study develops a two-stage hybrid model combining Convolutional Neural Networks (CNN), Recurrent Neural Networks (RNN), and Random Forest (RF) to predict tuna distribution around drifting fish aggregating devices (DFAD) in the Western and Central Pacific Ocean (WCPO). Echo-sounder buoy data from DFAD were aggregated into 2° × 2° grid cells and matched with oceanographic variables from the Copernicus Marine Service. Random Forest-based variable importance analysis identified primary productivity (27%), chlorophyll-a (22%), and dissolved oxygen (18%) as the three dominant environmental drivers. The CNN-RNN component extracts spatiotemporal features from multi-layer ocean data, while the RF classifier performs binary classification of tuna aggregation zones (high-yield vs. low-yield). All five models (Decision Tree, RF, CNN, Transformer, and CNN-RNN-RF) were evaluated on 557 samples using 5-fold stratified cross-validation, with each fold further split 80:20 for training and validation. The proposed CNN-RNN-RF model achieved the highest performance with an AUC of 0.830, accuracy of 82.6%, and F1-scores of 86.3% (high-yield) and 76.2% (low-yield), outperforming the best baseline model (RF: AUC 0.761, accuracy 75.4%). Predicted high-yield zones showed strong consistency with fishing log records, demonstrating the potential of integrating echo-sounder data with hybrid deep learning for data-driven tuna fisheries management. Full article
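Aggregating buoy positions into 2° × 2° grid cells, as described above, amounts to flooring offset coordinates; a sketch (the cell-indexing convention, anchored at 90° S / 180° W, is an assumption):

```python
import numpy as np

def to_grid_cell(lat, lon, cell_deg=2.0):
    """Map a (lat, lon) position to its cell_deg x cell_deg grid cell index.

    Rows count northward from 90 deg S, columns eastward from 180 deg W.
    """
    row = int(np.floor((lat + 90.0) / cell_deg))
    col = int(np.floor((lon + 180.0) / cell_deg))
    return row, col
```

Echo-sounder records sharing a cell index would then be aggregated and matched with the Copernicus oceanographic variables for that cell.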

22 pages, 13052 KB  
Article
Enhanced Migratory Biological Echo Extrapolation from Weather Radar Using ISA-LSTM
by Dou Meng, Yunping Liu, Dongli Wu, Zhiliang Deng, Yifu Chen and Chunzhi Wang
Atmosphere 2026, 17(3), 257; https://doi.org/10.3390/atmos17030257 - 28 Feb 2026
Viewed by 159
Abstract
Weather radar provides continuous, large-scale observations of aerial biological activity. However, biological echoes typically exhibit weak signals, sparse distributions, and non-stationary abrupt variations, causing existing extrapolation models to suffer from over-smoothing and loss of detail and making it difficult to capture their short-term evolution effectively. To address this issue, we propose an Integrated Self-Attention Long Short-Term Memory (ISA-LSTM) model that integrates a self-attention mechanism within the Predictive Recurrent Neural Network (PredRNN) framework. Coupled convolutional modules are introduced to enhance feature interactions between inputs and hidden states, while a spatiotemporal self-attention mechanism improves long-term dependency modeling and local detail preservation. Experiments conducted on 6000 biological echo samples from three weather radars in the Poyang Lake region demonstrate that the proposed model achieves superior extrapolation accuracy and stability compared with existing methods, maintaining a low false-alarm rate for lead times of up to 50 min. The results suggest that ISA-LSTM offers an effective deep learning approach for biological echo extrapolation, with applications in aviation safety and agricultural pest and disease early warning. Full article
(This article belongs to the Section Atmospheric Techniques, Instruments, and Modeling)
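The spatiotemporal self-attention mechanism integrated into ISA-LSTM builds on scaled dot-product attention; a single-head NumPy sketch (in the actual model this operates over convolutional feature maps inside the PredRNN cells, not raw 2-D matrices):

```python
import numpy as np

def scaled_dot_product_attention(q, k, v):
    """Scaled dot-product attention over a sequence of feature vectors.

    q, k, v: (seq_len, d) arrays. Scores are scaled by sqrt(d) and
    softmax-normalized per query before weighting the values.
    """
    d = q.shape[-1]
    scores = q @ k.T / np.sqrt(d)
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v
```

Attention lets each time step draw on all others directly, which is the mechanism the abstract credits with better long-term dependency modeling than pure recurrence.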

29 pages, 1954 KB  
Review
A Review on Bathymetric Inversion Research Based on Deep Learning Models and Remote Sensing Images
by Delong Liu, Yufeng Shi and Hong Fang
Remote Sens. 2026, 18(5), 720; https://doi.org/10.3390/rs18050720 - 27 Feb 2026
Viewed by 371
Abstract
High-precision inversion of shallow-water depth is crucial to marine resource development, ecological protection, and national defense security. Traditional acoustic detection, LiDAR, and empirical models are limited by high cost, low efficiency, or water quality dependence, struggling to meet people’s growing demand for shallow-water depth. With the rapid development of theories and technologies such as remote sensing information, computer science, and artificial intelligence, bathymetric inversion based on remote sensing images and deep learning models has become a research hotspot. In this study, journal articles and conference papers were searched in the Web of Science (WOS) and Google Scholar databases using keywords such as “remote sensing image”, “bathymetry”, and “deep learning model”. The publication time of the papers ranges from January 2021 to September 2025. A total of 309 relevant studies were retrieved and, after screening and quality control, 132 core studies were finally selected as the research objects for this review. These studies were classified according to deep learning models, including CNN, U-Net, MLP, and RNN. The study analyzed and summarized the characteristics of different deep learning models in bathymetric inversion, as well as their data source selection, inversion accuracy, and limitations. Additionally, the future development trends were discussed in combination with the latest research results. Full article
(This article belongs to the Special Issue Artificial Intelligence and Big Data for Oceanography (2nd Edition))

46 pages, 7510 KB  
Article
Semantic Modeling of Ship Collision Reports: Ontology Design, Knowledge Extraction, and Severity Classification
by Hongchu Yu, Xiaohan Xu, Zheng Guo, Tianming Wei and Lei Xu
J. Mar. Sci. Eng. 2026, 14(5), 448; https://doi.org/10.3390/jmse14050448 - 27 Feb 2026
Viewed by 326
Abstract
With the expansion of water transportation networks and increasing traffic intensity, maritime accidents have become frequent, posing significant threats to safety and property. This study presents a knowledge graph-driven framework for maritime accident analysis, addressing the limitations of traditional risk analysis methods in extracting and organizing unstructured accident data. First, a standardized ontology for ship collision accidents is developed, defining core concepts such as event, spatiotemporal behavior, causation, consequence, responsibility, and decision-making. Advanced natural language processing models, including a lexicon-enhanced LEBERT-BiLSTM-CRF and a K-BERT-BiLSTM-CRF incorporating ship collision knowledge triplets, are proposed for named entity recognition and relation extraction, with F1-score improvements of 6.7% and 1.2%, respectively. The constructed accident knowledge graph integrates heterogeneous data, enabling semantic organization and efficient retrieval. Leveraging graph topological features, an accident severity classification model is established, where a graph-feature-driven LSTM-RNN demonstrates robust performance, especially with imbalanced data. Comparative experiments show the superiority of this approach over conventional models such as XGBoost and random forest. Overall, this research demonstrates that knowledge graph-driven methods can significantly enhance maritime accident knowledge extraction and severity classification, providing strong information support and methodological advances for intelligent accident management and prevention. Full article
(This article belongs to the Section Ocean Engineering)

27 pages, 1589 KB  
Review
De Novo Structure Prediction from Tandem Mass Spectra: Algorithms, Benchmarks, and Limitations
by Mark Yu. Schneider, Daniil D. Kholmanskikh, Kirill Ya. Romanov, Elena A. Perekina, Sergei A. Nikolenko, Ruslan Yu. Lukin and Ivan V. Golov
Molecules 2026, 31(5), 769; https://doi.org/10.3390/molecules31050769 - 25 Feb 2026
Viewed by 384
Abstract
The identification of unknown molecules from analytical data remains a fundamental challenge in chemistry, with critical implications for drug discovery, metabolomics, and natural product research. While tandem mass spectrometry provides rich structural fingerprints, most spectra are absent from reference libraries, spurring the development of de novo generative models. However, their true accuracy has been difficult to assess. Our critical analysis reveals that state-of-the-art models achieve only 4.1% top-10 accuracy on rigorously leakage-controlled benchmarks like MassSpecGym. This sobering figure stands in stark contrast to earlier, overly optimistic reports, a discrepancy we attribute to pervasive data leakage in naive data splits. This review traces the field’s rapid evolution through three architectural eras: from fingerprint-conditioned RNN pipelines to end-to-end sequence models and, most recently, to graph-native diffusion under molecular-formula constraints. We demonstrate that explicitly conditioning generative models on a molecular formula significantly improves exact-match accuracy compared to unconstrained baselines. Crucially, our analysis distinguishes between two experimentally relevant paradigms: formula-conditioned generation for true unknown discovery and scaffold-based generation for hypothesis-driven research. While the latter shows high potential with oracle scaffolds, its performance drastically drops with predicted ones, revealing a critical bottleneck. To build the next generation of reliable tools, we propose a clear roadmap centered on standardized, leakage-aware benchmarking and transparent reporting. Full article
(This article belongs to the Special Issue Advances in Computational Spectroscopy, 2nd Edition)
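Conditioning generation on a molecular formula, as discussed above, implies a hard constraint: every candidate's element counts must match the target exactly. A sketch of that constraint check on formula strings (real systems compare against counts computed from the generated structure via a cheminformatics toolkit; the function names here are illustrative):

```python
import re
from collections import Counter

def parse_formula(formula):
    """Parse a molecular formula like 'C6H12O6' into element counts."""
    counts = Counter()
    for elem, num in re.findall(r"([A-Z][a-z]?)(\d*)", formula):
        if elem:  # findall emits an empty trailing match; skip it
            counts[elem] += int(num) if num else 1
    return counts

def matches_formula(candidate, target):
    """Check whether a candidate's formula exactly matches the target constraint."""
    return parse_formula(candidate) == parse_formula(target)
```

Filtering or constraining a generator to candidates passing this check is what distinguishes formula-conditioned generation from the unconstrained baselines the review compares against.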

30 pages, 4409 KB  
Article
Divergent Trajectories of the Water–Energy–Food Nexus in the Yangtze River Economic Belt
by Yiyang Li, Hongrui Wang, Li Zhang, Hongchong Wang, Yuhan Ding and Xinlong Du
Water 2026, 18(5), 538; https://doi.org/10.3390/w18050538 - 25 Feb 2026
Viewed by 313
Abstract
Unraveling the coupling mechanisms of the Water–Energy–Food (WEF) nexus is critical for regional synergistic security and high-quality development. Using an integrated “relationship identification, equation construction, and scenario prediction” framework, this study characterized the spatiotemporal evolution of WEF interactions in the Yangtze River Economic Belt. Under this framework, a Granger causality test coupled with a SHAP interpretability model was first employed to quantify the causal strength among nexus elements, followed by a Bayesian Vector Autoregression model integrated with a hybrid Recurrent Neural Network (RNN) and System Dynamics (SD) approach to simulate evolutionary trajectories from 2024 to 2035. Results showed that: (1) The nexus mechanisms exhibited significant spatial duality. Upstream egg production drove a high virtual water footprint, while inland seafood consumption imposed a non-linear energy premium due to cold-chain dependency. In Shanghai, a strong diesel–groundwater coupling revealed a trade-off between energy input and underground safety. (2) Localized feed cultivation was the core driver for upstream water pressure, whereas logistics intensity was the dominant factor for energy–water interactions in urbanized regions. (3) From 2024 to 2035, the nexus structure will undergo bidirectional divergence. Ecological water demand in the midstream is projected to surge by over 130%, and Anhui’s milk production is forecast to more than double from 107.77 to 225.7 million tons. The findings provide scientific support for coordinating ecological conservation and high-quality development. Full article
(This article belongs to the Special Issue Advanced Perspectives on the Water–Energy–Food Nexus)

25 pages, 896 KB  
Article
Sequential Deep Learning with Feature Compression and Optimal State Estimation for Indoor Visible Light Positioning
by Negasa Berhanu Fite, Getachew Mamo Wegari and Heidi Steendam
Photonics 2026, 13(2), 211; https://doi.org/10.3390/photonics13020211 - 23 Feb 2026
Viewed by 506
Abstract
Visible Light Positioning (VLP) is widely regarded as a promising technology for high-precision indoor localization due to its immunity to radio-frequency interference and compatibility with existing Light-Emitting Diode (LED) lighting infrastructure. Despite recent progress, current VLP systems remain fundamentally limited by nonlinear received signal strength (RSS) characteristics, unknown transmitter orientations, and dynamic indoor disturbances. Existing solutions typically address these challenges in isolation, resulting in limited robustness and scalability. This paper proposes SCENE-VLP (Sequential Deep Learning with Feature Compression and Optimal State Estimation), a structured positioning framework that integrates feature compression, temporal sequence modeling, and probabilistic state refinement within a unified estimation pipeline. Specifically, SCENE-VLP combines Principal Component Analysis (PCA) and Denoising Autoencoders (DAE) for linear and nonlinear observation conditioning, Gated Recurrent Units (GRU) for modeling temporal dependencies in RSS sequences, and Kalman-based filtering (KF/EKF) for recursive state-space refinement. The framework is formulated as a hierarchical approximation of the nonlinear observation model, linking data-driven measurement learning with Bayesian state estimation. A systematic ablation study across multiple scenarios, including same-dataset evaluation and cross-dataset generalization, demonstrates that each component provides complementary benefits. Feature compression reduces redundancy while preserving dominant signal structure; GRU significantly improves robustness over static regression; and recursive filtering consistently reduces positioning error compared to unfiltered predictions. While both KF and EKF improve performance, EKF provides incremental refinement under mild nonlinearities. 
Extensive simulations conducted on an indoor dataset collected from a realistic deployment with eight ceiling-mounted LEDs and a single photodetector (PD) show that SCENE-VLP achieves sub-decimeter localization accuracy, with P50 and P95 errors of 1.84 cm and 6.52 cm, respectively. Cross-scenario evaluation further confirms stable generalization and statistically consistent improvements. These results demonstrate that the structured integration of observation conditioning, temporal modeling, and Bayesian refinement yields measurable gains beyond partial pipeline configurations, establishing SCENE-VLP as a robust and scalable solution for next-generation indoor visible light positioning systems. Full article
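The recursive Kalman refinement stage described above can be illustrated with a scalar filter; the process and measurement variances here are arbitrary, and the paper's KF/EKF operate on full position state vectors rather than a single coordinate.

```python
import numpy as np

def kalman_smooth(measurements, process_var=1e-3, meas_var=1e-1):
    """Scalar Kalman filter: recursively refine noisy position estimates.

    Each step predicts (inflating uncertainty by process_var), then
    blends in the new measurement with gain k = p / (p + meas_var).
    """
    x, p = measurements[0], 1.0
    out = [x]
    for z in measurements[1:]:
        p = p + process_var          # predict: uncertainty grows
        k = p / (p + meas_var)       # Kalman gain
        x = x + k * (z - x)          # update toward the measurement
        p = (1 - k) * p              # posterior uncertainty shrinks
        out.append(x)
    return np.asarray(out)
```

The filtered sequence varies less than the raw GRU predictions it smooths, which is the "consistent error reduction over unfiltered predictions" the ablation reports.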

25 pages, 5640 KB  
Article
Estimation of Winter Wheat SPAD Values by Integrating Spectral Feature Optimization and Machine Learning Algorithms
by Yufei Wang, Xuebing Wang, Jiang Sun, Zeyang Wen, Haoyong Wu, Lujie Xiao, Meichen Feng, Yu Zhao and Xianjie Gao
Agronomy 2026, 16(4), 489; https://doi.org/10.3390/agronomy16040489 - 22 Feb 2026
Viewed by 311
Abstract
The chlorophyll content of plant leaves measured by the soil plant analysis development (SPAD) is an important indicator for measuring crop growth status and irrigation effect. The rapid, non-destructive and efficient estimation of crop SPAD values is of great significance to the field management of crops. In this study, the canopy hyperspectral reflectance and SPAD values of winter wheat were obtained, and the spectral curve was changed through four spectral processing methods, including first-order differential (FD), second-order differential (SD), multivariate scattering correction (MSC), and Savitzky–Golay smoothing (SG) to improve the correlation between canopy spectral reflectance and SPAD. Furthermore, to investigate and evaluate the performance of various vegetation indices (VIs) in estimating SPAD values for winter wheat, existing published indices were optimized using random band combinations derived from multiple canopy spectral transformations. The optimized vegetation index was used as the input variable of the model, and six machine learning algorithms, including random forest (RF), long short-term memory network (LSTM), multilayer perceptron (MLP), deep recurrent neural network (Deep-RNN), gated recurrent unit (GRU), and convolutional neural network (CNN), were used to construct the winter wheat SPAD values estimation model, and the model was verified. The experimental results demonstrate that, when utilizing an equivalent number of optimized vegetation indices as input, the GRU-based model achieves higher estimation accuracy compared to other models. Specifically, the coefficient of determination (R2) is improved by 0.12 compared to the RF model, by 0.03 compared to the LSTM model, by 0.12 compared to the MLP model, by 0.02 compared to the Deep-RNN model, and by 0.02 compared to the CNN model. At the same time, the GRU model also has a lower root mean square error (RMSE) and relative error (RE) of 7.37 and 24.90%, respectively. 
This study provides valuable hyperspectral remote sensing technology support for the implementation of winter wheat SPAD values estimation in the field. Full article
(This article belongs to the Section Precision and Digital Agriculture)
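Optimizing a vegetation index via band combinations, as described above, reduces to searching band pairs for the index best correlated with measured SPAD; a brute-force sketch for the normalized-difference form only (the paper optimizes several published index forms, and the function name is illustrative):

```python
import numpy as np

def best_normalized_index(reflectance, spad):
    """Search all band pairs for the normalized-difference index
    most correlated (in absolute value) with SPAD.

    reflectance: (n_samples, n_bands) canopy spectra.
    Returns (band_i, band_j, |r|) for the best pair.
    """
    n_bands = reflectance.shape[1]
    best = (0, 1, 0.0)
    for i in range(n_bands):
        for j in range(i + 1, n_bands):
            num = reflectance[:, i] - reflectance[:, j]
            den = reflectance[:, i] + reflectance[:, j]
            vi = num / np.where(den == 0, 1e-12, den)
            r = abs(np.corrcoef(vi, spad)[0, 1])
            if r > best[2]:
                best = (i, j, r)
    return best
```

The winning index values would then serve as input features for the RF/LSTM/MLP/Deep-RNN/GRU/CNN estimation models compared in the study.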

23 pages, 1294 KB  
Article
Event-Driven Spatiotemporal Computing for Robust Flight Arrival Time Prediction: A Probabilistic Spiking Transformer Approach
by Quanquan Chen and Meilong Le
Aerospace 2026, 13(2), 203; https://doi.org/10.3390/aerospace13020203 - 22 Feb 2026
Viewed by 170
Abstract
Precise Estimated Time of Arrival (ETA) prediction in Terminal Maneuvering Areas (TMA) constitutes a prerequisite for efficient arrival sequencing and airspace capacity management. While data-driven approaches outperform kinematic models, conventional Recurrent Neural Networks (RNNs) exhibit limitations in modeling complex multi-aircraft spatial interactions and lack the capability to quantify predictive uncertainty. Conversely, Spiking Neural Networks (SNNs) enable energy-efficient event-driven computation, yet their applicability to continuous trajectory regression is hindered by “input starvation,” where normalized state vectors fail to induce sufficient neural firing rates. This study proposes a Probabilistic Spiking Transformer (PST) architecture to integrate neuromorphic sparsity with global attention mechanisms. An Adaptive Spiking Temporal Encoding mechanism incorporating learnable linear projections is introduced to resolve the regression-spiking incompatibility, facilitating the autonomous mapping of continuous trajectory dynamics into sparse spike trains without heuristic scaling. Concurrently, a Distance-Biased Multi-Aircraft Cross-Attention (MACA) module models air traffic conflicts by weighting spatial interactions according to physical proximity, thereby embedding separation constraints into the feature extraction process. Evaluation on large-scale real-world ADS-B datasets demonstrates that the PST yields a Mean Absolute Error (MAE) of 49.27 s, representing a 60% error reduction relative to standard LSTM baselines. Furthermore, the model generates well-calibrated probabilistic distributions (Prediction Interval Coverage Probability > 94%), offering quantifiable uncertainty metrics for risk-based decision support while ensuring real-time inference suitable for operational deployment. Full article
(This article belongs to the Section Air Traffic and Transportation)
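The "input starvation" problem above arises because plain rate coding fires in proportion to the normalized input value, so small values barely spike. A baseline rate-coding sketch for contrast (the paper's Adaptive Spiking Temporal Encoding replaces this with learnable linear projections, which is not reproduced here):

```python
import numpy as np

def rate_encode(values, n_steps=20, seed=0):
    """Encode normalized values in [0, 1] as Bernoulli spike trains.

    Returns an (n_steps, n_features) 0/1 array where each feature fires
    at each step with probability equal to its value: inputs near zero
    produce almost no spikes (the 'input starvation' failure mode).
    """
    rng = np.random.default_rng(seed)
    values = np.clip(np.asarray(values, dtype=float), 0.0, 1.0)
    return (rng.random((n_steps, len(values))) < values).astype(np.int8)
```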

30 pages, 3711 KB  
Article
An RNN-Enhanced Diverse Curriculum-Driven Learning Algorithm Based on Deep Reinforcement Learning for POMDPs with Limited Experience
by Ke Li, Kun Zhang, Ziqi Wei, Haiyin Piao, Binlin Yuan, Boxuan Wang and Jiangbo Cheng
Drones 2026, 10(2), 142; https://doi.org/10.3390/drones10020142 - 17 Feb 2026
Viewed by 294
Abstract
Autonomous flight is a critical capability for unmanned aerial vehicles (UAVs), enabling applications in wildlife and plant protection, infrastructure inspection, search and rescue, and other complex missions. Although learning-based methods have achieved considerable progress, traditional algorithms still struggle with real-world challenges due to the partially observable nature of the environment and the limited experience available in dynamic unknown environments, where threats and targets are mobile and unpredictable. Addressing these difficulties requires autonomous guidance for UAVs performing long-range missions in dynamic environments (LRGDEs) and a novel end-to-end algorithm that can overcome partial observability under limited state transitions. In this paper, we propose an RNN-enhanced Diverse Curriculum-driven Learning Algorithm (REDCRL) based on deep reinforcement learning. We modify the structure of traditional actor–critic networks and introduce Bi-LSTM into the policy networks (referred to as Bi-LSTM-modified Policy Networks (BLPNs)) to alleviate observation incompleteness. Furthermore, to fully exploit the potential value of data and mitigate the problem of insufficient samples, we develop an Adaptive Multi-Feature Evaluation Experience Replay (AMFER) method that reshapes how the experience replay buffer is constructed and sampled. In addition, the Twin Delayed Deep Deterministic Policy Gradient (TD3) algorithm is adopted to optimize UAV maneuver-decision policies. Compared with traditional algorithms, the proposed algorithm accelerates policy convergence and improves the performance of the trained policy. Full article
(This article belongs to the Special Issue Advances in AI Large Models for Unmanned Aerial Vehicles)
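The idea of weighting replay sampling by several per-transition features, as in the AMFER method above, can be sketched as follows. The specific features (TD-error magnitude and recency), their weights, and the class interface are illustrative guesses, not the paper's actual design:

```python
import random

class MultiFeatureReplayBuffer:
    """Toy replay buffer whose sampling priority blends multiple
    per-transition features: TD-error magnitude and recency. Both the
    feature set and the blending weights are illustrative assumptions."""

    def __init__(self, capacity=1000, w_td=0.7, w_recency=0.3):
        self.capacity, self.w_td, self.w_recency = capacity, w_td, w_recency
        self.data = []  # list of (transition, |td_error|, step_added)
        self.step = 0

    def add(self, transition, td_error):
        self.step += 1
        if len(self.data) >= self.capacity:
            self.data.pop(0)  # evict the oldest transition
        self.data.append((transition, abs(td_error), self.step))

    def sample(self, batch_size):
        newest = self.data[-1][2]
        # Priority = weighted TD-error plus a recency bonus that decays
        # with the age of the transition.
        scores = [self.w_td * td + self.w_recency / (1 + newest - t)
                  for _, td, t in self.data]
        total = sum(scores)
        idx = random.choices(range(len(self.data)),
                             weights=[s / total for s in scores],
                             k=batch_size)
        return [self.data[i][0] for i in idx]

buf = MultiFeatureReplayBuffer(capacity=3)
for i in range(5):
    buf.add(f"s{i}", td_error=i * 0.5)
batch = buf.sample(2)
```

In a real TD3 loop the stored transitions would be (state, action, reward, next_state) tuples and the TD errors would be refreshed as the critics are updated; here they are fixed at insertion time for brevity.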

23 pages, 623 KB  
Article
Radiomics-Driven Hybrid Deep Learning for MRI-Based Prediction of Glioma Grade and 1p/19q Codeletion
by Abdullah Bin Sawad and Muhammad Binsawad
Tomography 2026, 12(2), 25; https://doi.org/10.3390/tomography12020025 - 15 Feb 2026
Viewed by 253
Abstract
Background: Accurate preoperative evaluation of glioma grade and molecular profile is a prerequisite for tailored treatment strategies. Specifically, 1p/19q codeletion status is a major prognostic and therapeutic marker in low-grade gliomas (LGGs). Nevertheless, its assessment currently requires invasive histopathological and genetic studies, underlining the need for non-invasive alternatives. Methods: We introduce a non-invasive radiomics framework that combines quantitative MRI features with machine learning (ML) and deep learning (DL) approaches for glioma grading and 1p/19q codeletion prediction. High-dimensional radiomic features characterizing tumor geometry, intensity, and texture were derived from preoperative MRI-based tumor delineations. Features were normalized and optimized using correlation-based feature selection. Several traditional ML classifiers were compared with DL models, including convolutional neural networks (CNNs), recurrent neural networks (RNNs), and a CNN-Long Short-Term Memory (LSTM) hybrid tailored to exploit both spatial feature hierarchies and feature correlations. Model validation used five-fold cross-validation and an independent test dataset, with accuracy, sensitivity, specificity, and area under the receiver operating characteristic curve (AUC) as metrics. Results: Among all models tested, the hybrid CNN-LSTM performed best, with an accuracy of 88.1% and an AUC of 0.93, outperforming conventional ML approaches and single-model DL architectures. Explainability analysis showed that radiomic features capturing tumor heterogeneity and morphology had the most prominent impact on model performance. Conclusions: These findings indicate that combining radiomic features with hybrid DL models enables non-invasive prediction of glioma grade and 1p/19q codeletion status. The new computational model has the potential to serve as a supplementary approach in precision neuro-oncology. Full article
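The correlation-based feature selection step mentioned in the Methods can be sketched with a minimal greedy filter. The function, threshold, and greedy keep-first strategy are illustrative assumptions; the paper does not specify its exact selection rule:

```python
import numpy as np

def correlation_filter(X, threshold=0.95):
    """Greedy correlation-based feature selection: keep a feature only if
    its absolute Pearson correlation with every already-kept feature is at
    or below the threshold. Returns the indices of the kept columns."""
    corr = np.abs(np.corrcoef(X, rowvar=False))
    keep = []
    for j in range(corr.shape[0]):
        if all(corr[j, i] <= threshold for i in keep):
            keep.append(j)
    return keep

# Column 1 is an exact rescaling of column 0, so it is redundant:
a = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
b = np.array([2.0, 1.0, 4.0, 3.0, 5.0])
X = np.column_stack([a, 2 * a, b])
kept = correlation_filter(X)  # drops the duplicated column
```

Scanning columns in order makes the result deterministic; a refinement sometimes used in radiomics pipelines is to break ties by keeping the feature more strongly associated with the target label.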
