Search Results (2,057)

Search Parameters:
Keywords = RNN

20 pages, 10593 KB  
Review
Review of Polymer Drug Therapy for Cancer Driven by Artificial Intelligence
by Jie Zheng and Yuanlv Ye
Polymers 2026, 18(6), 677; https://doi.org/10.3390/polym18060677 - 11 Mar 2026
Abstract
This review systematically evaluates the interdisciplinary convergence of artificial intelligence (AI) and polymer science in cancer therapy. Beyond mere description, we provide an integrated framework spanning synthetic optimization, biocompatibility prediction, and the design of tumor microenvironment (TME)-responsive carriers. We highlight how AI algorithms (ML, DL, and RNNs) transform traditional trial-and-error methods into a data-driven paradigm, enabling precise spatiotemporal drug release and individualized pharmacokinetic modeling. Crucially, this work addresses the critical gap between computational modeling and clinical realization by providing a balanced critical analysis of current bottlenecks, including the “small data” challenge, publication bias, and regulatory hurdles. We conclude with a roadmap for AI-guided precision oncology, shifting the focus from predictive accuracy to mechanistic interpretability and prospective in vivo validation. Full article
(This article belongs to the Section Artificial Intelligence in Polymer Science)

13 pages, 2079 KB  
Article
Trend Prediction of Distribution Network Fault Symptoms Based on XLSTM-Informer Fusion Model
by Zhen Chen, Lin Gao and Yuanming Cheng
Energies 2026, 19(6), 1389; https://doi.org/10.3390/en19061389 - 10 Mar 2026
Abstract
Accurate prediction of distribution network operating states is essential for implementing proactive fault warning systems. However, with the high penetration of distributed energy resources, measurement data exhibit strong nonlinearity and multi-scale temporal characteristics, posing significant challenges to existing prediction methods. Current mainstream approaches face a critical dilemma: traditional recurrent neural network (RNN) models (e.g., LSTM) suffer from vanishing gradients and memory bottlenecks in long-sequence forecasting, making it difficult to capture long-term evolutionary trends. In contrast, while standard Transformer models excel at global modeling, their smoothing effect renders them insensitive to subtle transient abrupt changes such as voltage sags, and they incur high computational complexity. To address the dual challenges of “difficulty in capturing transient abrupt changes” and “inability to simultaneously handle long-term trends,” this paper proposes a fault precursor trend prediction model that integrates Extended Long Short-Term Memory (XLSTM) with Informer, termed XLSTM-Informer. To tackle the challenge of extracting transient features, an XLSTM-based local encoder is constructed. By replacing the conventional Sigmoid activation with an improved exponential gating mechanism, the model achieves significantly enhanced sensitivity to instantaneous fluctuations in voltage and current. Additionally, a matrix memory structure is introduced to effectively mitigate information forgetting issues during long-sequence training. To overcome the challenge of modeling long-term dependencies, Informer is employed as the global decoder. Leveraging its ProbSparse sparse self-attention mechanism, the model substantially reduces computational complexity while accurately capturing long-range temporal dependencies. 
Experimental results on a real-world distribution network dataset demonstrate that the proposed model achieves substantially lower Mean Squared Error (MSE) and Mean Absolute Percentage Error (MAPE) compared to standalone CNN, LSTM, and other baseline models, as well as conventional LSTM–Informer hybrid approaches. Particularly under extreme operating conditions—such as sustained high summer loads and winter heating peak loads—the model successfully overcomes the trade-off limitations of traditional methods, enabling simultaneous and accurate prediction of both local precursors and global trends. This provides a reliable technical foundation for proactive warning systems in distribution networks. Full article
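
The contrast drawn above between saturating sigmoid gates and exponential gating can be illustrated with a minimal sketch (an illustrative toy, not the paper's XLSTM implementation; function names are invented, and the running-max stabilizer used in practice is omitted for clarity):

```python
import math

def sigmoid_gate(x):
    # Conventional LSTM gate: saturates near 1 for large inputs, so a
    # moderate reading and an extreme transient look almost alike.
    return 1.0 / (1.0 + math.exp(-x))

def exp_gate(x):
    # Exponential gate: unbounded, so it preserves a large relative
    # gap between the two pre-activations. Real implementations
    # subtract a running max for numerical stability.
    return math.exp(x)

# A steady reading (pre-activation 2.0) vs. a voltage-sag-like
# transient (pre-activation 6.0).
sig_ratio = sigmoid_gate(6.0) / sigmoid_gate(2.0)
exp_ratio = exp_gate(6.0) / exp_gate(2.0)
print(sig_ratio, exp_ratio)  # the exponential gate's ratio is ~50x larger
```

The sigmoid barely distinguishes the two inputs (ratio close to 1), while the exponential gate keeps a ratio of e^4, which is the sensitivity-to-transients property the abstract attributes to the improved gating mechanism.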

16 pages, 2031 KB  
Article
A Comparative Study of Transformer-Based and Classical Models for Financial Time-Series Forecasting
by Ting Liu
J. Risk Financial Manag. 2026, 19(3), 203; https://doi.org/10.3390/jrfm19030203 - 9 Mar 2026
Abstract
This study compares classical and deep learning models (ARIMA, Random Forest, RNN, LSTM, CNN, and Transformer) for forecasting one-day-ahead log returns r_{t+1} = ln(P_{t+1}/P_t) using daily data for six U.S.-listed equities (NVDA, TSLA, SMCI, GOOGL, PYPL, SNAP) from 2014 to 2024. Predictors include lagged price/return information, lagged macroeconomic variables (CPI, policy rate, GDP) to reflect information availability, and technical indicators (SMA, RSI, MACD) computed using rolling windows ending at day t to avoid look-ahead bias. Performance is evaluated in a walk-forward out-of-sample design, with hyperparameters selected using time-series validation within each training window. Empirically, results are asset-dependent: ARIMA and Random Forest remain strong baselines, while among the deep learning models LSTM is occasionally competitive and the Transformer is competitive but not uniformly dominant. For context, this study also reports a rule-based SMA(10/50) crossover benchmark evaluated net of transaction costs. Overall, the findings suggest that predictive signals in daily equity returns, when present, are modest and must be assessed under strict leakage controls and realistic evaluation protocols. Full article
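
The target definition and the leakage-controlled evaluation protocol described above can be sketched as follows (an illustrative sketch, not the study's code; `walk_forward_splits` and its parameters are invented names):

```python
import math

def log_returns(prices):
    # r_{t+1} = ln(P_{t+1} / P_t): the one-day-ahead log-return target.
    return [math.log(p1 / p0) for p0, p1 in zip(prices, prices[1:])]

def walk_forward_splits(n, train_size, test_size):
    # Expanding-window walk-forward evaluation: each test block strictly
    # follows the data it was trained on, so no future information leaks
    # backward into training.
    splits, start = [], 0
    while start + train_size + test_size <= n:
        train = list(range(0, start + train_size))
        test = list(range(start + train_size, start + train_size + test_size))
        splits.append((train, test))
        start += test_size
    return splits

r = log_returns([100.0, 105.0, 103.0])
print([round(x, 4) for x in r])  # [0.0488, -0.0192]
```

Technical indicators would likewise be computed only from windows ending at day t before being fed to each training split.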

34 pages, 8947 KB  
Article
Lightweight Evidential Time Series Imputation Method for Bridge Structural Health Monitoring
by Die Liu, Jianxi Yang, Lihua Chen, Tingjun Xu, Youjia Zhang, Lei Zhou and Jingyuan Shen
Buildings 2026, 16(5), 1076; https://doi.org/10.3390/buildings16051076 - 9 Mar 2026
Abstract
Long-term data loss resulting from sensor malfunctions, communication interruptions, and other factors in Structural Health Monitoring (SHM) significantly undermines the reliability of damage identification and safety assessment. Existing methods—ranging from statistical approaches and low-rank matrix completion to traditional machine learning and deep learning imputation techniques—often suffer from either limited accuracy or excessive model size and slow inference, making deployment in resource-constrained scenarios difficult. To address these challenges, this paper proposes TEFN–Imputation, a lightweight and efficient time-series imputation model. This model utilizes observation-driven non-stationary normalization to mitigate the impact of time-varying characteristics and dimensional discrepancies. It employs linear projection for temporal length alignment and constructs BPA-style mass representations from dual perspectives of time and channel. Furthermore, it replaces strict Dempster–Shafer belief combination with an expectation-based evidential aggregation (readout), thereby significantly reducing computational overhead while enabling uncertainty-aware evidential indicators for interpretation rather than claiming a direct accuracy gain from uncertainty modeling. The observed accuracy and robustness improvements are primarily attributed to the normalization and dual temporal–channel modeling design under the same lightweight readout. Systematic experiments on two real-world bridge monitoring datasets, Z24 and Hell Bridge, demonstrate that TEFN consistently maintains low Mean Absolute Error (MAE) and minimal volatility across various combinations of training and testing missing rates, exhibiting high robustness against variations in missing rates and train–test mismatches. Concurrently, compared to RNN and large-scale Transformer baselines, TEFN reduces parameter count and CPU inference time by one to two orders of magnitude. 
Thus, it achieves a superior trade-off among accuracy, efficiency, and model scale, making it highly suitable for online SHM and imputation tasks in practical engineering applications. Across the settings on Z24, TEFN achieves a mean MAE of 0.218 with a standard deviation of 0.002, while using only 0.02 MB of parameters and 2.73 ms of per-batch CPU inference. Full article
(This article belongs to the Section Building Structures)
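
The observation-driven normalization step described in this abstract can be illustrated with a small sketch (assumptions: a binary observation mask and per-channel statistics; names are hypothetical, not TEFN's API):

```python
def normalize_observed(series, mask):
    # Observation-driven normalization: mean/std are computed only over
    # observed entries (mask == 1), so missing slots do not bias them.
    obs = [x for x, m in zip(series, mask) if m]
    mu = sum(obs) / len(obs)
    std = (sum((x - mu) ** 2 for x in obs) / len(obs)) ** 0.5 or 1.0
    z = [(x - mu) / std if m else 0.0 for x, m in zip(series, mask)]
    return z, mu, std

def denormalize(values, mu, std):
    # Map imputed values back to the sensor's original scale.
    return [v * std + mu for v in values]

# Third sensor reading is missing (mask = 0) and placeholder-filled.
z, mu, std = normalize_observed([10.0, 12.0, 0.0, 14.0], [1, 1, 0, 1])
print(mu)  # 12.0 -- the placeholder does not drag the mean down
```

An imputation model would predict the missing normalized entry, after which `denormalize` restores the original units.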

17 pages, 2386 KB  
Article
Comparative Evaluation of Deep Learning Models for Respiratory Rate Estimation Using PPG-Derived Numerical Features
by Syed Mahedi Hasan, Mercy Golda Sam Raj and Kunal Mitra
Electronics 2026, 15(5), 1108; https://doi.org/10.3390/electronics15051108 - 7 Mar 2026
Abstract
Respiratory rate (RR) is a critical vital sign for the early detection of hypoxia and respiratory deterioration, yet its continuous monitoring remains challenging in clinical environments. Photoplethysmography (PPG) provides a non-invasive source of physiological information from which respiratory dynamics can be inferred. In this study, numerical physiological features derived from PPG data were used to comparatively evaluate multiple deep learning models for respiratory rate estimation. Fixed-length sliding windows were constructed from the dataset and used to train five neural network architectures: a Deep Feedforward Neural Network (DFNN), unidirectional and bidirectional Recurrent Neural Networks (RNN, Bi-RNN), and unidirectional and bidirectional Long Short-Term Memory networks (LSTM, Bi-LSTM). Model performance was assessed using mean absolute error (MAE), root mean squared error (RMSE), coefficient of determination (R²), and computational runtime. Results indicate that models incorporating temporal dependencies outperform the static feedforward baseline, achieving MAE values as low as 0.521 breaths/min, making them competitive with or lower than previously reported PPG-based approaches. These findings highlight the effectiveness of temporal deep learning models for respiratory rate estimation from PPG-derived numerical features and provide insight into accuracy–efficiency trade-offs relevant to real-time monitoring applications. Full article
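
The fixed-length sliding-window construction mentioned above can be sketched in a few lines (an illustrative sketch, not the study's preprocessing; the pairing of each window with the target at its final step is an assumption):

```python
def sliding_windows(features, targets, length, step=1):
    # Fixed-length sliding windows over the feature sequence; each
    # window is paired with the target at its final time step.
    xs, ys = [], []
    for start in range(0, len(features) - length + 1, step):
        xs.append(features[start:start + length])
        ys.append(targets[start + length - 1])
    return xs, ys

x, y = sliding_windows(list(range(10)), list(range(10)), length=4)
print(len(x), x[0], y[0])  # 7 [0, 1, 2, 3] 3
```

The windows `x` would feed the recurrent architectures directly, while a static DFNN baseline sees each window as a flat feature vector.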

19 pages, 2002 KB  
Article
Application of Machine Learning Approach to Classify Human Activity Level Based on Lifelog Data
by Si-Hwa Jeong, Woomin Nam and Keon Chul Park
Sensors 2026, 26(5), 1612; https://doi.org/10.3390/s26051612 - 4 Mar 2026
Abstract
The present paper provides a human activity-level classification model based on patient lifelogs collected from wearable devices. Over a period of about two months, heart rate, step count, and calorie consumption were collected from a wearable device for a total of 182 patients. Using these lifelog data, machine learning models were developed to classify the physical activity status of patients into five levels. The three types of wearable data (heart rate, step count, and calorie consumption) were pre-processed into an integrated time series. A total of 80% of the integrated data was used as the training dataset, and the remaining 20% as the test dataset. Sixteen algorithms were evaluated, including 12 traditional machine learning models (SVM, KNN, RF, etc.) and 4 deep learning models (CNN, RNN, etc.), with cross-validation performed by dividing the training dataset into 5 folds. Training parameters were then tuned to derive models with optimal settings. The performance of the final models was evaluated on new patient lifelog data, showing that human activity level can be classified with high accuracy based on heart rate and step count. Full article
(This article belongs to the Special Issue Sensors for Human Activity Recognition: 3rd Edition)
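
The 5-fold division of the training portion described above can be sketched as follows (a generic illustration, not the study's code; the shuffling seed and round-robin assignment are assumptions):

```python
import random

def kfold_indices(n, k=5, seed=0):
    # Shuffle sample indices, then deal them into k roughly equal
    # folds round-robin for cross-validation on the training portion.
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    return [idx[i::k] for i in range(k)]

folds = kfold_indices(20, k=5)
print([len(f) for f in folds])  # [4, 4, 4, 4, 4]
```

Each fold in turn serves as the validation set while the remaining four train the candidate model, before the final model is scored on the held-out 20% test split.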

23 pages, 760 KB  
Article
Trajectory Data Publishing Scheme Based on Transformer Decoder and Differential Privacy
by Haiyong Wang and Wei Huang
ISPRS Int. J. Geo-Inf. 2026, 15(3), 106; https://doi.org/10.3390/ijgi15030106 - 3 Mar 2026
Abstract
The proliferation of Location-Based Services (LBSs) has generated vast trajectory datasets that offer immense analytical value but pose critical privacy risks. Achieving an optimal balance between data utility and privacy preservation remains a challenge, a difficulty compounded by the limitations of existing methods in modeling complex, long-term spatiotemporal dependencies. To address this, this paper proposes a trajectory data publishing scheme combining a Transformer decoder with differential privacy. Unlike traditional single-layer approaches, the proposed method establishes a systematic generation–generalization framework. First, a Transformer decoder is integrated into a Generative Adversarial Network (GAN). This architecture mitigates the gradient vanishing issues common in RNN-based models, generating high-fidelity synthetic trajectories that capture long-range correlations while decoupling them from sensitive source data. Second, to provide rigorous privacy guarantees, a clustering-based generalization strategy is implemented, utilizing Exponential and Laplace mechanisms to ensure ϵ-differential privacy. Experiments on the Geolife and Foursquare NYC datasets demonstrate that the scheme significantly outperforms leading baselines, achieving a superior trade-off between privacy protection and data utility. Full article
(This article belongs to the Topic Recent Advances in Security, Privacy, and Trust)
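
The Laplace mechanism invoked above for ϵ-differential privacy admits a compact sketch (an illustrative sketch, not the paper's generalization pipeline; noise is sampled by inverse-CDF since Python's standard library has no Laplace sampler):

```python
import math
import random

def laplace_noise(scale, rng):
    # Sample Laplace(0, scale) via the inverse CDF of a uniform draw
    # u in [-0.5, 0.5).
    u = rng.random() - 0.5
    sign = 1.0 if u >= 0 else -1.0
    return -scale * sign * math.log(1.0 - 2.0 * abs(u))

def laplace_mechanism(value, sensitivity, epsilon, rng=None):
    # Releasing value + Laplace(0, sensitivity/epsilon) noise satisfies
    # epsilon-differential privacy for a query with this L1 sensitivity.
    rng = rng or random.Random()
    return value + laplace_noise(sensitivity / epsilon, rng)

rng = random.Random(42)
noisy = [laplace_mechanism(100.0, sensitivity=1.0, epsilon=1.0, rng=rng)
         for _ in range(5000)]
print(sum(noisy) / len(noisy))  # concentrates near the true value 100
```

Smaller ϵ means a larger noise scale and stronger privacy; the Exponential mechanism mentioned in the abstract plays the analogous role for discrete choices such as cluster centers.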

23 pages, 13416 KB  
Article
An Adaptive Ensemble Model Based on Deep Reinforcement Learning for the Prediction of Step-like Landslide Displacement
by Tengfei Gu, Lei Huang, Shunyao Tian, Zhichao Zhang, Huan Zhang and Yanke Zhang
Remote Sens. 2026, 18(5), 761; https://doi.org/10.3390/rs18050761 - 3 Mar 2026
Abstract
Accurate prediction of landslide displacement is crucial for hazard prevention. However, recurrent neural network (RNN) models have limitations in simultaneously capturing lag time and feature importance, and their black-box nature limits their interpretability. Moreover, the performance of single models varies across different deformation stages, especially during acceleration. To address these challenges, we propose an interpretable deep reinforcement learning-based adaptive ensemble (DRL-AE) framework. The method employs Seasonal and Trend decomposition using Loess to separate cumulative displacement into trend and periodic components. Trend and periodic sequences are predicted using double exponential smoothing and three RNN variants, respectively. An improved Convolutional Block Attention Module (ICBAM) enhances periodic feature extraction and provides temporal–spatial interpretability. The Deep Deterministic Policy Gradient algorithm adaptively integrates multi-model predictions in response to evolving environmental conditions. To validate the DRL-AE, a case study is conducted on the Baijiabao landslide in Zigui County, China. The results indicate that the DRL-AE substantially enhances prediction accuracy. For periodic displacement, it reduces MAE by 10.02% and RMSE by 6.65%, and increases R² by 4.27% compared with the ICBAM-GRU model. The results also confirm the effectiveness of ICBAM in feature extraction, and the generated heatmaps provide intuitive interpretability of the relevant triggering factors. Full article
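
The double exponential smoothing used above for the trend component is Holt's classic recursion, sketched here (an illustrative sketch; the smoothing constants are arbitrary, not the paper's fitted values):

```python
def double_exponential_smoothing(series, alpha=0.5, beta=0.3):
    # Holt's double exponential smoothing: tracks a level and a trend,
    # suitable for the slowly varying trend component of cumulative
    # displacement after STL decomposition.
    level, trend = series[0], series[1] - series[0]
    fitted = [level]
    for x in series[1:]:
        prev_level = level
        level = alpha * x + (1 - alpha) * (level + trend)
        trend = beta * (level - prev_level) + (1 - beta) * trend
        fitted.append(level)
    return fitted, level + trend  # one-step-ahead forecast

fitted, forecast = double_exponential_smoothing([1.0, 2.0, 3.0, 4.0])
print(forecast)  # a perfectly linear series is extrapolated to 5.0
```

The periodic component, by contrast, is handed to the RNN variants, whose outputs the DDPG agent then weights adaptively.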

18 pages, 1168 KB  
Article
A Hybrid Deep Learning Model for Predicting Tuna Distribution Around Drifting Fish Aggregating Devices
by Bo Song, Jian Liu, Tianjiao Zhang and Quanjin Chen
Sustainability 2026, 18(5), 2406; https://doi.org/10.3390/su18052406 - 2 Mar 2026
Abstract
Accurate prediction of tuna distribution is essential for sustainable fisheries management. This study develops a two-stage hybrid model combining Convolutional Neural Networks (CNN), Recurrent Neural Networks (RNN), and Random Forest (RF) to predict tuna distribution around drifting fish aggregating devices (DFAD) in the Western and Central Pacific Ocean (WCPO). Echo-sounder buoy data from DFAD were aggregated into 2° × 2° grid cells and matched with oceanographic variables from the Copernicus Marine Service. Random Forest-based variable importance analysis identified primary productivity (27%), chlorophyll-a (22%), and dissolved oxygen (18%) as the three dominant environmental drivers. The CNN-RNN component extracts spatiotemporal features from multi-layer ocean data, while the RF classifier performs binary classification of tuna aggregation zones (high-yield vs. low-yield). All five models (Decision Tree, RF, CNN, Transformer, and CNN-RNN-RF) were evaluated on 557 samples using 5-fold stratified cross-validation, with each fold further split 80:20 for training and validation. The proposed CNN-RNN-RF model achieved the highest performance with an AUC of 0.830, accuracy of 82.6%, and F1-scores of 86.3% (high-yield) and 76.2% (low-yield), outperforming the best baseline model (RF: AUC 0.761, accuracy 75.4%). Predicted high-yield zones showed strong consistency with fishing log records, demonstrating the potential of integrating echo-sounder data with hybrid deep learning for data-driven tuna fisheries management. Full article
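
The 2° × 2° grid aggregation of buoy data described above can be sketched as follows (an illustrative sketch, not the study's pipeline; keying cells by their south-west corner and summing a biomass reading are assumptions):

```python
import math
from collections import defaultdict

def grid_cell(lat, lon, size=2.0):
    # Key each observation by the south-west corner of its
    # size-by-size degree cell.
    return (math.floor(lat / size) * size, math.floor(lon / size) * size)

def aggregate(observations, size=2.0):
    # Sum echo-sounder biomass readings per grid cell.
    cells = defaultdict(float)
    for lat, lon, biomass in observations:
        cells[grid_cell(lat, lon, size)] += biomass
    return dict(cells)

obs = [(-3.7, 155.2, 10.0), (-3.1, 154.9, 5.0), (1.2, 160.3, 7.0)]
print(aggregate(obs))  # {(-4.0, 154.0): 15.0, (0.0, 160.0): 7.0}
```

Each cell would then be matched with the corresponding oceanographic variables before feeding the CNN-RNN feature extractor.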

22 pages, 13052 KB  
Article
Enhanced Migratory Biological Echo Extrapolation from Weather Radar Using ISA-LSTM
by Dou Meng, Yunping Liu, Dongli Wu, Zhiliang Deng, Yifu Chen and Chunzhi Wang
Atmosphere 2026, 17(3), 257; https://doi.org/10.3390/atmos17030257 - 28 Feb 2026
Abstract
Weather radar provides continuous, large-scale observations of aerial biological activity. However, biological echoes typically exhibit weak signals, sparse distributions, and non-stationary abrupt variations, causing existing extrapolation models to suffer from over-smoothing and loss of detail and making it difficult to capture their short-term evolution effectively. To address this issue, we propose an Integrated Self-Attention Long Short-Term Memory (ISA-LSTM) model that integrates a self-attention mechanism within the Predictive Recurrent Neural Network (PredRNN) framework. Coupled convolutional modules are introduced to enhance feature interactions between inputs and hidden states, while a spatiotemporal self-attention mechanism improves long-term dependency modeling and local detail preservation. Experiments conducted on 6000 biological echo samples from three weather radars in the Poyang Lake region demonstrate that the proposed model achieves superior extrapolation accuracy and stability compared with existing methods, maintaining a low false-alarm rate for lead times of up to 50 min. The results suggest that ISA-LSTM offers an effective deep learning approach for biological echo extrapolation, with applications in aviation safety and agricultural pest and disease early warning. Full article
(This article belongs to the Section Atmospheric Techniques, Instruments, and Modeling)

29 pages, 1954 KB  
Review
A Review on Bathymetric Inversion Research Based on Deep Learning Models and Remote Sensing Images
by Delong Liu, Yufeng Shi and Hong Fang
Remote Sens. 2026, 18(5), 720; https://doi.org/10.3390/rs18050720 - 27 Feb 2026
Abstract
High-precision inversion of shallow-water depth is crucial to marine resource development, ecological protection, and national defense security. Traditional acoustic detection, LiDAR, and empirical models are limited by high cost, low efficiency, or water-quality dependence, and struggle to meet the growing demand for shallow-water depth data. With the rapid development of remote sensing, computer science, and artificial intelligence, bathymetric inversion based on remote sensing images and deep learning models has become a research hotspot. In this study, journal articles and conference papers were searched in the Web of Science (WOS) and Google Scholar databases using keywords such as “remote sensing image”, “bathymetry”, and “deep learning model”. The publications range from January 2021 to September 2025. A total of 309 relevant studies were retrieved; after screening and quality control, 132 core studies were ultimately selected for this review. These studies were classified by deep learning model, including CNN, U-Net, MLP, and RNN. The review analyzes and summarizes the characteristics of the different deep learning models in bathymetric inversion, as well as their data source selection, inversion accuracy, and limitations. Finally, future development trends are discussed in light of the latest research results. Full article
(This article belongs to the Special Issue Artificial Intelligence and Big Data for Oceanography (2nd Edition))

46 pages, 7510 KB  
Article
Semantic Modeling of Ship Collision Reports: Ontology Design, Knowledge Extraction, and Severity Classification
by Hongchu Yu, Xiaohan Xu, Zheng Guo, Tianming Wei and Lei Xu
J. Mar. Sci. Eng. 2026, 14(5), 448; https://doi.org/10.3390/jmse14050448 - 27 Feb 2026
Abstract
With the expansion of water transportation networks and increasing traffic intensity, maritime accidents have become frequent, posing significant threats to safety and property. This study presents a knowledge graph-driven framework for maritime accident analysis, addressing the limitations of traditional risk analysis methods in extracting and organizing unstructured accident data. First, a standardized ontology for ship collision accidents is developed, defining core concepts such as event, spatiotemporal behavior, causation, consequence, responsibility, and decision-making. Advanced natural language processing models, including a lexicon-enhanced LEBERT-BiLSTM-CRF and a K-BERT-BiLSTM-CRF incorporating ship collision knowledge triplets, are proposed for named entity recognition and relation extraction, with F1-score improvements of 6.7% and 1.2%, respectively. The constructed accident knowledge graph integrates heterogeneous data, enabling semantic organization and efficient retrieval. Leveraging graph topological features, an accident severity classification model is established, where a graph-feature-driven LSTM-RNN demonstrates robust performance, especially with imbalanced data. Comparative experiments show the superiority of this approach over conventional models such as XGBoost and random forest. Overall, this research demonstrates that knowledge graph-driven methods can significantly enhance maritime accident knowledge extraction and severity classification, providing strong information support and methodological advances for intelligent accident management and prevention. Full article
(This article belongs to the Section Ocean Engineering)

27 pages, 1589 KB  
Review
De Novo Structure Prediction from Tandem Mass Spectra: Algorithms, Benchmarks, and Limitations
by Mark Yu. Schneider, Daniil D. Kholmanskikh, Kirill Ya. Romanov, Elena A. Perekina, Sergei A. Nikolenko, Ruslan Yu. Lukin and Ivan V. Golov
Molecules 2026, 31(5), 769; https://doi.org/10.3390/molecules31050769 - 25 Feb 2026
Abstract
The identification of unknown molecules from analytical data remains a fundamental challenge in chemistry, with critical implications for drug discovery, metabolomics, and natural product research. While tandem mass spectrometry provides rich structural fingerprints, most spectra are absent from reference libraries, spurring the development of de novo generative models. However, their true accuracy has been difficult to assess. Our critical analysis reveals that state-of-the-art models achieve only 4.1% top-10 accuracy on rigorously leakage-controlled benchmarks like MassSpecGym. This sobering figure stands in stark contrast to earlier, overly optimistic reports, a discrepancy we attribute to pervasive data leakage in naive data splits. This review traces the field’s rapid evolution through three architectural eras: from fingerprint-conditioned RNN pipelines to end-to-end sequence models and, most recently, to graph-native diffusion under molecular-formula constraints. We demonstrate that explicitly conditioning generative models on a molecular formula significantly improves exact-match accuracy compared to unconstrained baselines. Crucially, our analysis distinguishes between two experimentally relevant paradigms: formula-conditioned generation for true unknown discovery and scaffold-based generation for hypothesis-driven research. While the latter shows high potential with oracle scaffolds, its performance drastically drops with predicted ones, revealing a critical bottleneck. To build the next generation of reliable tools, we propose a clear roadmap centered on standardized, leakage-aware benchmarking and transparent reporting. Full article
(This article belongs to the Special Issue Advances in Computational Spectroscopy, 2nd Edition)
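
The top-10 accuracy metric cited above is simply the fraction of spectra whose true structure appears among the model's top-k generations; a minimal sketch (exact string match here for illustration; real benchmarks compare canonicalized structures):

```python
def top_k_accuracy(predictions, targets, k=10):
    # Fraction of spectra whose true structure appears among the
    # model's top-k generated candidates (exact match).
    hits = sum(1 for cands, true in zip(predictions, targets)
               if true in cands[:k])
    return hits / len(targets)

# Toy example with hypothetical SMILES candidates per spectrum.
preds = [["c1ccccc1", "CCO"], ["CCN"], ["CCO", "CC=O"]]
truth = ["CCO", "CCO", "CC=O"]
print(top_k_accuracy(preds, truth, k=2))  # 2 of 3 hits -> ~0.667
```

Leakage-aware benchmarks such as MassSpecGym make this number small precisely because no near-duplicate of a test molecule is allowed in training.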

30 pages, 4409 KB  
Article
Divergent Trajectories of the Water–Energy–Food Nexus in the Yangtze River Economic Belt
by Yiyang Li, Hongrui Wang, Li Zhang, Hongchong Wang, Yuhan Ding and Xinlong Du
Water 2026, 18(5), 538; https://doi.org/10.3390/w18050538 - 25 Feb 2026
Abstract
Unraveling the coupling mechanisms of the Water–Energy–Food (WEF) nexus is critical for regional synergistic security and high-quality development. Using an integrated “relationship identification, equation construction, and scenario prediction” framework, this study characterized the spatiotemporal evolution of WEF interactions in the Yangtze River Economic Belt. Under this framework, a Granger causality test coupled with a SHAP interpretability model was first employed to quantify the causal strength among nexus elements, followed by a Bayesian Vector Autoregression model integrated with a hybrid Recurrent Neural Network (RNN) and System Dynamics (SD) approach to simulate evolutionary trajectories from 2024 to 2035. Results showed that: (1) The nexus mechanisms exhibited significant spatial duality. Upstream egg production drove a high virtual water footprint, while inland seafood consumption imposed a non-linear energy premium due to cold-chain dependency. In Shanghai, a strong diesel–groundwater coupling revealed a trade-off between energy input and underground safety. (2) Localized feed cultivation was the core driver for upstream water pressure, whereas logistics intensity was the dominant factor for energy–water interactions in urbanized regions. (3) From 2024 to 2035, the nexus structure will undergo bidirectional divergence. Ecological water demand in the midstream is projected to surge by over 130%, and Anhui’s milk production is forecast to more than double from 107.77 to 225.7 million tons. The findings provide scientific support for coordinating ecological conservation and high-quality development. Full article
(This article belongs to the Special Issue Advanced Perspectives on the Water–Energy–Food Nexus)
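The causal-strength step described in the abstract rests on the bivariate Granger test, which asks whether lags of one series improve prediction of another beyond its own lags. A minimal numpy-only sketch of that F-test follows; the function name `granger_f_stat`, the lag order, and the synthetic data are illustrative assumptions, not the paper's implementation (the authors' actual pipeline also couples this with SHAP).

```python
import numpy as np

def granger_f_stat(y, x, lag=2):
    """F statistic testing whether lags of x help predict y beyond y's own
    lags -- the core of a bivariate Granger causality test (illustrative)."""
    n = len(y)
    rows = n - lag
    ones = np.ones(rows)
    # lagged regressors: column k holds the (k+1)-step-back values
    y_lags = np.column_stack([y[lag - k - 1 : n - k - 1] for k in range(lag)])
    x_lags = np.column_stack([x[lag - k - 1 : n - k - 1] for k in range(lag)])
    target = y[lag:]

    def rss(design):
        beta, *_ = np.linalg.lstsq(design, target, rcond=None)
        resid = target - design @ beta
        return resid @ resid

    rss_r = rss(np.column_stack([y_lags, ones]))          # restricted: own lags only
    rss_f = rss(np.column_stack([y_lags, x_lags, ones]))  # full: add x's lags
    df_denom = rows - 2 * lag - 1
    return ((rss_r - rss_f) / lag) / (rss_f / df_denom)
```

In practice one would compare the statistic against an F distribution (or use a packaged test such as `statsmodels`' `grangercausalitytests`) and check both directions, since Granger causality is asymmetric.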
25 pages, 896 KB  
Article
Sequential Deep Learning with Feature Compression and Optimal State Estimation for Indoor Visible Light Positioning
by Negasa Berhanu Fite, Getachew Mamo Wegari and Heidi Steendam
Photonics 2026, 13(2), 211; https://doi.org/10.3390/photonics13020211 - 23 Feb 2026
Abstract
Visible Light Positioning (VLP) is widely regarded as a promising technology for high-precision indoor localization due to its immunity to radio-frequency interference and compatibility with existing Light-Emitting Diode (LED) lighting infrastructure. Despite recent progress, current VLP systems remain fundamentally limited by nonlinear received signal strength (RSS) characteristics, unknown transmitter orientations, and dynamic indoor disturbances. Existing solutions typically address these challenges in isolation, resulting in limited robustness and scalability. This paper proposes SCENE-VLP (Sequential Deep Learning with Feature Compression and Optimal State Estimation), a structured positioning framework that integrates feature compression, temporal sequence modeling, and probabilistic state refinement within a unified estimation pipeline. Specifically, SCENE-VLP combines Principal Component Analysis (PCA) and Denoising Autoencoders (DAE) for linear and nonlinear observation conditioning, Gated Recurrent Units (GRU) for modeling temporal dependencies in RSS sequences, and Kalman-based filtering (KF/EKF) for recursive state-space refinement. The framework is formulated as a hierarchical approximation of the nonlinear observation model, linking data-driven measurement learning with Bayesian state estimation. A systematic ablation study across multiple scenarios, including same-dataset evaluation and cross-dataset generalization, demonstrates that each component provides complementary benefits. Feature compression reduces redundancy while preserving dominant signal structure; GRU significantly improves robustness over static regression; and recursive filtering consistently reduces positioning error compared to unfiltered predictions. While both KF and EKF improve performance, EKF provides incremental refinement under mild nonlinearities. 
Extensive simulations conducted on an indoor dataset collected from a realistic deployment with eight ceiling-mounted LEDs and a single photodetector (PD) show that SCENE-VLP achieves sub-decimeter localization accuracy, with P50 and P95 errors of 1.84 cm and 6.52 cm, respectively. Cross-scenario evaluation further confirms stable generalization and statistically consistent improvements. These results demonstrate that the structured integration of observation conditioning, temporal modeling, and Bayesian refinement yields measurable gains beyond partial pipeline configurations, establishing SCENE-VLP as a robust and scalable solution for next-generation indoor visible light positioning systems. Full article
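The final stage of the pipeline described above, recursive Kalman refinement of the learned position estimates, can be illustrated with a constant-velocity filter over 2-D positions. This is a minimal sketch: the function name `kalman_smooth_positions`, the noise parameters, and the constant-velocity motion model are assumptions for illustration, not the paper's tuned KF/EKF configuration.

```python
import numpy as np

def kalman_smooth_positions(meas, dt=0.1, q=1e-4, r=4e-4):
    """Constant-velocity Kalman filter over 2-D position measurements.
    meas: (T, 2) noisy position estimates (e.g. from a learned regressor).
    Returns filtered (T, 2) positions. Parameter values are illustrative."""
    F = np.eye(4); F[0, 2] = F[1, 3] = dt          # state: [x, y, vx, vy]
    H = np.zeros((2, 4)); H[0, 0] = H[1, 1] = 1.0  # observe position only
    Q = q * np.eye(4)                              # process noise covariance
    R = r * np.eye(2)                              # measurement noise covariance
    s = np.array([meas[0, 0], meas[0, 1], 0.0, 0.0])
    P = np.eye(4)
    out = [meas[0]]
    for z in meas[1:]:
        s = F @ s                                  # predict state
        P = F @ P @ F.T + Q                        # predict covariance
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)   # Kalman gain
        s = s + K @ (z - H @ s)                    # update with innovation
        P = (np.eye(4) - K @ H) @ P
        out.append(s[:2])
    return np.array(out)
```

Because the filter fuses each new measurement with a motion prediction, its steady-state error falls below the raw measurement noise whenever the motion model is approximately correct, which is the "measurable gain beyond unfiltered predictions" the ablation study reports.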