Search Results (9,941)

Search Parameters:
Keywords = temporal processes

22 pages, 11216 KB  
Article
A Multi-Scale Remote Sensing Image Change Detection Network Based on Vision Foundation Model
by Shenbo Liu, Dongxue Zhao and Lijun Tang
Remote Sens. 2026, 18(3), 506; https://doi.org/10.3390/rs18030506 - 4 Feb 2026
Abstract
As a key technology in the intelligent interpretation of remote sensing, remote sensing image change detection aims to automatically identify surface changes from images of the same area acquired at different times. Although vision foundation models have demonstrated outstanding capabilities in image feature representation, their inherent patch-based processing and global attention mechanisms limit their effectiveness in perceiving multi-scale targets. To address this, we propose a multi-scale remote sensing image change detection network based on a vision foundation model, termed SAM-MSCD. This network integrates an efficient parameter fine-tuning strategy with a cross-temporal multi-scale feature fusion mechanism, significantly improving change perception accuracy in complex scenarios. Specifically, the Low-Rank Adaptation mechanism is adopted for parameter-efficient fine-tuning of the Segment Anything Model (SAM) image encoder, adapting it for the remote sensing change detection task. A bi-temporal feature interaction module (BIM) is designed to enhance the semantic alignment and the modeling of change relationships between feature maps from different time phases. Furthermore, a change feature enhancement module (CFEM) is proposed to fuse and highlight differential information from different levels, achieving precise capture of multi-scale changes. Comprehensive experimental results on four public remote sensing change detection datasets, namely LEVIR-CD, WHU-CD, NJDS, and MSRS-CD, demonstrate that SAM-MSCD surpasses current state-of-the-art (SOTA) methods on several key evaluation metrics, including the F1-score and Intersection over Union (IoU), indicating its broad prospects for practical application. Full article
(This article belongs to the Section AI Remote Sensing)
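The Low-Rank Adaptation strategy mentioned in this abstract is a standard technique and can be sketched in a few lines. The snippet below is a generic NumPy illustration of LoRA itself, not the paper's SAM-MSCD implementation; the dimensions, rank, and scaling factor are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def lora_forward(x, W, A, B, scale=1.0):
    """Low-Rank Adaptation: the pretrained weight W stays frozen; only the
    rank-r factors A and B are trained, so y = x @ (W + scale * B @ A).T."""
    return x @ (W + scale * (B @ A)).T

d_in, d_out, r = 16, 16, 2                 # r << d_in is the LoRA rank
W = rng.standard_normal((d_out, d_in))     # frozen pretrained weight
A = rng.standard_normal((r, d_in)) * 0.01  # trainable down-projection
B = np.zeros((d_out, r))                   # trainable up-projection, zero-initialized
x = rng.standard_normal((4, d_in))

# With B initialized to zero, the adapted layer reproduces the base model exactly.
assert np.allclose(lora_forward(x, W, A, B), x @ W.T)

# Trainable parameters drop from d_in*d_out to r*(d_in + d_out).
print(d_in * d_out, r * (d_in + d_out))  # 256 64
```

The zero-initialized `B` is the key design choice: fine-tuning starts from the unmodified pretrained model, which is what makes adapting a SAM-sized encoder to a new task cheap and stable.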

22 pages, 3999 KB  
Article
Eye Movement Classification Using Neuromorphic Vision Sensors
by Khadija Iddrisu, Waseem Shariff, Maciej Stec, Noel O’Connor and Suzanne Little
J. Eye Mov. Res. 2026, 19(1), 17; https://doi.org/10.3390/jemr19010017 - 4 Feb 2026
Abstract
Eye movement classification, particularly the identification of fixations and saccades, plays a vital role in advancing our understanding of neurological functions and cognitive processing. Conventional modalities of data, such as RGB webcams, often face limitations such as motion blur, latency and susceptibility to noise. Neuromorphic Vision Sensors, also known as event cameras (ECs), capture pixel-level changes asynchronously and at a high temporal resolution, making them well suited for detecting the swift transitions inherent to eye movements. However, the resulting data are sparse, which makes them less well suited for use with conventional algorithms. Spiking Neural Networks (SNNs) are gaining attention due to their discrete spatio-temporal spike mechanism ideally suited for sparse data. These networks offer a biologically inspired computational paradigm capable of modeling the temporal dynamics captured by event cameras. This study validates the use of Spiking Neural Networks (SNNs) with event cameras for efficient eye movement classification. We manually annotated the EV-Eye dataset, the largest publicly available event-based eye-tracking benchmark, into sequences of saccades and fixations, and we propose a convolutional SNN architecture operating directly on spike streams. Our model achieves an accuracy of 94% and a precision of 0.92 across annotated data from 10 users. As the first work to apply SNNs to eye movement classification using event data, we benchmark our approach against spiking baselines such as SpikingVGG and SpikingDenseNet, and additionally provide a detailed computational complexity comparison between SNN and ANN counterparts. Our results highlight the efficiency and robustness of SNNs for event-based vision tasks, with over one order of magnitude improvement in computational efficiency, with implications for fast and low-power neurocognitive diagnostic systems. Full article

17 pages, 784 KB  
Article
A Wideband Oscillation Classification Method Based on Multimodal Feature Fusion
by Yingmin Zhang, Yixiong Liu, Zongsheng Zheng and Shilin Gao
Electronics 2026, 15(3), 682; https://doi.org/10.3390/electronics15030682 - 4 Feb 2026
Abstract
With the increasing penetration of renewable energy sources and power-electronic devices, modern power systems exhibit pronounced wideband oscillation characteristics with large frequency spans, strong modal coupling, and significant time-varying behaviors. Accurate identification and classification of wideband oscillation patterns have therefore become critical challenges for ensuring the secure and stable operation of “dual-high” power systems. Existing methods based on signal processing or single-modality deep-learning models often fail to fully exploit the complementary information embedded in heterogeneous data representations, resulting in limited performance when dealing with complex oscillation patterns. To address these challenges, this paper proposes a multimodal attention-based fusion network for wideband oscillation classification. A dual-branch deep-learning architecture is developed to process Gramian Angular Difference Field images and raw time-series signals in parallel, enabling collaborative extraction of global structural features and local temporal dynamics. An improved Inception module is employed in the image branch to enhance multi-scale spatial feature representation, while a gated recurrent unit network is utilized in the time-series branch to model dynamic evolution characteristics. Furthermore, an attention-based fusion mechanism is introduced to adaptively learn the relative importance of different modalities and perform dynamic feature aggregation. Extensive experiments are conducted using a dataset constructed from mathematical models and engineering-oriented simulations. Comparative studies and ablation studies demonstrate that the proposed method significantly outperforms conventional signal-processing-based approaches and single-modality deep-learning models in terms of classification accuracy, robustness, and generalization capability. The results confirm the effectiveness of multimodal feature fusion and attention mechanisms for accurate wideband oscillation classification, providing a promising solution for advanced power system monitoring and analysis. Full article
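The Gramian Angular Difference Field transform used by the image branch is a well-known encoding and can be reproduced directly. Below is a minimal NumPy sketch of the transform itself, not the paper's pipeline; the toy signal is an assumption for illustration.

```python
import numpy as np

def gadf(x):
    """Gramian Angular Difference Field of a 1-D signal.

    Rescale the signal to [-1, 1], map each sample to a polar angle
    phi = arccos(x_scaled), and form the image GADF[i, j] = sin(phi_i - phi_j).
    The result is an n x n image that a CNN branch can consume."""
    x = np.asarray(x, dtype=float)
    x_scaled = 2 * (x - x.min()) / (x.max() - x.min()) - 1
    phi = np.arccos(np.clip(x_scaled, -1.0, 1.0))
    s, c = np.sin(phi), np.cos(phi)  # note: cos(phi) equals x_scaled
    # sin(a - b) = sin(a)cos(b) - cos(a)sin(b), expanded via outer products
    return np.outer(s, c) - np.outer(c, s)

signal = np.sin(np.linspace(0, 4 * np.pi, 64))  # toy oscillation signal
img = gadf(signal)
print(img.shape)  # (64, 64)
```

The resulting image is antisymmetric with a zero diagonal, so it encodes pairwise temporal differences rather than static correlations, which is why the difference field (rather than the summation field) suits oscillation analysis.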

22 pages, 1659 KB  
Article
Lightweight Depression Detection Using 3D Facial Landmark Pseudo-Images and CNN-LSTM on DAIC-WOZ and E-DAIC
by Achraf Jallaglag, My Abdelouahed Sabri, Ali Yahyaouy and Abdellah Aarab
BioMedInformatics 2026, 6(1), 8; https://doi.org/10.3390/biomedinformatics6010008 - 4 Feb 2026
Abstract
Background: Depression is a common mental disorder, and early and objective diagnosis of depression is challenging. New advances in deep learning show promise for processing audio and video content when screening for depression. Nevertheless, the majority of current methods rely on raw video processing or multimodal pipelines, which are computationally costly, difficult to interpret, and raise privacy issues, restricting their use in actual clinical settings. Methods: To overcome these constraints, we introduce, for the first time, a purely visual, lightweight deep learning framework based solely on spatiotemporal 3D facial landmarks extracted from clinical interview videos contained in the DAIC-WOZ and Extended DAIC-WOZ (E-DAIC) datasets. Our method does not use raw video or any type of semi-automated multimodal fusion. Whereas raw video processing can be computationally expensive and is not well suited to investigating specific variables, we first take a temporal series of 3D landmarks, convert them to pseudo-images (224 × 224 × 3), and then use them within a CNN-LSTM framework, which analyzes both the spatial configuration and the temporal dynamics of facial behavior. Results: The experimental results indicate macro-average F1 scores of 0.74 on DAIC-WOZ and 0.762 on E-DAIC, demonstrating robust performance under heavy class imbalances, with a variability of ±0.03 across folds. Conclusion: These results indicate that landmark-based spatiotemporal modeling is a promising direction for lightweight, interpretable, and scalable automatic depression detection, and they suggest opportunities for embedding ADI systems within real-world MHA frameworks. Full article

21 pages, 3762 KB  
Article
Motion Strategy Generation Based on Multimodal Motion Primitives and Reinforcement Learning Imitation for Quadruped Robots
by Qin Zhang, Guanglei Li, Benhang Liu, Chenxi Li, Chuanle Zhu and Hui Chai
Biomimetics 2026, 11(2), 115; https://doi.org/10.3390/biomimetics11020115 - 4 Feb 2026
Abstract
With the advancement of task-oriented reinforcement learning (RL), the capability of quadruped robots for motion generation and complex task completion has significantly improved. However, current control strategies require extensive domain expertise and time-consuming design processes to acquire operational skills and achieve multi-task motion control, often failing to effectively manage complex behaviors composed of multiple coordinated actions. To address these limitations, this paper proposes a motion policy generation method for quadruped robots based on multimodal motion primitives and imitation learning. A multimodal motion library was constructed using 3D engine motion design, motion capture data retargeting, and trajectory planning. A temporal domain-based behavior planner was designed to combine these primitives and generate complex behaviors. We developed an RL-based imitation learning training framework to achieve precise trajectory tracking and rapid policy deployment, ensuring the effective application of actions and behaviors on the quadruped platform. Simulation and physical experiments conducted on the Lite3 quadruped robot validated the efficacy of the proposed approach, offering a new paradigm for the deployment and development of motion strategies for quadruped robots. Full article
(This article belongs to the Section Locomotion and Bioinspired Robotics)

23 pages, 2375 KB  
Article
Transformer-Based Dynamic Flame Image Analysis for Real-Time Carbon Content Prediction in BOF Steelmaking
by Hao Yang, Meixia Fu, Wei Li, Lei Sun, Qu Wang, Na Chen, Ronghui Zhang, Zhenqian Wang, Yifan Lu, Zhangchao Ma and Jianquan Wang
Metals 2026, 16(2), 185; https://doi.org/10.3390/met16020185 - 4 Feb 2026
Abstract
Accurately predicting molten steel carbon content plays a crucial role in improving productivity and energy efficiency during the Basic Oxygen Furnace (BOF) steelmaking process. However, current data-driven methods primarily focus on endpoint carbon content prediction, while lacking sufficient investigation into real-time curve forecasting during the blowing process, which hinders real-time closed-loop BOF control. In this article, a novel Transformer-based framework is presented for real-time carbon content prediction. The contributions include three main aspects. First, the prediction paradigm is reconstructed by converting the regression task into a sequence classification task, which demonstrates superior robustness and accuracy compared to traditional regression methods. Second, the focus is shifted from traditional endpoint-only forecasting to long-term prediction by introducing a Transformer-based model for continuous, real-time prediction of carbon content. Last, spatial–temporal feature representation is enhanced by integrating an optical flow channel with the original RGB channels, and the resulting four-channel input tensor effectively captures the dynamic characteristics of the converter mouth flame. Experimental results on an independent test dataset demonstrate favorable performance of the proposed framework in predicting carbon content trajectories. The model achieves high accuracy, reaching 84% during the critical decarburization endpoint phase where carbon content decreases from 0.0829 to 0.0440, and delivers predictions with approximately 75% of errors within ±0.05. Such performance demonstrates the practical potential for supporting intelligent BOF steelmaking. Full article
23 pages, 2302 KB  
Article
Learnable Feature Disentanglement with Temporal-Complemented Motion Enhancement for Micro-Expression Recognition
by Yu Qian, Shucheng Huang and Kai Qu
Entropy 2026, 28(2), 180; https://doi.org/10.3390/e28020180 - 4 Feb 2026
Abstract
Micro-expressions (MEs) are involuntary facial movements that reveal genuine emotions, holding significant value in fields like deception detection and psychological diagnosis. However, micro-expression recognition (MER) is fundamentally challenged by the entanglement of subtle emotional motions with identity-specific features. Traditional methods, such as those based on Robust Principal Component Analysis (RPCA), attempt to separate identity and motion components through fixed preprocessing and coarse decomposition. However, these methods can inadvertently remove subtle emotional cues and are disconnected from subsequent module training, limiting the discriminative power of features. Inspired by the Bruce–Young model of facial cognition, which suggests that facial identity and expression are processed via independent neural routes, we recognize the need for a more dynamic, learnable disentanglement paradigm for MER. We propose LFD-TCMEN, a novel network that introduces an end-to-end learnable feature disentanglement framework. The network is synergistically optimized by a multi-task objective unifying orthogonality, reconstruction, consistency, cycle, identity, and classification losses. Specifically, the Disentangle Representation Learning (DRL) module adaptively isolates pure motion patterns from subject-specific appearance, overcoming the limitations of static preprocessing, while the Temporal-Complemented Motion Enhancement (TCME) module integrates purified motion representations—highlighting subtle facial muscle activations—with optical flow dynamics to comprehensively model the spatiotemporal evolution of MEs. Extensive experiments on CAS(ME)3 and DFME benchmarks demonstrate that our method achieves state-of-the-art cross-subject performance, validating the efficacy of the proposed learnable disentanglement and synergistic optimization. Full article

25 pages, 727 KB  
Article
Migraine and Epilepsy Discrimination Using DTCWT and Random Subspace Ensemble Classifier
by Tuba Nur Subasi and Abdulhamit Subasi
Mach. Learn. Knowl. Extr. 2026, 8(2), 35; https://doi.org/10.3390/make8020035 - 4 Feb 2026
Abstract
Migraine and epilepsy are common neurological disorders that share overlapping symptoms, such as visual disturbances and altered consciousness, making accurate diagnosis challenging. Although their underlying mechanisms differ, both conditions involve recurrent irregular brain activity, and traditional EEG-based diagnosis relies heavily on clinical interpretation, which may be subjective and insufficient for clear differentiation. To address this challenge, this study introduces an automated EEG classification framework combining Dual Tree Complex Wavelet Transform (DTCWT) for feature extraction with a Random Subspace Ensemble Classifier for multi-class discrimination. EEG data recorded under photic and nonphotic stimulation were analyzed to capture both temporal and frequency characteristics. DTCWT proved effective in modeling the non-stationary nature of EEG signals and extracting condition-specific features, while the ensemble classifier improved generalization by training multiple models on diverse feature subsets. The proposed system achieved an average accuracy of 99.50%, along with strong F-measure, AUC, and Kappa scores. Notably, although previous studies suggest heightened EEG activity in migraine patients during flash stimulation, findings here indicate that flash stimulation alone does not reliably distinguish migraine from epilepsy. Overall, this research highlights the promise of advanced signal processing and machine learning techniques in enhancing diagnostic precision for complex neurological disorders. Full article
(This article belongs to the Section Learning)
44 pages, 5542 KB  
Article
A Novel Probabilistic Model for Streamflow Analysis and Its Role in Risk Management and Environmental Sustainability
by Tassaddaq Hussain, Enrique Villamor, Mohammad Shakil, Mohammad Ahsanullah and Bhuiyan Mohammad Golam Kibria
Axioms 2026, 15(2), 113; https://doi.org/10.3390/axioms15020113 - 4 Feb 2026
Abstract
Probabilistic streamflow models play a pivotal role in quantifying hydrological uncertainty and form the backbone of modern risk management strategies for flood and drought forecasting, water allocation planning, and the design of resilient infrastructure. Unlike deterministic approaches that yield single-point estimates, these models provide a spectrum of possible outcomes, enabling a more realistic assessment of extreme events and supporting informed, sustainable water resource decisions. By explicitly accounting for natural variability and uncertainty, probabilistic models promote transparent, robust, and equitable risk evaluations, helping decision-makers balance economic costs, societal benefits, and environmental protection for long-term sustainability. In this study, we introduce the bounded half-logistic distribution (BHLD), a novel heavy-tailed probability model constructed using the T–Y method for distribution generation, where T denotes a transformer distribution and Y represents a baseline generator. Although the BHLD is conceptually related to the Pareto and log-logistic families, it offers several distinctive advantages for streamflow modeling, including a flexible hazard rate that can be unimodal or monotonically decreasing, a finite lower bound, and closed-form expressions for key risk measures such as Value at Risk (VaR) and Tail Value at Risk (TVaR). The proposed distribution is defined on a lower-bounded domain, allowing it to realistically capture physical constraints inherent in flood processes, while a log-logistic-based tail structure provides the flexibility needed to model extreme hydrological events. Moreover, the BHLD is analytically characterized through a governing differential equation and further examined via its characteristic function and the maximum entropy principle, ensuring stable and efficient parameter estimation. 
It integrates a half-logistic generator with a log-logistic baseline, yielding a power-law tail decay governed by the parameter β, which is particularly effective for representing extreme flows. Fundamental properties, including the hazard rate function, moments, and entropy measures, are derived in closed form, and model parameters are estimated using the maximum likelihood method. Applied to four real streamflow data sets, the BHLD demonstrates superior performance over nine competing distributions in goodness-of-fit analyses, with notable improvements in tail representation. The model facilitates accurate computation of hydrological risk metrics such as VaR, TVaR, and tail variance, uncovering pronounced temporal variations in flood risk and establishing the BHLD as a powerful and reliable tool for streamflow modeling under changing environmental conditions. Full article
(This article belongs to the Special Issue Probability Theory and Stochastic Processes: Theory and Applications)
35 pages, 7867 KB  
Article
Inter-Comparison of Deep Learning Models for Flood Forecasting in Ethiopia’s Upper Awash Basin
by Girma Moges Mengistu, Addisu G. Semie, Gulilat T. Diro, Natei Ermias Benti, Emiola O. Gbobaniyi and Yonas Mersha
Water 2026, 18(3), 397; https://doi.org/10.3390/w18030397 - 3 Feb 2026
Abstract
Flood events driven by climate variability and change pose significant risks for socio-economic activities in the Awash Basin, necessitating advanced forecasting tools. This study benchmarks five deep learning (DL) architectures, Convolutional Neural Network (CNN), Long Short-Term Memory (LSTM), Gated Recurrent Unit (GRU), Bidirectional LSTM (BiLSTM), and a Hybrid CNN–LSTM, for daily discharge forecasting for the Hombole catchment in the Upper Awash Basin (UAB) using 40 years of hydrometeorological observations (1981–2020). Rainfall, lagged discharge, and seasonal indicators were used as predictors. Model performance was evaluated against two baseline approaches, a conceptual HBV rainfall–runoff model as well as a climatology, using standard and hydrological metrics. Of the two baselines (climatology and HBV), the climatology showed limited skill with large bias and negative NSE, whereas the HBV model achieved moderate skill (NSE = 0.64 and KGE = 0.82). In contrast, all DL models substantially improved predictive performance, achieving test NSE values above 0.83 and low overall bias. Among them, the Hybrid CNN–LSTM provided the most balanced performance, combining local temporal feature extraction with long-term memory and yielding stable efficiency (NSE ≈ 0.84, KGE ≈ 0.90, and PBIAS ≈ −2%) across flow regimes. The LSTM and GRU models performed comparably, offering strong temporal learning and robust daily predictions, while BiLSTM improved flood timing through bidirectional sequence modeling. The CNN captured short-term variability effectively but showed weaker representation of extreme peaks. Analysis of peak-flow metrics revealed systematic underestimation of extreme discharge magnitudes across all models. However, a post-processing flow-regime classification based on discharge quantiles demonstrated high extreme-event detection skill, with deep learning models exceeding 89% accuracy in identifying extreme-flow occurrences on the test set. 
These findings indicate that, while magnitude errors remain for rare floods, DL models reliably discriminate flood regimes relevant for early warning. Overall, the results show that deep learning models provide clear improvements over climatology and conceptual baselines for daily streamflow forecasting in the UAB, while highlighting remaining challenges in peak-flow magnitude prediction. The study indicates promising results for the integration of deep learning methods into flood early-warning workflows; however, these results could be further improved by adopting a probabilistic forecasting framework that accounts for model uncertainty. Full article
(This article belongs to the Section Hydrology)

22 pages, 8868 KB  
Article
Constructing China’s Annual High-Resolution Gridded GDP Dataset (2000–2021) Using Cross-Scale Feature Extraction and Stacked Ensemble Learning
by Fuliang Deng, Zhicheng Fan, Mei Sun, Shuimei Fu, Xin Cao, Ying Yuan, Wei Liu and Lanhui Li
Sustainability 2026, 18(3), 1558; https://doi.org/10.3390/su18031558 - 3 Feb 2026
Abstract
Gross Domestic Product (GDP) serves as a core indicator for measuring the sustainable economic development of countries and regions. Accurate understanding of its spatio-temporal distribution is crucial for achieving the United Nations Sustainable Development Goals (SDGs). However, current grid-based GDP data for China’s regions predominantly consists of data from specific years, making it difficult to capture fine-grained changes in economic development. To address this, this study proposes a spatial GDP framework integrating cross-scale feature extraction (CSFs) with stacked ensemble learning. Based on China’s county-level GDP statistics and multi-source auxiliary data, it first generates a density-weighted estimation layer. This is then processed through dasymetric mapping to produce China’s Annual High-Resolution Gridded GDP Dataset (CA_GDP) from 2000 to 2021. Evaluation demonstrates the framework’s superior performance in density weight estimation, achieving an R2 of 0.82 against statistical data. Compared to traditional single models like Random Forests (RF), it improves R2 by 13–54%, reduces mean absolute error (MAE) by 2–26%, and lowers root mean square error (RMSE) by 19–39%, with these advantages remaining stable across time series. The dasymetric mapping of the CA_GDP dataset clearly depicts the economic development patterns and urban agglomeration effects in the southeastern coastal regions, as well as the relatively lagging economic development in western areas. Compared to existing public datasets, CA_GDP offers significant advantages in reflecting the fine-grained economic spatial structure within county-level units, providing a more reliable data foundation for identifying regional economic disparities, policy formulation and evaluation, and related research. Full article
15 pages, 4087 KB  
Article
Automatic Identification of Lower-Limb Neuromuscular Activation Patterns During Gait Using a Textile Wearable Multisensor System
by Federica Amitrano, Armando Coccia, Federico Colelli Riano, Gaetano Pagano, Arcangelo Biancardi, Ernesto Losavio and Giovanni D’Addio
Sensors 2026, 26(3), 997; https://doi.org/10.3390/s26030997 - 3 Feb 2026
Abstract
Wearable sensing technologies are increasingly used to assess neuromuscular function during daily-life activities. This study presents and evaluates a multisensor wearable system integrating a textile-based surface Electromyography (sEMG) sleeve and a pressure-sensing insole for monitoring Tibialis Anterior (TA) and Gastrocnemius Lateralis (GL) activation during gait. Eleven healthy adults performed overground walking trials while synchronised sEMG and plantar pressure signals were collected and processed using a dedicated algorithm for detecting activation intervals across gait cycles. All participants completed the walking protocol without discomfort, and the system provided stable recordings suitable for further analysis. The detected activation patterns showed one to four bursts per gait cycle, with consistent TA activity in terminal swing and GL activity in mid- to terminal stance. Additional short bursts were observed in early stance, pre-swing, and mid-stance depending on the pattern. The area under the sEMG envelope and the temporal features of each burst exhibited both inter- and intra-subject variability, consistent with known physiological modulation of gait-related muscle activity. The results demonstrate the feasibility of the proposed multisensor system for characterising muscle activation during walking. Its comfort, signal quality, and ease of integration encourage further applications in clinical gait assessment and remote monitoring. Future work will focus on system optimisation, simplified donning procedures, and validation in larger cohorts and populations with gait impairments. Full article
(This article belongs to the Special Issue Advancing Human Gait Monitoring with Wearable Sensors)
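The activation-interval detection described in the abstract — rectifying the sEMG, smoothing it into an envelope, and extracting bursts per gait cycle — can be sketched as follows. This is a generic envelope-thresholding sketch, not the paper's dedicated algorithm; the 100 ms smoothing window and the threshold value are illustrative assumptions.

```python
import numpy as np

def emg_envelope(signal, fs, win_ms=100):
    """Rectify the raw sEMG and smooth it with a moving-average window."""
    rectified = np.abs(signal)
    win = max(1, int(fs * win_ms / 1000))
    kernel = np.ones(win) / win
    return np.convolve(rectified, kernel, mode="same")

def detect_bursts(envelope, threshold):
    """Return (start, end) sample indices of intervals where the envelope
    exceeds the threshold, i.e. candidate muscle-activation bursts."""
    active = envelope > threshold
    edges = np.diff(active.astype(int))
    starts = list(np.where(edges == 1)[0] + 1)
    ends = list(np.where(edges == -1)[0] + 1)
    if active[0]:
        starts.insert(0, 0)
    if active[-1]:
        ends.append(len(active))
    return list(zip(starts, ends))
```

The area under the envelope within each detected interval would then give the per-burst amplitude feature the abstract refers to; a practical threshold is usually set relative to baseline noise rather than fixed a priori.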
12 pages, 827 KB  
Proceeding Paper
Mine Water Inrush Propagation Modeling and Evacuation Route Optimization
by Xuemei Yu, Hongguan Wu, Jingyi Pan and Yihang Liu
Eng. Proc. 2025, 120(1), 40; https://doi.org/10.3390/engproc2025120040 - 3 Feb 2026
Abstract
We modeled water inrush propagation in mines and optimized evacuation routes. By constructing a water flow model, the propagation process of water flow through the tunnel network is simulated to explore branching, superposition, and water level changes. The model was constructed based on breadth-first search (BFS) and a time-stepping algorithm. Furthermore, by integrating Dijkstra’s algorithm with a spatio-temporal expanded graph, miners’ evacuation routes were planned, optimizing travel time and water level risk. In scenarios with multiple water inrush points, we developed a multi-source asynchronous model that enhances route safety and real-time performance, enabling efficient emergency response during mine water disasters. For Problem 1 defined in this study, a graph structure and BFS algorithm were used to calculate the filling time of tunnels at a single water inrush point. For Problem 2, we combined the water propagation model with dynamic evacuation route planning, realizing dynamic escape via a spatio-temporal state network and Dijkstra’s algorithm. For Problem 3, we constructed a multi-source asynchronous water inrush dynamic network model to determine the superposition and propagation of water flows from multiple inrush points. For Problem 4, we established a multi-objective evacuation route optimization model, utilizing a time-expanded graph and a dynamic Dijkstra’s algorithm to integrate travel time and water level risk for personalized evacuation decision-making. Full article
(This article belongs to the Proceedings of 8th International Conference on Knowledge Innovation and Invention)
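The Problem 1 computation — BFS over the tunnel graph from a single inrush point — can be sketched as below. The assumption that every tunnel segment takes one uniform time step to fill is a deliberate simplification; the paper's model additionally time-steps water levels and handles branching and superposition, and unequal tunnel volumes would call for Dijkstra's algorithm rather than plain BFS.

```python
from collections import deque

def tunnel_fill_times(adjacency, inrush_node):
    """Breadth-first propagation of water from a single inrush point.
    Each tunnel segment is assumed to take one uniform time step to fill,
    so BFS depth equals the earliest fill time of each junction."""
    fill_time = {inrush_node: 0}
    queue = deque([inrush_node])
    while queue:
        node = queue.popleft()
        for neighbour in adjacency[node]:
            if neighbour not in fill_time:
                fill_time[neighbour] = fill_time[node] + 1
                queue.append(neighbour)
    return fill_time
```

For the evacuation side (Problems 2 and 4), the same graph would be expanded over discrete time steps, with a junction–time node pruned whenever its predicted water level makes it impassable, and Dijkstra's algorithm run over the expanded graph.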
23 pages, 15685 KB  
Article
Multi-Stage Temporal Learning for Climate-Resilient Photovoltaic Forecasting During ENSO Transitions
by Xin Wen, Zhuoqun Li, Xiang Dou, Weimiao Zhang and Jiaqi Liu
Energies 2026, 19(3), 791; https://doi.org/10.3390/en19030791 - 3 Feb 2026
Abstract
Accurate photovoltaic (PV) power forecasting under extreme weather conditions remains challenging due to the non-stationary and multi-modal nature of meteorological influences. This study proposes a novel four-stage learning framework integrating signal decomposition, hyperparameter optimization, temporal dependency learning, and residual compensation to enhance forecasting resilience during El Niño–Southern Oscillation (ENSO) climate transitions. The framework employs CEEMDAN for fluctuation mode decoupling, TOC for global hyperparameter optimization, a Transformer model for spatiotemporal dependency learning, and EEMD-GRU for error correction. Experimental validation utilized a comprehensive dataset from Australia’s Yulara power station comprising 104,269 samples at 5 min resolution throughout 2024, covering a complete ENSO transition period. Compared against baseline Transformer and CNN-BiLSTM models, the proposed framework achieved nRMSE of 1.08%, 7.04%, and 2.81% under sunny, rainy, and sandstorm conditions, respectively, with corresponding R2 values of 0.99981, 0.99782, and 0.99947. Cross-year validation (2023 to 2025) demonstrated maintained performance with nRMSE ranging from 4.68% to 15.88% across different temporal splits. The framework’s modular architecture enables targeted handling of distinct physical processes governing different weather regimes, providing a structured approach for climate-resilient PV forecasting that maintains 2.56% energy consistency error while adapting to rapid meteorological shifts. Full article
(This article belongs to the Section F5: Artificial Intelligence and Smart Energy)
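The nRMSE and R² figures quoted above follow standard definitions, which can be computed as below. nRMSE requires a normalization scale (installed capacity, mean, or range of the measured power); the abstract does not state which one the authors used, so the range-based default here is an assumption.

```python
import numpy as np

def nrmse(y_true, y_pred, norm=None):
    """Root-mean-square error normalized by a reference scale.
    The default scale is the range of y_true; the paper does not
    state its normalization, so this choice is an assumption."""
    rmse = np.sqrt(np.mean((y_true - y_pred) ** 2))
    if norm is None:
        norm = np.ptp(y_true)
    return rmse / norm

def r2_score(y_true, y_pred):
    """Coefficient of determination R^2."""
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)
    return 1.0 - ss_res / ss_tot
```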
23 pages, 13805 KB  
Article
Systemic Inflammation Aggravates Retinal Ganglion Cell Vulnerability to Optic Nerve Trauma in Adult Rats
by Giuseppe Rovere, Yolanda Caja-Matas, Beatriz Vidal-Villegas, José M. Bernal-Garro, Paloma Sobrado-Calvo, Manuel Salinas-Navarro, Carlo Nucci, María Paz Villegas-Pérez, Manuel Vidal-Sanz, Marta Agudo-Barriuso and Francisco M. Nadal-Nicolás
Int. J. Mol. Sci. 2026, 27(3), 1502; https://doi.org/10.3390/ijms27031502 - 3 Feb 2026
Abstract
Systemic inflammation is increasingly recognized as a modifier of neurodegenerative outcomes in the central nervous system; however, its impact on retinal ganglion cell (RGC) survival and retinal microglial responses following optic nerve (ON) injury in vivo remains incompletely understood. In this study, we investigated how systemic lipopolysaccharide (LPS)-induced inflammation influences retinal microglial activation and RGC vulnerability under physiological conditions and after traumatic ON damage. In adult female rats, systemic LPS administration by intraperitoneal injection induced rapid and robust microglial activation, characterized by process retraction and soma hypertrophy within hours, and promoted microglial proliferation at later stages, without causing RGC loss in intact retinas. Following ON crush, systemic inflammation did not affect early RGC degeneration but significantly exacerbated neuronal loss during the late acute phase. This increased vulnerability was accompanied by a marked rise in microglial density and a pronounced redistribution of microglia toward the central retina and the ON head, a region of heightened anatomical and metabolic susceptibility. Together, these findings demonstrate that, in rats, systemic inflammation alone is insufficient to induce RGC degeneration but acts as a potent priming factor that amplifies neurodegeneration in the context of axonal injury. The temporal and spatial specificity of microglial responses underscores their context-dependent role in retinal pathology and identifies systemic inflammatory status as a critical determinant of retinal outcome after trauma. Targeted, time-dependent modulation of microglial activation may therefore represent a promising therapeutic strategy for optic neuropathies. Full article