Search Results (5,869)

Search Parameters:
Keywords = LSTM prediction

21 pages, 1496 KB  
Article
A Decomposition-Based Deep Learning Model for Multivariate Water Quality Prediction
by Qiliang Zhu, Xueting Yu and Hongtao Fu
Sustainability 2026, 18(8), 4129; https://doi.org/10.3390/su18084129 (registering DOI) - 21 Apr 2026
Abstract
The extensive deployment of automatic water quality monitoring stations has generated substantial volumes of time-series data. Effectively utilizing these data is crucial for enhancing prediction accuracy. To address the limitations of existing models in capturing complex inter-indicator relationships and multi-scale temporal features, this paper proposes a hybrid prediction model integrating time series decomposition with deep learning techniques. Adopting a “decomposition–prediction–reconstruction” paradigm, the model first decomposes the raw time series into trend, seasonal, and residual components using STL (Seasonal–Trend decomposition using LOESS). For the trend component, an improved Graph Convolutional Network (GCN) is designed to explicitly model the spatial dependencies among different water quality indicators. For the seasonal component, the complete ensemble empirical mode decomposition with adaptive noise (CEEMDAN) method is employed for multi-scale signal analysis, followed by a coupled Long Short-Term Memory–Convolutional Neural Network (LSTM-CNN) unit to capture both long-term dependencies and local features. To validate the efficacy of the proposed model, experiments were conducted on three real-world water quality datasets from different watersheds. Experimental results demonstrate that the proposed model outperforms mainstream baseline models, including StemGCN, LSTM-CNN, CEEMDAN-LSTM-CNN, and Attention-CLX. Across the three datasets, the model consistently outperforms the best-performing baseline, achieving reductions in MAE ranging from 13.8% to 24.5% and up to a 45.3% reduction in RMSE on a single dataset, while the highest correlation coefficient between predicted and observed values reaches 0.855. 
These findings demonstrate that the proposed decomposition–integration framework effectively enhances the accuracy and stability of multivariate water quality prediction, offering a promising tool for supporting sustainable water resource management. Full article
(This article belongs to the Special Issue Advances in Management of Hydrology, Water Resources and Ecosystem)
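The "decomposition–prediction–reconstruction" pipeline above starts by splitting the series into trend, seasonal, and residual parts. As a rough sketch of that first step, here is a classical additive decomposition in plain NumPy; this is a simplification (the paper uses LOESS-based STL), and `period` is an assumed input:

```python
import numpy as np

def decompose(series, period):
    """Classical additive decomposition: trend (centred moving average),
    seasonal (per-phase means of the detrended series), and residual.
    A simplified stand-in for STL, which smooths with LOESS instead."""
    series = np.asarray(series, dtype=float)
    n = len(series)
    # Moving average as the trend estimate.
    kernel = np.ones(period) / period
    trend = np.convolve(series, kernel, mode="same")
    detrended = series - trend
    # Seasonal component: mean of each phase position within the period.
    seasonal = np.array([detrended[i::period].mean() for i in range(period)])
    seasonal = np.tile(seasonal, n // period + 1)[:n]
    # Residual is whatever the other two components do not explain.
    residual = series - trend - seasonal
    return trend, seasonal, residual
```

By construction the three components sum back to the original series, which is the property the "reconstruction" stage of such models relies on.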
29 pages, 1828 KB  
Article
MSTFNet: Multi-Scale Temporal Fusion Network with Frequency-Enhanced Attention for Financial Time Series Forecasting
by Qian Xia and Wenhao Kang
Mathematics 2026, 14(8), 1391; https://doi.org/10.3390/math14081391 (registering DOI) - 21 Apr 2026
Abstract
Financial time series forecasting remains a persistent challenge due to the non-stationary nature, inherent noise, and multi-scale temporal dependencies present in market data. This paper presents MSTFNet, a multi-scale temporal fusion network that combines dilated causal convolutions with a frequency-enhanced sparse attention mechanism for improved financial prediction. The proposed architecture consists of three core components: a multi-scale dilated causal convolution module that extracts temporal patterns across different time horizons through parallel convolutional branches with varying dilation rates, a frequency-enhanced sparse attention mechanism that leverages Fast Fourier Transform to identify dominant periodic components and modulate attention weights accordingly, and an adaptive scale fusion gate that learns to dynamically combine representations from multiple temporal scales. Extensive experiments conducted on three public financial datasets (S&P 500, CSI 300, and NASDAQ Composite) spanning the period from January 2015 to December 2024 show two key results. First, consistent with near-efficient markets, the random-walk benchmark (ŷ_{t+1} = y_t) outperforms all the data-driven models on level-error metrics (MAE, RMSE, MAPE, and R²), establishing the martingale as the binding lower bound on point-prediction error. Second, MSTFNet achieves the highest directional accuracy (DA) across all three indices—56.3% on the S&P 500 versus 50.0% for the martingale—representing a 6.3 percentage-point improvement that generates positive pre-cost returns in a trading strategy backtest. Among the eight data-driven baselines (LSTM, GRU, TCN, Transformer, Autoformer, FEDformer, PatchTST, and iTransformer), MSTFNet also achieves the lowest MAE, reducing it by 13.6% relative to the strongest data-driven baseline (iTransformer) on the S&P 500.
These results confirm that integrating multi-scale temporal modeling with frequency-domain guidance extracts a real, if modest, directional signal from financial time series. Full article
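The frequency-guidance idea above rests on using the FFT to find dominant periodic components. A minimal sketch of that detection step (the attention modulation itself is not reproduced here, and `k` is an assumed parameter):

```python
import numpy as np

def dominant_periods(x, k=2):
    """Return the k dominant periods (in samples) of a series, found by
    ranking FFT magnitudes of the mean-removed signal. Only the
    frequency-identification step of MSTFNet's mechanism is sketched."""
    x = np.asarray(x, dtype=float)
    spec = np.abs(np.fft.rfft(x - x.mean()))
    freqs = np.fft.rfftfreq(len(x))
    order = np.argsort(spec)[::-1]          # bins by descending magnitude
    top = [i for i in order if freqs[i] > 0][:k]  # skip the DC bin
    return [int(round(1.0 / float(freqs[i]))) for i in top]
```

For a clean sinusoid of period 16, the function recovers 16; on market data the top bins would instead drive how attention weights are modulated.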
47 pages, 7226 KB  
Article
Temporal and Behaviour-Aware Multimodal Modelling for Hour-Ahead Hypoglycaemia Prediction During Ramadan Fasting in Type 1 Diabetes
by Mais Alkhateeb, Rawan AlSaad, Samir Brahim Belhaouari, Sarah Aziz, Arfan Ahmed, Hamda Ali, Dabia Al-Mohanadi, Kawsar Mohamud, Najla Al-Naimi, Arwa Alsaud, Hamad Al-Sharshani, Javaid I. Sheikh, Khaled Baagar and Alaa Abd-Alrazaq
Sensors 2026, 26(8), 2552; https://doi.org/10.3390/s26082552 (registering DOI) - 21 Apr 2026
Abstract
Ramadan fasting substantially alters meal timing, sleep patterns, and daily activity, thereby increasing the risk of hypoglycaemia in adults with type 1 diabetes (T1D). Although continuous glucose monitoring (CGM) systems provide real-time alerts, these are largely reactive or limited to short prediction horizons, offering insufficient warning under fasting-related behavioural and circadian disruption. This study aims to evaluate whether behaviour-aware, temporally enriched recurrent deep learning models, leveraging multimodal CGM and wearable-derived signals, can forecast hypoglycaemia one hour ahead during Ramadan and the post-fasting period. In an observational, free-living cohort study conducted in Qatar, 33 adults with T1D were monitored using CGM and a wrist-worn wearable during Ramadan 2023 and the subsequent month. Multimodal data were aggregated into hourly features and organised into rolling 36 h sequences. In addition to physiological signals, explicit temporal and circadian proxy features were engineered, including cyclic time encodings, day–night indicators, and Ramadan-specific behavioural windows (e.g., pre-iftar, iftar, post-iftar, and fasting phases). Recurrent models, including LSTM and BiLSTM architectures, were trained using patient-wise, leak-free splits, with focal loss applied to address class imbalance. Model performance was evaluated on a held-out, naturally imbalanced test set using ROC AUC, precision–recall AUC, recall, and probability calibration, alongside cross-phase evaluation between Ramadan and post-fasting periods. Following quality control, 1164 participant-days were retained, with hypoglycaemia accounting for approximately 4% of hourly observations. Temporal feature enrichment and the use of a 36 h lookback window improved both discrimination and calibration, with performance stabilizing beyond this horizon. 
On the imbalanced test set, the best-performing multimodal model achieved an ROC AUC of 0.867 and a precision–recall AUC of 0.341, identifying 77% of next-hour hypoglycaemic events at a sensitivity-focused operating point (precision = 0.14). The selected BiLSTM model demonstrated good probability calibration (Brier score ≈ 0.03). Models trained using wearable-derived inputs alone achieved comparable discrimination and, in some configurations, higher precision–recall AUC than CGM-only baselines. Notably, models trained on the original imbalanced data outperformed resampled variants, suggesting that temporal and behavioural features provided sufficient discriminatory signal without requiring aggressive class balancing. Cross-phase evaluation indicated robust generalisation, particularly for the BiLSTM model. Overall, behaviour-aware, temporally enriched multimodal models can provide calibrated, hour-ahead hypoglycaemia risk estimates during Ramadan fasting in adults with T1D, enabling proactive intervention beyond reactive CGM alerts. Explicit modelling of circadian and behavioural dynamics enhances predictive performance under real-world class imbalance. Furthermore, integrating wearable-derived behavioural and physiological signals adds predictive value beyond CGM alone, supporting robustness across varying levels of contextual data availability. External validation and prospective clinical evaluation are required prior to deployment. Full article
(This article belongs to the Special Issue AI and Big Data Analytics for Medical E-Diagnosis)
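The abstract notes that focal loss was applied to handle the roughly 4% positive rate. A minimal sketch of binary focal loss for a single prediction; the `gamma` and `alpha` values below are the common defaults, not necessarily the paper's settings:

```python
import math

def focal_loss(p, y, gamma=2.0, alpha=0.25):
    """Binary focal loss for one predicted probability p and label y.
    The (1 - p_t)**gamma factor down-weights easy examples so the rare
    hypoglycaemia class dominates the gradient."""
    p_t = p if y == 1 else 1.0 - p          # probability of the true class
    a_t = alpha if y == 1 else 1.0 - alpha  # class-balancing weight
    return -a_t * (1.0 - p_t) ** gamma * math.log(p_t)
```

A confidently correct prediction contributes almost nothing, while a confident miss is penalised heavily, which is the intended rebalancing effect.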
38 pages, 4749 KB  
Article
Load Prediction Method for the Elastic Tooth Drum-Type Pepper Harvester Based on GARCH-KPCA-ATLSTM
by Jianglong Zhang, Jin Lei, Xinyan Qin, Lijian Lu, Zhi Wang and Jiaxuan Yang
Appl. Sci. 2026, 16(8), 4021; https://doi.org/10.3390/app16084021 (registering DOI) - 21 Apr 2026
Abstract
The load of the elastic tooth drum-type pepper harvester is a key parameter affecting harvesting efficiency and quality. Real-time analysis and prediction of drum load are crucial for stabilizing harvester operation and optimizing performance. Existing research focuses on either machine vision-based image analysis, which is difficult to collect in the field, or parameter-mapping methods, which suffer from time lag. This study proposes a GARCH-KPCA-ATLSTM method for load prediction, combining the generalized autoregressive conditional heteroskedasticity (GARCH) model, kernel principal component analysis (KPCA), and attention-enhanced long short-term memory (ATLSTM). Empirical mode decomposition (EMD) is first applied to denoise and reconstruct the load signal, removing mechanical vibration and other interferences. Conditional heteroskedasticity is confirmed, and the GARCH series (one symmetric and three asymmetric models) is introduced to extract fluctuation features. KPCA reduces dimensionality, removing redundant information and saving 2.91 s in computation while slightly improving accuracy. Additive attention in LSTM emphasizes critical information, enhancing learning of nonlinear relationships and further improving prediction. Comparative experiments demonstrate the model’s reliability. The method achieves RMSE = 0.911, MAE = 0.682, MBE = −0.025, MAPE = 1.147%, R² = 0.968, with a runtime of 2.023 s, confirming high accuracy and stability. This study provides a theoretical and technical foundation for real-time load prediction of pepper harvesters. Full article
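The attention step described above can be sketched in scalar form: additive attention scores each LSTM hidden state against a query, softmaxes the scores, and pools a weighted context. This is a deliberately minimal, scalar-state illustration; the single weight `w` and the use of a shared query are assumptions, not the paper's parametrisation:

```python
import math

def additive_attention(hiddens, query, w=1.0):
    """Additive-attention pooling over scalar hidden states: tanh scores,
    softmax weights, weighted-sum context. Vector states and learned
    projection matrices are omitted for brevity."""
    scores = [math.tanh(w * (h + query)) for h in hiddens]
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    weights = [e / total for e in exps]
    context = sum(a * h for a, h in zip(weights, hiddens))
    return context, weights
```

The weights always sum to one, so the context is a convex combination of the hidden states, with larger states receiving more emphasis under this scoring.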
23 pages, 2037 KB  
Article
Sustainable Water Allocation in Karst Regions: A Multi-Objective Framework Integrating Ecological Flow and Intelligent Demand Forecasting
by Yunfa Gao, Ming Zhong, Jie Xu and Guang Yang
Sustainability 2026, 18(8), 4108; https://doi.org/10.3390/su18084108 (registering DOI) - 21 Apr 2026
Abstract
In ecologically fragile karst regions, surface water leakage and spatial mismatches between supply and demand exacerbate water scarcity and ecosystem degradation. In this context, sustainable water resource allocation is of great significance for achieving the United Nations Sustainable Development Goals (SDGs). This study proposes a Dual-stage Prediction and Optimization Coupled Allocation Model (DPOCAM), which integrates an LSTM–Transformer-based intelligent water demand forecasting model with the NSGA-III multi-objective optimization algorithm. The forecasting model was trained on data from 2001 to 2020 and tested on data from 2021 to 2024, achieving a mean absolute percentage error of 2.89%. The model incorporates ecological water demand as an independent optimization objective, quantified using the Tennant method, aiming to coordinate the relationship between domestic and productive water use with aquatic ecosystem protection. Applied to Sinan County, a typical karst area in Guizhou Province, China, the model projects sectoral water demands for 2035 and conducts water resource allocation based on water network planning. Results show that under the current water network, the comprehensive water shortage rate reaches 17.7%, with ecological deficit accounting for 10.1%, posing dual threats to human water security and ecosystem integrity. Following the planned construction of a water network centered on the Huatanzi Reservoir, the overall shortage rate drops to 0.6%, and the ecological deficit declines to 4.6%, demonstrating significant improvements in both water supply reliability and ecological flow guarantee. The water network construction plays a positive role in reducing water shortage rates and enhancing ecological flow protection, providing a scientific basis and practical reference for sustainable water resource management in karst regions. Full article
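The abstract reports comprehensive shortage rates (17.7% before the planned network, 0.6% after). As a minimal sketch of how such a rate is scored once an allocation is fixed; the sector names and volumes below are hypothetical, and the actual allocation in the paper comes from NSGA-III, not from this calculation:

```python
def shortage_rate(supply, demands):
    """Comprehensive water shortage rate: unmet demand as a share of
    total demand. `demands` maps sector -> demand in the same volume
    units as `supply` (e.g. million m^3)."""
    total = sum(demands.values())
    return max(total - supply, 0.0) / total
```

With 90 units of supply against 100 units of total sectoral demand, the rate is 10%; a surplus yields zero rather than a negative rate.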
25 pages, 1521 KB  
Article
Comparative Evaluation of Deep-Learning and SARIMA Models for Short-Term Residential PV Power Forecasting
by Kalsoom Bano, Vishnu Suresh, Francesco Montana and Przemyslaw Janik
Energies 2026, 19(8), 1991; https://doi.org/10.3390/en19081991 (registering DOI) - 20 Apr 2026
Abstract
Accurate photovoltaic (PV) power forecasting is essential for the efficient operation of residential energy systems and microgrids, as reliable short-term predictions enable improved energy scheduling, demand management, and operational planning in distributed energy environments. In this study, one-hour-ahead forecasting of residential PV power generation is investigated using real-world data collected from multiple households within an Irish energy community. Several deep-learning architectures, including long short-term memory (LSTM), gated recurrent unit (GRU), convolutional neural networks (CNN), CNN–LSTM hybrid networks, and attention-based LSTM models, are evaluated and compared with a seasonal autoregressive integrated moving average (SARIMA) statistical model. A sliding-window approach is employed to transform the PV time series into a supervised learning problem. To ensure statistical robustness, deep-learning models are evaluated using a multi-run framework, and results are reported as mean ± standard deviation based on MAE, RMSE, MAPE, and R² metrics across multiple households. The results indicate that deep-learning models achieve consistently strong forecasting performance, with GRU frequently providing the most reliable predictions across several households. For instance, in House 5, GRU achieved an RMSE of 142.02 ± 1.87 W and an R² of 0.694 ± 0.008, while in Houses 11 and 13 it attained R² values of 0.837 ± 0.002 and 0.835 ± 0.08, respectively. However, performance varied across households, reflecting the influence of data variability and generation patterns on model effectiveness. In comparison, the SARIMA model demonstrated competitive performance and, in certain cases, outperformed deep-learning models. For example, in House 4, it achieved the lowest RMSE of 90.68 W and the highest R² of 0.709.
Overall, these findings highlight that while deep-learning models offer greater adaptability and stability, statistical models remain effective for more regular PV generation patterns. Consequently, the study emphasizes the importance of evaluating forecasting models under realistic household-level conditions and demonstrates that both deep-learning and statistical approaches can provide short-term PV forecasting. Full article
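The sliding-window transformation mentioned above is the standard way to recast a time series as supervised pairs: each input holds the last `lookback` observations and the target is the next value. A minimal sketch (the lookback length is an assumed parameter):

```python
import numpy as np

def sliding_window(series, lookback):
    """Turn a univariate series into (X, y) pairs for one-step-ahead
    supervised learning: X[i] holds `lookback` past values and y[i] the
    value immediately after them."""
    series = np.asarray(series, dtype=float)
    X = np.stack([series[i:i + lookback]
                  for i in range(len(series) - lookback)])
    y = series[lookback:]
    return X, y
```

A series of length 10 with a lookback of 3 yields 7 training pairs; the first input is the first three values and its target is the fourth.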
38 pages, 4167 KB  
Article
Sustainable Operational Decision-Making for Thermal Power Enterprises’ Carbon Assets Oriented Toward Medium- and Long-Term Risk Exposure
by Ying Kuai, Yue Liu, Wu Wan, Boyan Zou and Yao Qin
Sustainability 2026, 18(8), 4094; https://doi.org/10.3390/su18084094 - 20 Apr 2026
Abstract
Against the background of deepening “dual carbon” goals and the continuously tightening policies of the national carbon market, the carbon asset risks faced by thermal power enterprises have shifted from short-term compliance cost fluctuations to medium- and long-term systemic risks. Managing these risks effectively is essential for ensuring the financial viability of thermal power operations during the low-carbon transition, thereby supporting the long-term sustainability of the energy sector. This study constructs a risk management framework for carbon assets in thermal power enterprises based on the LSTM model and option portfolios. First, the multi-dimensional characteristics of medium- and long-term carbon asset risks are systematically identified at the policy, market, and enterprise levels. Second, a dual-layer LSTM model with Dropout regularization is employed to simulate medium- and long-term carbon prices. The prediction results indicate a moderate upward trend in future carbon prices, with the fluctuation range gradually narrowing. On this basis, a combined hedging strategy of “core call options + auxiliary put options” is designed, capping the maximum procurement cost at 72.63 CNY/ton and covering over 90% of the risk of carbon price increases. Monte Carlo simulations and rolling window backtesting, conducted using operational data from a thermal power enterprise to validate the framework, verify the effectiveness and robustness of the strategy. The study shows that, through the integration of accurate LSTM predictions and proactive option hedging, thermal power enterprises can transform their carbon asset management from passive compliance to active value creation, thereby enhancing their operational sustainability and resilience during the energy transition. Full article
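The cost cap in the abstract follows from the basic mechanics of a call-option hedge: the buyer pays the market price, recovers the option payoff, and adds the premium, so the effective cost never exceeds strike + premium. In the sketch below, the strike of 70 CNY/ton and premium of 2.63 CNY/ton are hypothetical values chosen only so the cap lands at the abstract's 72.63 CNY/ton; the actual contract terms are not given:

```python
def hedged_cost(spot, strike, premium):
    """Effective procurement cost per ton when holding one call option
    per ton: spot price minus the option payoff, plus the premium paid.
    The result is capped at strike + premium for any spot above strike."""
    payoff = max(spot - strike, 0.0)  # call pays off when spot > strike
    return spot - payoff + premium
```

If the carbon price spikes to 100, the hedged cost stays at 72.63; if it falls to 60, the holder simply pays 60 plus the premium.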
20 pages, 1480 KB  
Article
DAGH-Net: A Density-Adaptive Gated Hybrid Knowledge Graph Network for Pedestrian Trajectory Prediction
by Feiyang Xu, Bin Zhang and Yaqing Liu
Electronics 2026, 15(8), 1738; https://doi.org/10.3390/electronics15081738 - 20 Apr 2026
Abstract
Pedestrian trajectory prediction is a fundamental task in autonomous driving and mobile robotics, where accurate forecasting requires modeling of both social interactions and scene-related constraints. However, existing methods typically rely on a fixed interaction modeling strategy, which may be insufficient under heterogeneous crowd densities. To address this limitation, we propose DAGH-Net, a density-adaptive gated hybrid network for pedestrian trajectory prediction. Built upon an SR-LSTM (State Refinement for LSTM) backbone, the proposed framework integrates two complementary reasoning pathways: a data-driven social interaction branch and a hybrid knowledge graph branch that encodes structured relational priors among pedestrians, obstacles, and walkable regions. A local-density-conditioned gating mechanism is further introduced to adaptively fuse these features according to the surrounding crowd condition of each pedestrian. This design helps suppress redundant interaction cues in sparse settings while strengthening socially compliant and scene-consistent reasoning in dense or conflict-prone environments. Experimental results on the ETH (Eidgenössische Technische Hochschule Zürich) and UCY (University of Cyprus) benchmarks, evaluated using Mean Average Displacement (MAD) and Final Average Displacement (FAD), show that DAGH-Net improves the average MAD and FAD by 1.6% and 4.2%, respectively, compared with SR-LSTM. Ablation studies further support the complementary contributions of the hybrid knowledge graph and the density-adaptive gating mechanism. We also discuss the limitations of the current density formulation and benchmark scale, which suggest several directions for future improvement. Full article
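The density-conditioned gating described above can be sketched as a sigmoid of local density blending the two feature pathways per pedestrian. The gate parameters `k` and `d0` below are hypothetical; the paper learns its gating rather than fixing it:

```python
import math

def fuse(social_feat, graph_feat, local_density, k=1.0, d0=3.0):
    """Density-adaptive fusion: a sigmoid gate g in (0, 1) grows with
    local crowd density, so dense scenes lean on the data-driven social
    branch and sparse scenes on the knowledge-graph branch."""
    g = 1.0 / (1.0 + math.exp(-k * (local_density - d0)))
    return [g * s + (1.0 - g) * h for s, h in zip(social_feat, graph_feat)]
```

At the pivot density `d0` the two branches are weighted equally; well above it, the social branch dominates, matching the design intent of suppressing redundant interaction cues in sparse settings.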
35 pages, 2051 KB  
Article
Leakage-Controlled Horizon-Specific Model Selection for Daily Equity Forecasting: An Automated Multi-Model Pipeline
by Francisco Augusto Nuñez Perez, Francisco Javier Aguilar Mosqueda, Adrian Ramos Cuevas, Jaqueline Muñoz Beltran and Jose Cruz Nuñez Perez
Forecasting 2026, 8(2), 34; https://doi.org/10.3390/forecast8020034 - 20 Apr 2026
Abstract
Short-horizon equity forecasting remains challenging because daily prices are noisy, heavy-tailed, and subject to structural breaks and regime shifts. We develop a fully automated, reproducible, and leakage-controlled multi-model pipeline for daily forecasting with horizon-specific configuration selection. The task is formulated as predicting cumulative H-day log-returns from OHLCV-derived information and converting them to implied price forecasts. All model families share a homologated design: causal feature construction, a strictly chronological split with an explicit purging rule to prevent label-window overlap for multi-day targets, training-only robustification (winsorization and adaptive clipping), and a unified metric suite computed consistently in return and price spaces. The framework benchmarks transparent baselines (zero- and mean-return), gradient-boosted trees (XGBoost), and deep temporal models (LSTM and CNN/TCN). Lookback length L ∈ {60, 180, 500} is selected via an internal walk-forward procedure on the pre-evaluation block, and final performance is reported on an external hold-out segment (last 15% of instances). Experiments on daily data for MT, DELL, and the S&P 500 index (through 3 February 2026) show that all families achieve similarly strong price-level fit at H=1, largely driven by persistence in the price process, while separation across families becomes more visible at H=5. However, predictive performance in return space remains weak, with R² close to zero or negative, and Diebold–Mariano tests do not provide consistent evidence of statistical superiority over naive benchmarks. Under an operational rule that minimizes hold-out RMSE on the price scale, selected models are asset- and horizon-dependent, supporting horizon-wise selection rather than a single global architecture.
Overall, the primary contribution lies in the proposed leakage-controlled evaluation and benchmarking framework rather than in demonstrating consistent predictive gains in financial time series forecasting. Full article
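The purging rule above exists because an H-day cumulative-return label computed near the split boundary overlaps the test period. A minimal sketch of a chronological split with a purge gap; the 15% hold-out fraction matches the abstract, while the index-based formulation is an illustrative simplification:

```python
def purged_split(n, horizon, test_frac=0.15):
    """Chronological train/test index lists with a purge gap of `horizon`
    samples before the test block, so no training label window overlaps
    test-period data (leakage control for multi-day targets)."""
    n_test = int(n * test_frac)
    test_start = n - n_test
    train_end = test_start - horizon  # drop samples whose labels leak
    return list(range(train_end)), list(range(test_start, n))
```

For 100 samples and a 5-day horizon, the test block is indices 85–99 and training stops at index 79, leaving a 5-sample gap whose labels would otherwise straddle the boundary.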
31 pages, 2109 KB  
Article
Evaluating Neural Networks Architectures for Competency Prediction from Process Data Using PISA Computer-Based Mathematics Assessment
by Huan Kuang
J. Intell. 2026, 14(4), 70; https://doi.org/10.3390/jintelligence14040070 - 20 Apr 2026
Abstract
Computer-based assessments generate rich process data that captures examinees’ interactions with test items. Using process data from the U.S. PISA 2012 computer-based mathematics assessment sample, this study applied recurrent neural networks to predict item-level correctness and assessment-level latent proficiency. The analysis also examines the impact of expert-engineered features, levels of architectural complexity, action variability, and score variability on model performance. At the item level, most models achieved AUC values around 0.80, indicating good predictive performance. Moderate correlations were observed between latent proficiency from 30 items and predictions based on process data from a subset of items (n = 10). For item-level models, adding expert-engineered features reduces training time and may improve predictive performance with low action variability. For the assessment-level models, adding expert-engineered features improved performance. Model complexity, including model type (i.e., standard RNN, GRU, and LSTM), number of nodes, and number of layers, had little effect on accuracy and efficiency. Moreover, items with greater action variability were associated with better model performance. The findings suggest that simple neural network architectures are sufficient for modeling process data with limited action variability and that combining action sequences with expert-engineered features improves accuracy, efficiency, and interpretability. Full article
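Before any RNN, GRU, or LSTM can consume process data, the variable-length action sequences must be mapped to fixed-length index arrays. A minimal sketch of that preprocessing step; the action names and the first-seen vocabulary ordering are illustrative assumptions, not PISA specifics:

```python
def encode_actions(seqs, max_len):
    """Map variable-length action sequences to fixed-length index lists
    (0 = padding), the usual preprocessing before a recurrent model.
    The vocabulary is built from the data in first-seen order."""
    vocab = {}
    for s in seqs:
        for a in s:
            vocab.setdefault(a, len(vocab) + 1)
    encoded = [[vocab[a] for a in s[:max_len]]        # truncate long runs
               + [0] * max(0, max_len - len(s))       # pad short ones
               for s in seqs]
    return encoded, vocab
```

Expert-engineered features, as the study shows, would be concatenated alongside these sequence encodings rather than replacing them.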
22 pages, 2828 KB  
Article
An Adaptive Traffic Signal Control Framework Integrating Regime-Aware LSTM Forecasting and Signal Optimization Under Socio-Temporal Demand Shifts
by Sara Atef and Ahmed Karam
Appl. Syst. Innov. 2026, 9(4), 81; https://doi.org/10.3390/asi9040081 - 20 Apr 2026
Abstract
Recurring socio-temporal events, such as Ramadan in Middle Eastern cities, introduce pronounced non-stationarity in urban traffic demand. During these periods, daytime traffic volumes typically decline, while congestion becomes more severe in the evening around the Iftar (fast-breaking) period and persists into late-night hours, making conventional fixed-time signal plans less effective. An additional challenge is that demand is not only time-varying, but also unevenly distributed across competing movements: attempts to prioritize high-volume phases can inadvertently cause excessive delays—or even starvation—on lower-demand approaches. To address these issues, this study presents an adaptive, regime-aware traffic signal control framework that combines predictive modeling with constrained optimization. Short-term phase-level delays are forecast using Long Short-Term Memory (LSTM) models, and a Model Predictive Control (MPC) scheme then determines the green time allocation at each control cycle through a receding-horizon strategy. The optimization explicitly represents phase interactions by including constraints that prevent excessive delay in competing movements, thereby yielding a balanced and operationally realistic control policy. The approach is validated with one-minute-resolution TomTom delay data from a signalized intersection in Jeddah, Saudi Arabia, covering both Normal and Ramadan conditions. The LSTM models show stable predictive performance, achieving root mean square errors (RMSEs) of 19.8 s under Normal conditions and 17.1 s during Ramadan. Overall, the proposed framework reduces total intersection delay by about 0.3% to 2.8% compared with standard control strategies. Although these total-delay improvements are modest, they are accompanied by substantial reductions in delay for lower-demand phases (about 12–20%) while keeping delay increases for higher-demand phases bounded.
This indicates that the framework improves overall efficiency by distributing delay equitably across phases rather than optimizing any single phase in isolation. The results demonstrate that combining forecasting with constrained optimization is a robust and practical approach to handling time-varying traffic demand, particularly during high-demand periods when flexibility, stability, and fairness across movements are all essential. Full article
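The green-time allocation at each cycle can be illustrated with a much simpler rule than the paper's MPC: split the cycle in proportion to predicted phase delays, subject to a minimum green that plays the role of the anti-starvation constraint. All numbers below are hypothetical:

```python
def allocate_green(pred_delays, cycle, min_green):
    """Split a cycle's green time across phases in proportion to predicted
    delay, after reserving a minimum green per phase. A simplified
    stand-in for receding-horizon MPC with anti-starvation constraints."""
    n = len(pred_delays)
    spare = cycle - n * min_green        # time left after minimum greens
    total = sum(pred_delays)
    return [min_green + spare * d / total for d in pred_delays]
```

With an 80 s cycle, a 10 s minimum green, and predicted delays of 30 s and 10 s, the phases receive 55 s and 25 s: the busy phase is prioritised, but the light phase can never be starved below its minimum.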
19 pages, 3398 KB  
Article
A Hybrid TCN-Attention-BiLSTM Framework for AIS-Based Nearshore Vessel Speed Prediction and Risk Warning
by Xin Liu, Zhaona Chen, Yu Cao and Dan Zhang
Appl. Sci. 2026, 16(8), 3978; https://doi.org/10.3390/app16083978 - 19 Apr 2026
Abstract
Accurate vessel speed prediction is essential for maritime traffic supervision, navigational safety, and intelligent coastal management. However, due to the nonlinear, time-varying, and context-dependent characteristics of vessel motion in nearshore waters, conventional single-model approaches often fail to provide sufficiently accurate forecasts. To address this issue, this study proposes a hybrid deep learning framework for Automatic Identification System (AIS)-based nearshore vessel speed prediction and risk warning, integrating a temporal convolutional network (TCN), an attention mechanism, and a bidirectional long short-term memory network (BiLSTM) into a unified architecture. The core novelty of this framework is its task-oriented sequential design, in which TCN extracts local temporal patterns and multi-scale sequence features from historical AIS observations, the attention mechanism adaptively emphasizes informative representations, and BiLSTM models bidirectional contextual dependencies in vessel motion sequences; on this basis, a speed-risk warning process is constructed by combining the predicted speed with electronic-fence threshold constraints. Experiments conducted on real AIS data from coastal waters show that the proposed method obtains lower mean absolute error (MAE), mean squared error (MSE), and root mean square error (RMSE) as well as a higher coefficient of determination (R²) than several benchmark models. The results illustrate that the proposed framework effectively improves vessel speed prediction accuracy within the studied coastal area and provides practical support for proactive maritime supervision and nearshore safety management. Full article
(This article belongs to the Section Computing and Artificial Intelligence)
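The TCN → attention → BiLSTM pipeline described in the abstract can be sketched as follows. This is a minimal illustrative PyTorch model, not the authors' implementation: layer sizes, the dilation schedule, the additive attention form, and the single-output speed head are all assumptions.

```python
import torch
import torch.nn as nn

class TCNAttnBiLSTM(nn.Module):
    """Sketch of a TCN -> attention -> BiLSTM speed predictor.
    All hyperparameters here are illustrative assumptions."""
    def __init__(self, n_features=4, hidden=32):
        super().__init__()
        # TCN stage: dilated 1D convolutions extract local multi-scale
        # temporal patterns from the AIS feature sequence
        self.tcn = nn.Sequential(
            nn.Conv1d(n_features, hidden, kernel_size=3, padding=2, dilation=2),
            nn.ReLU(),
            nn.Conv1d(hidden, hidden, kernel_size=3, padding=4, dilation=4),
            nn.ReLU(),
        )
        # Simple additive attention re-weights time steps
        self.attn = nn.Linear(hidden, 1)
        # BiLSTM stage models bidirectional context in the sequence
        self.bilstm = nn.LSTM(hidden, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, 1)  # scalar speed prediction

    def forward(self, x):                                 # x: (batch, time, features)
        h = self.tcn(x.transpose(1, 2)).transpose(1, 2)   # (batch, time, hidden)
        w = torch.softmax(self.attn(h), dim=1)            # attention weights over time
        out, _ = self.bilstm(h * w)
        return self.head(out[:, -1])                      # last step -> speed

model = TCNAttnBiLSTM()
pred = model(torch.randn(8, 24, 4))  # 8 tracks, 24 AIS steps, 4 features
print(pred.shape)                    # torch.Size([8, 1])
```

The predicted speed could then be compared against an electronic-fence threshold to trigger a warning, as the framework describes.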
31 pages, 1694 KB  
Article
Optimized CNN–LSTM Modeling for Crisis Event Detection in Noisy Social Media Streams
by Mudasir Ahmad Wani
Mathematics 2026, 14(8), 1369; https://doi.org/10.3390/math14081369 - 19 Apr 2026
Abstract
Event detection is crucial for disaster response, public safety, and trend analysis, enabling real-time identification of critical events. Social media platforms provide a vast content source, offering timely and diverse event coverage compared to traditional news reports. However, challenges arise due to the informal and noisy nature of the text, along with the limited availability of ground truth data for training models. This study introduces SOCIAL (Social Media Event Classification using Integrated Artificial Learning and Natural Language Processing), a mathematically grounded framework for real-time social media event detection. SOCIAL integrates a formal representation of social media text with a customized CNN–LSTM architecture, combining convolutional operations for local feature extraction with sequential modeling to capture temporal dependencies, thereby enhancing classification accuracy. Generative AI is employed to create synthetic event-related samples, addressing data scarcity and ensuring a balanced dataset, while the design incorporates quantitative principles to guide embedding selection and model optimization. This study systematically evaluates six experimental configurations with TF-IDF and Word2Vec embeddings. The TF-IDF-based CNN–LSTM model achieved top performance with 98.59% accuracy, 98.13% precision, 99.06% recall, and 0.9719 MCC. Additionally, the F0.5, F1, and F2 scores were 98.31%, 98.59%, and 98.87%, respectively, confirming the model’s strong predictive capabilities. TF-IDF integration enhanced event-specific term recognition, reducing misclassifications and improving reliability. These results demonstrate that SOCIAL is not only a fast, accurate, and scalable tool for crisis event detection, but also a formally principled framework for modeling and analyzing social media signals. Full article
(This article belongs to the Special Issue Deep Representation Learning for Social Network Analysis)
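The CNN–LSTM combination used by SOCIAL — convolutions for local feature extraction followed by recurrence for temporal dependencies — can be illustrated with a minimal classifier. This sketch uses a learned token embedding rather than the paper's TF-IDF/Word2Vec inputs, and all dimensions are assumptions.

```python
import torch
import torch.nn as nn

class CNNLSTMClassifier(nn.Module):
    """Illustrative CNN-LSTM text classifier in the spirit of SOCIAL.
    Vocabulary size, embedding choice, and dimensions are assumptions."""
    def __init__(self, vocab=5000, emb=64, channels=32, hidden=32, classes=2):
        super().__init__()
        self.emb = nn.Embedding(vocab, emb)
        # Convolution captures local n-gram-like features
        self.conv = nn.Conv1d(emb, channels, kernel_size=3, padding=1)
        # LSTM captures sequential dependencies across those features
        self.lstm = nn.LSTM(channels, hidden, batch_first=True)
        self.fc = nn.Linear(hidden, classes)

    def forward(self, tokens):                # tokens: (batch, seq_len)
        x = self.emb(tokens).transpose(1, 2)  # (batch, emb, seq_len)
        x = torch.relu(self.conv(x)).transpose(1, 2)
        _, (h, _) = self.lstm(x)              # final hidden state
        return self.fc(h[-1])                 # event-class logits

model = CNNLSTMClassifier()
logits = model(torch.randint(0, 5000, (4, 50)))  # 4 posts, 50 tokens each
print(logits.shape)                              # torch.Size([4, 2])
```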
25 pages, 3125 KB  
Article
Machine Learning-Based Optimization for Predicting Physical Properties of Mound–Shoal Complexes
by Peiran Hao, Gongyang Chen, Yi Ning, Chuan He and Lijun Wan
Processes 2026, 14(8), 1299; https://doi.org/10.3390/pr14081299 - 18 Apr 2026
Abstract
Carbonate mound–shoal complexes, despite their complex pore structures and pronounced heterogeneity, represent one of the most productive reservoir units within carbonate formations. Accurately predicting key physical properties—such as porosity, permeability, and flow zone index—from well log data remains a significant challenge for conventional empirical methods. This study investigates the application of machine learning algorithms for optimizing the prediction of reservoir properties in mound–shoal carbonate bodies. Six machine learning approaches—Support Vector Machines (SVM), Backpropagation Neural Networks (BPNN), Long Short-Term Memory Networks (LSTM), K-Nearest Neighbors (KNN), Random Forests (RF), and Gaussian Process Regression (GPR)—are systematically evaluated and compared. The analysis employed flow zone indices, geological data, and well log curves to classify porosity–permeability types. Seven logging parameters were used as input features: spectral gamma ray (SGR), uranium-free gamma ray (CGR), photoelectric absorption cross-section index (PE), bulk density (RHOB), acoustic travel time (DT), neutron porosity (NPHI), and true resistivity (RT). These features were paired with measured physical property values to train and validate the predictive models. Results demonstrate distinct algorithmic advantages for specific properties. The RF model achieved superior performance in permeability prediction, yielding an R2 of 0.6824, whereas the GPR model provided the highest accuracy for porosity estimation, with an R2 of 0.7342 and an Accuracy Index (ACI) of 0.9699. Despite these improvements, machine learning models still face limitations in accurately characterizing low-permeability zones within highly heterogeneous mound–shoal reservoirs.
To address this challenge, the study integrates geological prior knowledge into the machine learning framework and applies cross-validation techniques to optimize model parameters, thereby providing a practical and robust approach for detailed assessment of mound–shoal carbonate reservoirs. Full article
(This article belongs to the Topic Petroleum and Gas Engineering, 2nd edition)
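The model-comparison workflow — training several regressors on log-curve features and selecting by cross-validated R2 — can be sketched with scikit-learn. The data below are synthetic stand-ins for the seven log curves (SGR, CGR, PE, RHOB, DT, NPHI, RT) and a porosity-like target; real well data would be needed to reproduce the reported scores, and the hyperparameters shown are assumptions.

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.model_selection import cross_val_score

# Synthetic stand-in: 7 features mimic the seven logging parameters,
# the target mimics a measured property such as porosity.
X, y = make_regression(n_samples=200, n_features=7, noise=10.0, random_state=0)

models = {
    "RF": RandomForestRegressor(n_estimators=200, random_state=0),
    "GPR": GaussianProcessRegressor(alpha=1e-1, normalize_y=True),
}
for name, model in models.items():
    # 5-fold cross-validated R2, in the spirit of the paper's
    # cross-validation-based parameter optimization
    scores = cross_val_score(model, X, y, cv=5, scoring="r2")
    print(f"{name}: mean R2 = {scores.mean():.3f}")
```

In a real study, the same loop would extend over all six algorithms, with a hyperparameter search nested inside each cross-validation fold.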
21 pages, 2487 KB  
Article
Hybrid Conv1D–LSTM Modelling of Short-Term Reservoir Water-Level Dynamics for Scenario-Based Operational Analysis
by Jelena Marković Branković, Milica Marković and Bojan Branković
Water 2026, 18(8), 963; https://doi.org/10.3390/w18080963 (registering DOI) - 18 Apr 2026
Abstract
Accurate representation of short-term reservoir water-level dynamics is essential for operational analysis and scenario-based assessment under prescribed inflow–outflow conditions. In many practical applications, physically based modelling is limited by incomplete process knowledge, unavailable boundary conditions, or insufficient temporal resolution of input data. This study presents a data-driven framework for hourly conditional simulation of reservoir water level based on a hybrid Conv1D–LSTM architecture. The model learns nonlinear relationships among hydraulic forcing, operational control, and system state from historical observations, and is evaluated in a recursive multi-step simulation (rollout) mode to reflect its intended use and capture error accumulation over time. A systematic analysis of input sequence length and activation function is performed to identify a robust model configuration. On the test set, the selected configuration (L = 24, GELU) achieved RMSE = 0.1057 m, MAE = 0.0881 m, and R2 = 0.972 in rollout evaluation. The proposed framework is designed for scenario-based simulation rather than one-step deterministic forecasting, enabling rapid operational screening of alternative inflow–outflow regimes. Unlike many previous studies that emphasize one-step predictive accuracy, this work explicitly assesses model stability in recursive multi-step simulation, which is more relevant for reservoir scenario analysis. Full article
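The recursive multi-step simulation (rollout) that the abstract emphasizes — repeatedly applying a one-step predictor and feeding its own outputs back as inputs, so that error accumulation is exposed — can be illustrated with a simple NumPy sketch. A least-squares AR model over an L = 24 window stands in for the Conv1D–LSTM here; the synthetic daily-cycle series and the 48-step horizon are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
L = 24                                   # input window length, as in the paper
t = np.arange(500)
# Synthetic hourly "water level" with a daily cycle plus noise
level = np.sin(2 * np.pi * t / 24) + 0.05 * rng.standard_normal(500)

# One-step model: fit (24-step window -> next value) by least squares
X = np.stack([level[i:i + L] for i in range(len(level) - L)])
y = level[L:]
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

# Rollout: start from the last observed window and simulate 48 h ahead,
# feeding each prediction back into the window (errors accumulate)
window = list(level[-L:])
preds = []
for _ in range(48):
    nxt = float(np.dot(window, coef))
    preds.append(nxt)
    window = window[1:] + [nxt]          # drop oldest, append prediction

print(len(preds))                        # 48
```

Evaluating RMSE on such rollouts, rather than on one-step-ahead predictions, is what distinguishes the paper's scenario-based assessment from conventional forecasting benchmarks.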