Search Results (3,812)

Search Parameters:
Keywords = time series forecasting

27 pages, 979 KB  
Article
Time Series Evidence on Artificial Intelligence and Green Transformation: The Impact of AI Policy on Corporate Carbon Performance
by Wei Wen, Kangan Jiang and Xiaojing Shao
Mathematics 2026, 14(9), 1489; https://doi.org/10.3390/math14091489 - 28 Apr 2026
Abstract
Artificial intelligence development offers new solutions for enhancing corporate carbon performance and is crucial for promoting sustainable business practices. This study investigates the dynamic impact of artificial intelligence (AI) policy on corporate carbon performance using time series panel data of Chinese A-share listed companies from 2010 to 2024. Leveraging the staggered establishment of the National New Generation Artificial Intelligence Innovation Development Pilot Zones as a quasi-natural experiment, we develop a multi-period difference-in-differences framework with time-varying treatment. Our time series-based identification strategy addresses serial correlation and time-varying confounding factors through robust clustering and event study specifications. The findings reveal that AI policy significantly improves corporate carbon performance, a conclusion that remains robust after rigorous endogeneity tests, placebo checks, and counterfactual analyses. Using dynamic panel models, this study traces the temporal evolution of policy effects and demonstrates that AI exerts indirect effects through three time-lagged pathways: micro-level technological diffusion, future industry development, and the progressive accumulation of digital infrastructure and computing resources. Heterogeneity analysis reveals differentiated impacts across micro- and macro-levels, providing granular insights for forecasting heterogeneous treatment effects. By integrating panel time series econometrics with causal inference, this study contributes to the literature on corporate carbon performance while expanding analytical frameworks for understanding AI’s enabling effects. The findings offer policy insights and empirical benchmarks for forecasting green transition trajectories, with direct implications for green finance and sustainable economic development. Full article
(This article belongs to the Special Issue Time Series Forecasting for Green Finance and Sustainable Economics)
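The staggered difference-in-differences identification described above reduces, in its simplest two-group, two-period form, to a double difference of group means. A minimal sketch with hypothetical numbers (the paper itself uses a multi-period design with time-varying treatment and clustered inference):

```python
# Illustrative 2x2 difference-in-differences estimator. The scores below are
# hypothetical, not values from the paper.

def did_estimate(treated_pre, treated_post, control_pre, control_post):
    """Classic 2x2 DiD: (change in treated group) minus (change in controls).

    The control-group change nets out common time trends, leaving the
    treatment effect under the parallel-trends assumption."""
    return (treated_post - treated_pre) - (control_post - control_pre)

# Hypothetical mean carbon-performance scores before/after the policy shock.
effect = did_estimate(treated_pre=0.40, treated_post=0.55,
                      control_pre=0.42, control_post=0.47)
print(round(effect, 2))  # 0.1
```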
47 pages, 1732 KB  
Review
Multi-Temporal InSAR and Machine Learning for Geohazard Monitoring: A Systematic Review with Emphasis on Noise Mitigation and Model Transferability
by Alex Alonso-Díaz, Miguel Fontes, Ana Cláudia Teixeira, Shimon Wdowinski and Joaquim J. Sousa
Remote Sens. 2026, 18(9), 1356; https://doi.org/10.3390/rs18091356 - 28 Apr 2026
Abstract
Interferometric Synthetic Aperture Radar (InSAR) enables regional monitoring of ground deformation, but operational geohazard analysis remains challenged by atmospheric artefacts, temporal decorrelation, and the need for scalable interpretation of multi-temporal products. A systematic review was conducted through searches in Scopus and Web of Science, resulting in 135 peer-reviewed scientific articles on the integration of Machine Learning (ML) and Deep Learning (DL) with multi-temporal InSAR (MT-InSAR). The literature is dominated by applications to landslides and land subsidence, with additional studies addressing volcanic unrest and other deformation-related hazards. Persistent Scatterer (PS) and Small-Baseline Subset (SBAS) approaches are frequently used to derive deformation time series, which are then coupled with ML/DL for the detection and mapping of active phenomena and for short-horizon forecasting. Convolutional architectures, such as Convolutional Neural Networks (CNNs), are commonly reported for spatial recognition tasks, while recurrent models like Long Short-Term Memory (LSTM) networks are often applied to time-series prediction. Reported benefits include improved automation and predictive performance, although sensitivity to noise sources remains a challenge. Overall, the evidence supports AI-enabled InSAR workflows for scalable geohazard monitoring, while highlighting the need for standardized benchmarks and systematic transferability assessment. This review provides a roadmap for transitioning from research prototypes to operational early-warning systems. Full article
23 pages, 3999 KB  
Article
ProAdapt: A Meta-Incremental Learning Framework with Spectral-Temporal Representation Learning and Online EWC for Stock Trend Forecasting
by Lele Gao, Yafei Bai, Wenjie Yao, Nan Li, Yilun Wang and Yong Hu
Electronics 2026, 15(9), 1858; https://doi.org/10.3390/electronics15091858 - 28 Apr 2026
Abstract
Stock trend forecasting remains challenging in real financial markets because data distributions evolve over time, and models trained under static settings often degrade during online deployment. Recent studies have introduced incremental and meta-incremental learning into stock forecasting, yet effective sequential adaptation remains constrained by two issues: financial multivariate time series require stronger representation modeling before downstream prediction, and repeated online updates may lead to forgetting and parameter drift. To address these issues, we propose ProAdapt, a bi-level meta-incremental learning framework for stock trend forecasting in non-stationary markets. ProAdapt contains two key components. The first is a Structural Spectral-Temporal Feature Adapter (SSTFA), which enhances financial time series representations by modeling non-uniform temporal importance and selective cross-factor interactions through adaptive soft window temporal encoding, frequency-domain structure modeling, and feature refinement. The second is online Elastic Weight Consolidation (EWC), which is incorporated into the outer-loop optimization to regularize sequential parameter updates and improve the balance between adaptation and stability. We evaluate ProAdapt on the CSI300 and CSI500 benchmarks under an incremental forecasting setting with sequential task updates. Experimental results across multiple backbones show that ProAdapt generally achieves favorable forecasting results relative to the compared baselines, with relatively clearer gains on CSI500. Additional ablation and analysis results further support the effectiveness of SSTFA and online EWC. Overall, the results suggest that combining explicit representation enhancement with stability-aware sequential updating is beneficial for incremental stock forecasting in evolving market environments. Full article
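The online Elastic Weight Consolidation term described above penalises parameter drift in proportion to each parameter's importance on earlier tasks. A minimal sketch assuming a diagonal Fisher estimate; all names and values are illustrative, not ProAdapt's implementation:

```python
# Sketch of an Elastic Weight Consolidation (EWC) penalty with a diagonal
# Fisher approximation; hypothetical values, not the paper's code.

def ewc_penalty(params, anchor_params, fisher_diag, lam=1.0):
    """0.5 * lam * sum_i F_i * (theta_i - theta_i*)^2 -- penalises drift
    from anchor parameters in proportion to their Fisher importance."""
    return 0.5 * lam * sum(
        f * (p - a) ** 2
        for p, a, f in zip(params, anchor_params, fisher_diag)
    )

def total_loss(task_loss, params, anchor_params, fisher_diag, lam=1.0):
    # Stability-aware objective: fit the new task while staying close to
    # parameters that mattered for earlier tasks.
    return task_loss + ewc_penalty(params, anchor_params, fisher_diag, lam)

# Parameter 2 moved far from its anchor and has a high Fisher weight, so it
# dominates the penalty: 0.5 * (0.5 * 0 + 2.0 * 4) = 4.0.
print(ewc_penalty([1.0, 2.0], [1.0, 0.0], [0.5, 2.0]))  # 4.0
```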
42 pages, 10246 KB  
Article
Enhancing Karst Spring Discharge Simulation Through a Hybrid XGBoost–BiLSTM Machine Learning Framework
by Mohamed Hamdy Eid, Attila Kovács and Péter Szűcs
Water 2026, 18(9), 1038; https://doi.org/10.3390/w18091038 - 27 Apr 2026
Abstract
Accurate simulation of karst spring discharge is critical for sustainable water resource management, yet it remains a significant challenge due to the inherent complexity, heterogeneity, and non-linearity of karst systems. While machine learning models have been increasingly applied to this problem, standalone algorithms often struggle to simultaneously capture complex temporal dependencies and maintain robust generalization. This study provides a comprehensive comparative assessment of five state-of-the-art machine learning (ML) models for forecasting the daily discharge of the Jósva Spring, located in the World Heritage Aggtelek karst area. The main goal of the study is to determine which modern machine learning approach can most accurately forecast the daily discharge of the Jósva Spring using meteorological data and the discharge of a hydraulically connected upstream spring. This is motivated by the need for a reliable operational prediction tool for complex karst aquifers, the improved water-resource management in a climate-sensitive region, and a lack of comparative studies evaluating multiple ML paradigms on the same karst system. The study also aimed at comparing the predictive performance of five state-of-the-art ML models to identify the most accurate and robust model and to understand the predictability of the karst system by analyzing feature importance, lag effects, and temporal dependencies. Three tree-based ensemble models (Random Forest, XGBoost, and Extra Trees) and two deep learning architectures (a Bidirectional Long Short-Term Memory network, BiLSTM, and a novel Hybrid XGBoost–BiLSTM model) were trained using a five-year (2015–2019) daily dataset comprising rainfall, temperature, and upstream discharge. The modeling framework was designed for synchronous simulation (lead time = 0 days), estimating concurrent downstream discharge using upstream and meteorological measurements from the same time step. 
A rigorous feature-engineering workflow was implemented based on statistical characterization, correlation analysis, and time-series diagnostics. Models were trained on 80% of the dataset and evaluated on an independent 20% test set. The results demonstrate that the proposed Hybrid XGBoost-BiLSTM model achieved the highest predictive accuracy on the unseen test data (R2 = 0.74, NSE = 0.74, RMSE = 716.35 L/min). While the standalone tree-based models, particularly XGBoost (R2 = 0.66), also exhibited strong and competitive performance, the hybrid architecture provided a consistent and measurable improvement across all evaluation metrics. The hybrid model’s success is attributed to its synergistic design, which leverages the powerful feature extraction and refinement capabilities of XGBoost to provide a more informative input space for the BiLSTM, thereby enhancing its ability to capture complex temporal dependencies while mitigating overfitting. Feature importance analysis confirmed that upstream discharge at a 3-day lag was the most critical predictor, highlighting the system’s hydraulic connectivity. This research provides clear, evidence-based guidance showing that hybrid machine learning architectures, which integrate the strengths of different modeling paradigms, represent the most effective approach for developing robust and reliable operational prediction tools for complex karst aquifers. Full article
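The skill scores reported above (NSE, RMSE) follow standard hydrological definitions; a generic sketch, not the authors' evaluation code, with made-up discharge values:

```python
import math

# Textbook implementations of RMSE and Nash-Sutcliffe efficiency (NSE);
# the observed/simulated values below are illustrative only.

def rmse(obs, sim):
    return math.sqrt(sum((o - s) ** 2 for o, s in zip(obs, sim)) / len(obs))

def nse(obs, sim):
    mean_obs = sum(obs) / len(obs)
    sse = sum((o - s) ** 2 for o, s in zip(obs, sim))    # model error
    sst = sum((o - mean_obs) ** 2 for o in obs)          # baseline: the mean
    return 1.0 - sse / sst   # 1 = perfect, 0 = no better than the mean

obs = [100.0, 120.0, 140.0, 160.0]   # hypothetical discharge, L/min
sim = [105.0, 118.0, 138.0, 158.0]
print(round(rmse(obs, sim), 3), round(nse(obs, sim), 3))
```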
22 pages, 11494 KB  
Article
Wind-Radiation Data-Driven Modelling Using Derivative Transform, Deep-LSTM, and Stochastic Tree AI Learning in 2-Layer Meteo-Patterns
by Ladislav Zjavka
Modelling 2026, 7(3), 82; https://doi.org/10.3390/modelling7030082 - 27 Apr 2026
Abstract
Self-contained local forecasting of wind and solar series can improve operational planning of wind farms and photovoltaic (PV) plant day-cycles in addition to numerical models, which are mostly behind time due to high simulation costs. Unstable electricity production requires balancing the availability of renewable energy (RE) with unpredictable user consumption to achieve effective usage. Artificial intelligence (AI) predictive modelling can minimise the intermittent uncertainty in wind and solar resources by trying to eliminate specific problems in RE-detached system reliability and optimal utilisation. The proposed 24 h day-training and prediction scheme comprises the starting detection and the following similarity re-assessment of sampling day-series intervals. Two-point professional weather stations record standard meteorological variables, of which the most relevant are selected as optimal model inputs. Automatic two-layer altitude observation captures key relationships between hill- and lowland-level data, which comply with pattern progress. New biologically inspired differential learning (DfL) is designed and developed to integrate adaptive neurocomputing (evolving node tree components) with customised numerical procedures of operator calculus (OC) based on derivative transforms. DfL enables the representation of uncertain dynamics related to local weather patterns. Angular and frequency data (wind azimuth, temperature, irradiation) are processed together with the amplitudes to solve simple 2-variable partial differential equations (PDEs) in binomial nodes. Differentiated data provide the fruitful information necessary to model upcoming changes in mid-term day horizons. Additional PDE components in periodic form improve the modelling of hidden complex patterns in cycle data. The DfL efficiency was proved in statistical experiments, compared to a variety of elaborated AI techniques, enhanced by selective difference input preprocessing. 
Deep-LSTM and stochastic tree learning show slightly inferior model performance, notably in day-ahead estimation of chaotic 24 h wind series, and slightly better approximation of alternating 8 h solar cycles. Free parametric C++ software with the applied archive data is available for additional comparative and reproducible experiments. Full article
(This article belongs to the Section Modelling in Artificial Intelligence)
25 pages, 2258 KB  
Article
Hybrid Clustering for Retail Demand Forecasting: Combining Rule-Based and Machine Learning Methods
by Jung-Hyuk Kim and Nam-Wook Cho
Forecasting 2026, 8(3), 37; https://doi.org/10.3390/forecast8030037 - 27 Apr 2026
Abstract
Retail demand forecasting for fast-moving consumer goods (FMCGs) presents significant challenges due to high product variety, demand intermittency, and uncertainty, which prevent any single model from capturing the diverse demand patterns. To address these challenges, this study proposes a hybrid clustering framework that integrates rule-based (Syntetos–Boylan Classification) and machine learning (ML) approaches, combining time-series embeddings with unsupervised learning to segment products by demand structure. Building on this framework, forecasting is conducted through a two-phase methodology: selecting optimal baseline algorithms per cluster (Phase 1), then enhancing them with embedding-based hybrid models (Phase 2). The effectiveness of this approach is demonstrated using a large-scale real-world dataset comprising over 3.8 million weekly sales records from 12,661 products across 691 stores. Results show that the proposed method improves forecasting accuracy by approximately 5–15% compared to conventional models. Furthermore, model performance varies with demand volatility, as different model–embedding combinations perform best under different conditions. Finally, the proposed diagnostic heuristic reduces experimental effort by 25–50%. Comparative analysis reveals that ML-based clustering outperforms rule-based methods under stable demand, whereas rule-based clustering is superior under high demand uncertainty, confirming that no single clustering paradigm is universally optimal. These findings demonstrate the practical value of adaptive hybrid frameworks for FMCGs demand forecasting. Full article
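The rule-based half of the hybrid framework, Syntetos–Boylan Classification, buckets each product by average demand interval (ADI) and the squared coefficient of variation (CV²) of nonzero demand sizes. A sketch using the commonly cited cutoffs of 1.32 and 0.49; illustrative, not the paper's code:

```python
# Sketch of Syntetos-Boylan Classification (SBC) for intermittent demand.
# Cutoffs 1.32 / 0.49 are the standard SBC values; demand series are made up.

def sbc_class(demand, adi_cut=1.32, cv2_cut=0.49):
    nonzero = [d for d in demand if d > 0]
    adi = len(demand) / len(nonzero)        # mean interval between demands
    mean = sum(nonzero) / len(nonzero)
    var = sum((d - mean) ** 2 for d in nonzero) / len(nonzero)
    cv2 = var / mean ** 2                   # relative size variability
    if adi < adi_cut:
        return "smooth" if cv2 < cv2_cut else "erratic"
    return "intermittent" if cv2 < cv2_cut else "lumpy"

print(sbc_class([10, 11, 9, 10, 12, 10]))  # regular, stable sizes: smooth
print(sbc_class([0, 0, 50, 0, 0, 2]))      # sparse, variable sizes: lumpy
```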
23 pages, 2480 KB  
Article
Forecast-Guided Distributionally Robust Scheduling of Hybrid Energy Storage for Stability Support in Offshore Wind Farms
by Yijuan Xu, Tiandong Zhang and Zixiang Shen
Mathematics 2026, 14(9), 1458; https://doi.org/10.3390/math14091458 - 26 Apr 2026
Abstract
High-frequency volatility and extreme tail risks in offshore wind power pose severe challenges to grid stability and economic operation. Conventional storage planning often relies on deterministic profiles or static allocation rules, failing to capture the non-stationary temporal dynamics of marine wind resources. To bridge this gap, this paper proposes a closed-loop framework that integrates ultra-short-term probabilistic forecasting with dynamic hybrid energy storage optimization. A novel Dual-Channel Residual Network is developed to provide well-calibrated predictive uncertainty quantification, which explicitly drives a Prediction-Guided Dynamic Hybrid Storage Optimization Framework. By dynamically coordinating lithium-ion batteries and liquid air energy storage based on evidential predictive variance, the proposed approach achieves superior synergy between short-term power response and long-duration energy shifting. Case studies on an offshore wind farm validate that the framework significantly reduces the Levelized Cost of Energy and loss-of-load risks while enhancing frequency regulation capabilities compared to state-of-the-art benchmarks. Full article
35 pages, 10652 KB  
Article
Unveiling Long-Memory Dynamics in Turbulent Markets: A Novel Fractional-Order Attention-Based GRU-LSTM Framework with Multifractal Analysis
by Yangxin Wang and Yuxuan Zhang
Fractal Fract. 2026, 10(5), 293; https://doi.org/10.3390/fractalfract10050293 - 26 Apr 2026
Abstract
Financial time series in turbulent markets exhibit complex long-memory dynamics and multifractal features that traditional deep learning models fail to capture due to inherent exponential forgetting mechanisms. To address this, we propose Frac-Attn-GL, a novel Fractional-order Spatiotemporal Attention-based GRU-LSTM framework. Grounded in the Fractal Market Hypothesis, the model embeds Grünwald–Letnikov fractional-order operators into a dual-channel architecture (FracLSTM and FracGRU) to characterize long-range memory with rigorous power-law decay priors. Furthermore, an extreme-aware asymmetric loss function is designed to drive a dynamic spatiotemporal routing mechanism, enabling adaptive shifts between long-term macro trends and short-term micro shocks. Empirical tests on major U.S. stock indices reveal three significant findings. First, the Frac-Attn-GL framework substantially reduces prediction errors, achieving up to a 93.1% RMSE reduction on the highly volatile NASDAQ index compared to standard baselines. Second, the adaptively learned fractional-order parameters exhibit a consistent quantitative alignment with the market’s empirical multifractal singularity spectrum, supporting the physical interpretability of the model’s endogenous memory mechanism. Finally, hybrid residual multifractal diagnostics indicate that the framework effectively captures deep long-range correlations, reducing the Hurst exponent of the prediction residuals from ~0.83 to approximately 0.50, a level consistent with the absence of significant long-range dependence. Full article
(This article belongs to the Special Issue Fractal Approaches and Machine Learning in Financial Markets)
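The Grünwald–Letnikov operator embedded in FracLSTM/FracGRU owes its long memory to binomial weights that decay as a power law rather than exponentially. A sketch of the truncated weights and the resulting fractional difference; illustrative, not the Frac-Attn-GL implementation:

```python
# Grunwald-Letnikov fractional-difference weights: the expansion of
# (1 - B)^d, where B is the backshift operator. Illustrative sketch.

def gl_weights(d, n):
    """First n+1 weights of (1 - B)^d via the recursion
    w_0 = 1, w_k = -w_{k-1} * (d - k + 1) / k."""
    w = [1.0]
    for k in range(1, n + 1):
        w.append(-w[-1] * (d - k + 1) / k)
    return w

def frac_diff(series, d):
    """Apply the truncated fractional difference to a series."""
    w = gl_weights(d, len(series) - 1)
    return [sum(w[k] * series[t - k] for k in range(t + 1))
            for t in range(len(series))]

print(gl_weights(0.5, 3))  # [1.0, -0.5, -0.125, -0.0625]: power-law decay
print(gl_weights(1.0, 3))  # [1.0, -1.0, 0.0, 0.0]: ordinary first difference
```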
31 pages, 492 KB  
Review
Artificial Intelligence for Blood Glucose Level Prediction in Type 1 Diabetes: Methods, Evaluation, and Emerging Advances
by Heydar Khadem, Hoda Nemat, Jackie Elliott and Mohammed Benaissa
Sensors 2026, 26(9), 2675; https://doi.org/10.3390/s26092675 - 25 Apr 2026
Abstract
Blood glucose level (BGL) prediction, by providing early warnings regarding unsatisfactory glycaemic control and maximising the amount of time BGL remains in the target range, can contribute to minimising both acute and chronic complications related to diabetes. This paper aims to provide an overview of data-driven approaches for BGL prediction in type 1 diabetes mellitus (T1DM). This review summarises different aspects of developing and evaluating data-driven prediction models, including model strategy, model input, prediction horizon, and prediction performance. It also examines applications of recent artificial intelligence (AI) techniques, including deep learning, transfer learning, ensemble learning, and causal analysis in the management of T1DM. Recent studies indicate that machine learning approaches often outperform classical time-series forecasting models in BGL prediction, particularly when using multivariate inputs. These findings also highlight the potential of advanced AI methods to improve prediction accuracy. Moreover, applying appropriate statistical analyses is essential to enable valid comparisons between different BGL prediction models, especially given the considerable inter-individual variability among people with T1DM. The development of efficient methods for integrating affecting variables into BGL prediction requires further research. Given the promising performance of advanced AI techniques and the rapid growth of AI innovation, continued exploration of cutting-edge AI strategies will be crucial for further improving BGL prediction models. Full article
22 pages, 2892 KB  
Article
STFNet: A Specialized Time-Frequency Domain Feature Extraction Neural Network for Long-Term Wind Power Forecasting
by Tingxiao Ding, Xiaochun Hu, Yan Chen, Rongbin Liu, Jin Su, Rongxing Jiang and Yiming Qin
Energies 2026, 19(9), 2080; https://doi.org/10.3390/en19092080 - 25 Apr 2026
Abstract
The rapid expansion of renewable energy has raised the demand for accurate, long-term wind power forecasting. However, wind power series are strongly affected by meteorological factors and exhibit pronounced volatility, making long-term prediction challenging. To model these characteristics more comprehensively, we propose STFNet, a dual-branch neural architecture that integrates time-domain and frequency-domain modeling. STFNet contains two key modules: (1) an MLFE module, which explicitly captures lag effects and non-stationary transitions through parallel multi-scale convolutions and a difference-convolution branch and further enhances multivariate dependency learning via cross-variable interaction modeling, and (2) an FGFE module, which applies DCT to capture long-cycle trends and uses a learnable low-pass filter for noise suppression. Experiments on two real-world wind farm datasets (LY and HG) show that STFNet consistently outperforms strong baselines, achieving average MSE reductions of 15.9–26.6% while maintaining a high computational efficiency. Ablation studies further confirm the effectiveness of each module, indicating the strong practical potential of STFNet for wind farm operation and management. Full article
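The FGFE module's trend extraction can be illustrated with a hard DCT low-pass filter: transform a window to the DCT domain, keep only the low-frequency coefficients, and transform back. STFNet learns its filter weights instead of hard-thresholding, and the O(N²) DCT below is for clarity only:

```python
import math

# Naive DCT-II / inverse pair plus a hard low-pass; `keep` is illustrative.

def dct2(x):
    n = len(x)
    return [sum(x[i] * math.cos(math.pi * (i + 0.5) * k / n)
                for i in range(n))
            for k in range(n)]

def idct2(X):
    n = len(X)
    return [X[0] / n + (2.0 / n) * sum(
                X[k] * math.cos(math.pi * (i + 0.5) * k / n)
                for k in range(1, n))
            for i in range(n)]

def lowpass(x, keep):
    """Zero all but the first `keep` DCT coefficients, so only the
    long-cycle trend survives the round trip."""
    X = dct2(x)
    return idct2([c if k < keep else 0.0 for k, c in enumerate(X)])

trend = [1.0, 2.0, 3.0, 4.0]
noisy = [t + n for t, n in zip(trend, [0.3, -0.3, 0.3, -0.3])]
smoothed = lowpass(noisy, keep=2)   # high-frequency wiggle suppressed
```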
29 pages, 4442 KB  
Article
An Efficient Data Cleaning Method for Renewable Energy Power Stations Integrating Anomaly Detection and Feature Enhancement
by Zifen Han, Chunxiang Yang, Fuwen Wang, Peipei Yang, Zongyang Liu and Wen Tang
Energies 2026, 19(9), 2075; https://doi.org/10.3390/en19092075 - 24 Apr 2026
Abstract
Improving the prediction accuracy of renewable energy power generation units is an important goal of the “source-storage integration” approach. However, the abundance of anomalous data and indistinct features in renewable energy station data seriously affects the health status prediction of these generator sets. To effectively enhance the performance of renewable energy generation prediction, this paper proposes an efficient data cleaning method for renewable energy stations based on anomaly detection and feature enhancement. First, anomaly detection is achieved by calculating a baseline power curve and partitioning data, utilizing the Density-Based Spatial Clustering of Applications with Noise (DBSCAN). Subsequently, considering that current models often learn low-frequency features while ignoring high-frequency features when processing time-series data, a data feature enhancement method is proposed. The proposed method integrates high-/low-frequency data decomposition, time–frequency domain conversion, and an improved attention mechanism to effectively enhance the high-frequency features of renewable energy station data, and reduces the RMSE of mainstream forecasting models significantly. Finally, using data from a renewable energy station in a region of China, the effectiveness and superiority of the anomaly detection and feature enhancement methods are analyzed. The results show that for renewable energy generation data, the proposed method reduces the RMSE of LSTM and Transformer models by 15.12%, 16.67% and 16.24%, 18.32% respectively, significantly improving prediction accuracy. Full article
(This article belongs to the Topic Solar and Wind Power and Energy Forecasting, 2nd Edition)
18 pages, 1840 KB  
Article
Spatiotemporal Assessment and Prediction of Land Use and Land Cover Change in Urban Green Spaces Using Landsat Remote Sensing and CA–Markov Modeling
by Ali Reza Sadeghi, Ehsan Javanmardi and Farzaneh Javidi
Sustainability 2026, 18(9), 4259; https://doi.org/10.3390/su18094259 - 24 Apr 2026
Abstract
Urban green spaces are increasingly threatened by rapid urban expansion, making their continuous monitoring and prediction essential for sustainable urban management. This study investigates the spatiotemporal dynamics of urban garden landscapes in Shiraz, Iran, by integrating multi-temporal Landsat imagery, GIS analysis, and CA–Markov modeling. Landsat data from 2003, 2013, and 2023 were processed to derive the Normalized Difference Vegetation Index (NDVI), which was classified into four vegetation-density categories to quantify land-cover transitions. A CA–Markov framework implemented in IDRISI TerrSet (Version 20.0) was then employed to simulate spatial dynamics and predict vegetation changes for 2033. Results reveal a significant expansion of non-vegetated areas from 711.93 ha in 2003 to 976.66 ha in 2023, accompanied by a decline in dense vegetation from 403.68 ha to 382.64 ha. Model projections indicate a further reduction in dense vegetation to 239.35 ha by 2033, suggesting ongoing fragmentation of urban green infrastructure driven by development pressures. By combining time-series remote sensing, GIS-based spatial analysis, and predictive modeling, this study provides an integrative framework for detecting, interpreting, and forecasting urban land-cover change. The findings offer evidence-based insights to support sustainable urban planning, green infrastructure protection, and climate-resilient city management in rapidly growing urban environments. Full article
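The Markov half of CA–Markov projects class-area shares forward with a transition-probability matrix; the CA half then allocates the projected totals spatially. A sketch with hypothetical transition probabilities, not the study's calibrated matrix:

```python
# Markov-chain land-cover projection: row vector of class shares times a
# transition matrix, repeated per time step. Values are hypothetical.

def project(shares, transitions, steps=1):
    """Propagate class shares `steps` periods ahead."""
    for _ in range(steps):
        shares = [
            sum(shares[i] * transitions[i][j] for i in range(len(shares)))
            for j in range(len(shares))
        ]
    return shares

# Classes: [dense vegetation, sparse vegetation, non-vegetated].
T = [[0.85, 0.10, 0.05],   # dense vegetation mostly persists, some degrades
     [0.05, 0.75, 0.20],   # sparse vegetation leaks into non-vegetated
     [0.00, 0.02, 0.98]]   # development is nearly irreversible
now = [0.25, 0.30, 0.45]
print([round(s, 4) for s in project(now, T, steps=1)])
```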
28 pages, 1065 KB  
Article
Normalising Flow Enhanced GARCH Models: A Two-Stage Framework for Flexible Innovation Modelling in Financial Time Series
by Abdullah Hassan, Farai Mlambo and Wilson Tsakane Mongwe
Risks 2026, 14(5), 100; https://doi.org/10.3390/risks14050100 - 24 Apr 2026
Abstract
We introduce the Normalising Flow GARCH (NF-GARCH), a two-stage hybrid framework that enhances traditional GARCH models by replacing restrictive parametric innovation distributions with learned densities via normalising flows. Our approach preserves the interpretability of standard variance dynamics while addressing the common issue of innovation misspecification. In the first stage, we estimate standard GARCH variants (sGARCH, TGARCH, and gjrGARCH) to extract standardised residuals. In the second stage, a Masked Autoregressive Flow learns the underlying residual distribution, with samples from the flow subsequently driving the GARCH recursion for out-of-sample forecasting. Evaluated on 13 daily financial series (six FX pairs and seven equities), NF-GARCH demonstrates systematic, statistically significant improvements in forecast accuracy for skewed-t baselines. Wilcoxon signed-rank tests confirm superior performance specifically for gjrGARCH-sstd and sGARCH-sstd specifications. While the framework offers enhanced flexibility and generative realism, we observe that computational overhead is increased, and the log-variance specification of eGARCH exhibits instability when paired with flow-based innovations. These results suggest that while NF-GARCH effectively captures empirical tail behaviour in univariate settings, future research should explore conditional flow architectures and multivariate extensions to account for time-varying innovation shapes. For risk management, gains are most relevant where skewed-t baselines are used and where closer residual realism supports scenario analysis; effect sizes remain modest relative to model risk and implementation cost. Full article
(This article belongs to the Special Issue Volatility Modeling in Financial Market)
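The two-stage NF-GARCH pipeline described in the abstract can be illustrated as follows. This is a minimal, self-contained sketch: the GARCH(1,1) parameters and simulated returns are hypothetical, and a Gaussian kernel density estimate stands in for the Masked Autoregressive Flow so the example stays dependency-free; the paper's actual flow model and fitted GARCH variants are not reproduced here.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)

# Toy data: simulate returns from a GARCH(1,1) with heavy-tailed shocks
# (illustrative parameters, not estimates from the paper).
omega, alpha, beta = 0.05, 0.08, 0.90
n = 2000
z = rng.standard_t(df=5, size=n) / np.sqrt(5 / 3)  # unit-variance t shocks
sigma2 = np.empty(n)
r = np.empty(n)
sigma2[0] = omega / (1 - alpha - beta)
r[0] = np.sqrt(sigma2[0]) * z[0]
for t in range(1, n):
    sigma2[t] = omega + alpha * r[t - 1] ** 2 + beta * sigma2[t - 1]
    r[t] = np.sqrt(sigma2[t]) * z[t]

# Stage 1: extract standardised residuals z_t = r_t / sigma_t
# (here sigma2 comes from the simulation; in practice it would come
# from a fitted GARCH model).
resid = r / np.sqrt(sigma2)

# Stage 2: learn the residual density. The paper uses a Masked
# Autoregressive Flow; a KDE is a simple stand-in for this sketch.
density = gaussian_kde(resid)

# Forecasting: sample innovations from the learned density and push
# them through the GARCH variance recursion out of sample.
h = 250
z_new = density.resample(h, seed=1)[0]
s2, r_last = sigma2[-1], r[-1]
sim = np.empty(h)
for t in range(h):
    s2 = omega + alpha * r_last ** 2 + beta * s2
    sim[t] = np.sqrt(s2) * z_new[t]
    r_last = sim[t]
```

Swapping the KDE for a trained flow changes only Stage 2; the variance recursion that gives GARCH its interpretability is untouched, which is the design point the abstract emphasises.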
20 pages, 10477 KB  
Article
Enhancing PM2.5 Forecasting via the Integration of Lidar and Radiosonde Vertical Structures
by Siying Chen, Daoming Li, Weishen Wang, He Chen, Pan Guo, Yurong Jiang, Xian Yang, Yangcheng Ma, Yuhao Jin and Yingjie Shu
Remote Sens. 2026, 18(9), 1301; https://doi.org/10.3390/rs18091301 - 24 Apr 2026
Abstract
Accurate forecasting of near-surface PM2.5 concentrations remains challenging due to the complex coupling between atmospheric vertical structure, thermodynamic stability, and pollutant accumulation processes. Most existing surface-based statistical and deep learning approaches struggle to represent the three-dimensional state of the atmosphere, which limits their robustness under complex meteorological conditions. In this study, we propose a multi-source spatiotemporal learning framework (MST-Net) to enhance PM2.5 forecasting accuracy by integrating vertically resolved atmospheric information from lidar and radiosonde observations. The proposed approach incorporates vertical profile features together with surface measurements to provide complementary information on atmospheric vertical structure and its temporal evolution. Experimental results demonstrate that MST-Net consistently outperforms conventional time-series models across multiple forecast horizons. Notably, at extended lead times (12–24 h), the proposed framework exhibits enhanced stability and slower error growth. For 24 h forecasts, MST-Net reduces RMSE by approximately 13% and MAE by about 19%. These results indicate that leveraging multi-source vertical atmospheric information can effectively improve the reliability of urban air quality forecasting. Full article
(This article belongs to the Section Atmospheric Remote Sensing)
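The RMSE and MAE reductions quoted above are the standard point-forecast error metrics. The sketch below shows how such a percentage reduction is computed; the observation and forecast values are entirely hypothetical and are not taken from the paper.

```python
import numpy as np

def rmse(y, yhat):
    """Root mean squared error of forecasts yhat against observations y."""
    y, yhat = np.asarray(y, float), np.asarray(yhat, float)
    return float(np.sqrt(np.mean((y - yhat) ** 2)))

def mae(y, yhat):
    """Mean absolute error of forecasts yhat against observations y."""
    y, yhat = np.asarray(y, float), np.asarray(yhat, float)
    return float(np.mean(np.abs(y - yhat)))

# Hypothetical 24 h PM2.5 values (µg/m³): observations plus forecasts
# from a baseline model and an improved model.
obs      = np.array([35.0, 42.0, 55.0, 48.0, 60.0])
baseline = np.array([30.0, 50.0, 45.0, 55.0, 50.0])
improved = np.array([33.0, 45.0, 52.0, 50.0, 57.0])

# A "13% RMSE reduction" in the abstract corresponds to this ratio.
reduction = 1 - rmse(obs, improved) / rmse(obs, baseline)
```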
25 pages, 1091 KB  
Article
Time Series Modeling of Dengue Outbreaks Through Singular Spectrum Analysis Incorporating Lunar and Solar Calendars for Improved Forecasting
by Gumgum Darmawan, Bertho Tantular, Defi Yusti Faidah, Sukono, Norizan Mohamed and Astrid Sulistya Azahra
Sustainability 2026, 18(9), 4243; https://doi.org/10.3390/su18094243 - 24 Apr 2026
Abstract
Dengue Hemorrhagic Fever (DHF) is a tropical infectious disease transmitted by the Aedes aegypti mosquito and exhibits seasonal patterns with periodic increases in cases throughout the year. The control of vector-borne diseases such as DHF is very important for strengthening public health resilience against climate change, in line with the Sustainable Development Goals (SDGs) for Good Health, Well-being, and Climate Action. Therefore, this study focused on Bogor city, which experiences high rainfall and continues to face an elevated risk of DHF. The objective was to develop a time series forecasting model to predict DHF outbreaks using Singular Spectrum Analysis (SSA), a statistical method for identifying patterns in time series data. Lunar and Solar calendars were adopted to capture seasonal patterns and determine the optimal window length for prediction. The results showed that the Lunar calendar more accurately captured local seasonal variation related to DHF risk. Moreover, the SSA model with one component and a window length of 7 achieved the best performance with a Mean Absolute Percentage Error (MAPE) of 0.0757. The forecast accuracy decreased with longer horizons, but the model provided reliable predictions for short-term periods (approximately 1 month, i.e., up to 4 weeks ahead), which were considered useful for planning DHF mitigation. The results emphasized that the combination of SSA with appropriate calendar systems could improve the accuracy of epidemiological predictions and support vector control policymaking in tropical regions. Full article
(This article belongs to the Section Health, Well-Being and Sustainability)
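The basic SSA steps behind the abstract (embed the series into a trajectory matrix with a chosen window length, decompose it by SVD, and reconstruct from the leading components) can be sketched as follows. This is a minimal generic SSA implementation under stated assumptions: the weekly case series is synthetic, and the window length of 7 mirrors the paper's choice without reproducing its calendar-based selection procedure.

```python
import numpy as np

def ssa_reconstruct(x, window, n_components):
    """Basic SSA: embed, decompose via SVD, reconstruct from leading components."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    k = n - window + 1
    # Trajectory (Hankel) matrix: column j is the lagged window x[j:j+window].
    X = np.column_stack([x[i:i + window] for i in range(k)])
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    Xr = sum(s[i] * np.outer(U[:, i], Vt[i]) for i in range(n_components))
    # Diagonal averaging (Hankelisation) maps the matrix back to a series.
    rec = np.zeros(n)
    counts = np.zeros(n)
    for j in range(k):
        rec[j:j + window] += Xr[:, j]
        counts[j:j + window] += 1
    return rec / counts

def mape(y, yhat):
    """Mean Absolute Percentage Error, as a fraction (0.0757 = 7.57%)."""
    y, yhat = np.asarray(y, float), np.asarray(yhat, float)
    return float(np.mean(np.abs((y - yhat) / y)))

# Hypothetical weekly case counts with an annual cycle (52 weeks).
t = np.arange(104)
cases = 50 + 20 * np.sin(2 * np.pi * t / 52) \
        + np.random.default_rng(0).normal(0, 2, 104)
trend = ssa_reconstruct(cases, window=7, n_components=1)
```

With one component and a short window, the reconstruction acts as a smooth trend extraction, which is the configuration the abstract reports as best-performing; forecasting then extends this reconstruction forward.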