Search Results (1,383)

Search Parameters:
Keywords = time-series planning

35 pages, 2963 KB  
Article
Explainable Artificial Intelligence Framework for Predicting Treatment Outcomes in Age-Related Macular Degeneration
by Mini Han Wang
Sensors 2025, 25(22), 6879; https://doi.org/10.3390/s25226879 - 11 Nov 2025
Abstract
Age-related macular degeneration (AMD) is a leading cause of irreversible blindness, yet current tools for forecasting treatment outcomes remain limited by either the opacity of deep learning or the rigidity of rule-based systems. To address this gap, we propose a hybrid neuro-symbolic and large language model (LLM) framework that combines mechanistic disease knowledge with multimodal ophthalmic data for explainable AMD treatment prognosis. In a pilot cohort of ten surgically managed AMD patients (six men, four women; mean age 67.8 ± 6.3 years), we collected 30 structured clinical documents and 100 paired imaging series (optical coherence tomography, fundus fluorescein angiography, scanning laser ophthalmoscopy, and ocular/superficial B-scan ultrasonography). Texts were semantically annotated and mapped to standardized ontologies, while images underwent rigorous DICOM-based quality control, lesion segmentation, and quantitative biomarker extraction. A domain-specific ophthalmic knowledge graph encoded causal disease and treatment relationships, enabling neuro-symbolic reasoning to constrain and guide neural feature learning. An LLM fine-tuned on ophthalmology literature and electronic health records ingested structured biomarkers and longitudinal clinical narratives through multimodal clinical-profile prompts, producing natural-language risk explanations with explicit evidence citations. On an independent test set, the hybrid model achieved AUROC 0.94 ± 0.03, AUPRC 0.92 ± 0.04, and a Brier score of 0.07, significantly outperforming purely neural and classical Cox regression baselines (p ≤ 0.01). Explainability metrics showed that >85% of predictions were supported by high-confidence knowledge-graph rules, and >90% of generated narratives accurately cited key biomarkers. A detailed case study demonstrated real-time, individualized risk stratification—for example, predicting an >70% probability of requiring three or more anti-VEGF injections within 12 months and a ~45% risk of chronic macular edema if therapy lapsed—with predictions matching the observed clinical course. These results highlight the framework’s ability to integrate multimodal evidence, provide transparent causal reasoning, and support personalized treatment planning. While limited by single-center scope and short-term follow-up, this work establishes a scalable, privacy-aware, and regulator-ready template for explainable, next-generation decision support in AMD management, with potential for expansion to larger, device-diverse cohorts and other complex retinal diseases.
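The Brier score reported in the abstract above is the mean squared difference between forecast probabilities and binary outcomes. A minimal sketch, using hypothetical probabilities rather than the paper's data:

```python
import numpy as np

def brier_score(y_true, y_prob):
    """Mean squared difference between predicted probabilities and 0/1 outcomes."""
    y_true = np.asarray(y_true, dtype=float)
    y_prob = np.asarray(y_prob, dtype=float)
    return float(np.mean((y_prob - y_true) ** 2))

# Illustrative toy predictions (not the paper's data):
print(brier_score([1, 1, 0, 0], [0.9, 0.8, 0.2, 0.1]))  # 0.025
```

Lower is better: a score of 0 means perfectly confident correct forecasts, so the reported 0.07 indicates well-calibrated probabilities.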
(This article belongs to the Special Issue Sensing Functional Imaging Biomarkers and Artificial Intelligence)

19 pages, 4475 KB  
Article
Joint Planning of Heat and Power Production Using Hybrid Deep Neural Networks
by Jungwoo Ahn, Sangjun Lee, In-Beom Park and Kwanho Kim
Energies 2025, 18(22), 5905; https://doi.org/10.3390/en18225905 - 10 Nov 2025
Abstract
As demand for heat and power continues to grow, production planning of a combined heat and power (CHP) system becomes one of the most crucial optimization problems. Due to fluctuations in the demand and production costs of heat and power, the production planning problem of a contemporary CHP system must be solved quickly. In this paper, we propose a Hybrid Time series Informed neural Network (HYTIN), in which a deep learning-based planner predicts heat and power production levels for each time step. Specifically, HYTIN supports inventory-aware decisions by utilizing a long short-term memory network for heat production and a convolutional neural network for power production. To verify the effectiveness of the proposed method, we build ten independent test datasets of 1200 h each with feasible initial states and common limits. Experimental results demonstrate that HYTIN achieves lower operation cost than the other baseline methods considered in this paper while maintaining quick inference time, suggesting the viability of HYTIN when constructing production plans under dynamic variations in demand in CHP systems.
(This article belongs to the Section G: Energy and Buildings)

31 pages, 989 KB  
Article
The Role of Human Resource Factors in the Success of Research and Development Projects: A Causal Analysis
by Roxana-Mariana Nechita, Cătălina-Monica Alexe and Cătălin-George Alexe
Sustainability 2025, 17(22), 10001; https://doi.org/10.3390/su172210001 - 9 Nov 2025
Abstract
The success of a research project, as determined by its perceived impact, is important for its ability to attract, mobilise, and manage funding, which constitutes a key indicator of the sustainability and relevance of the activities undertaken. The success of project teams involved in research and development is also significantly shaped by human resource factors and is consolidated over time as those factors are embedded in effective collaboration and management practices. In this context, the proposed study investigates the causal interdependencies and models the cause-and-effect relationships among 28 factors, focusing on the human resource factors that affect the success of research and development projects. The study aims to identify a series of particularities that differentiate research and development projects from other types of projects. The findings contribute to the specialised literature by empirically validating the interdependencies between human resource factors and offer managers a useful perspective, helping them focus their efforts on the variables with the greatest potential to influence performance. The findings also help identify particularities that differentiate research and development projects from projects in the industrial or financial-banking sectors, particularities that affect how activities are planned and managed and how the criteria used to direct managers' efforts are established.

18 pages, 2408 KB  
Article
A Two-Stage Topology Identification Strategy for Low-Voltage Distribution Grids Based on Contrastive Learning
by Yang Lei, Fan Yang, Yanjun Feng, Wei Hu and Yinzhang Cheng
Energies 2025, 18(22), 5886; https://doi.org/10.3390/en18225886 - 8 Nov 2025
Abstract
An accurate topology of low-voltage distribution grids (LVDGs) serves as the foundation for advanced applications such as line loss analysis, fault location, and power supply planning. This paper proposes a two-stage topology identification strategy for LVDGs based on contrastive learning. First, the Dynamic Time Warping (DTW) algorithm is used to align the time series of measurement data and evaluate their similarity, yielding a DTW similarity coefficient for each pair of sequences. The Prim algorithm is then employed to construct the initial topology framework. Second, starting from the initially identified topology, an Unsupervised Graph Attention Network (Unsup-GAT) model is proposed to aggregate node features, enabling the learning of complex correlation patterns in unsupervised scenarios. A loss function combining InfoNCE loss and power imbalance loss is then constructed to update network parameters, identifying and correcting local connection errors in the topology. Finally, case studies on seven LVDGs of different node scales in a region of China verify the effectiveness of the proposed two-stage topology identification strategy.
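The first stage described above, pairwise DTW distances followed by Prim's algorithm over the resulting matrix, can be sketched as follows. This is an illustrative implementation (simple absolute-difference cost, dense Prim), not the authors' code:

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic Time Warping distance between two 1-D sequences."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # Best of insertion, deletion, and match moves.
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def prim_mst(dist):
    """Prim's algorithm: edges of a minimum spanning tree for a dense distance matrix."""
    n = dist.shape[0]
    in_tree = [0]
    edges = []
    while len(in_tree) < n:
        best = None
        for u in in_tree:
            for v in range(n):
                if v not in in_tree and (best is None or dist[u, v] < best[2]):
                    best = (u, v, dist[u, v])
        edges.append(best[:2])
        in_tree.append(best[1])
    return edges
```

In the paper's setting, the matrix passed to `prim_mst` would hold pairwise DTW distances between node measurement series, so the spanning tree becomes the initial grid topology.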

30 pages, 11325 KB  
Article
An Enhanced NSGA-II Algorithm Combining Lévy Flight and Simulated Annealing and Its Application in Electric Winch Trajectory Planning: A Complex Multi-Objective Optimization Study
by Enzhi Quan, Yanjun Liu, Han Gao, Huaqiang You and Gang Xue
Machines 2025, 13(11), 1017; https://doi.org/10.3390/machines13111017 - 3 Nov 2025
Abstract
To overcome the limitations of traditional multi-objective evolutionary algorithms—which often become trapped in local optima when addressing complex optimization problems and face challenges in balancing convergence efficiency with population diversity—this study proposes an enhanced NSGA-II algorithm that incorporates Lévy flight and simulated annealing strategies. The proposed algorithm enhances global exploration via Lévy flight mutation, improves local search precision through simulated annealing, and dynamically coordinates the search process using adaptive parameter strategies. Experiments on the ZDT and DTLZ test function suites demonstrated that the proposed algorithm achieves performance comparable to or better than NSGA-II and other benchmark algorithms, as measured by the inverted generational distance and hypervolume metrics. It also exhibited superior convergence, distribution uniformity, and robustness. The algorithm was further applied to the multi-objective optimization of electric winch trajectories for oil drilling rigs, using trajectory planning based on quintic polynomials. Compared with the pre-optimization baseline, the simulation results showed reductions of 6% in total operation time, 17.99% in energy consumption, and 27.4% in impact severity, validating the method's effectiveness and applicability in practical engineering scenarios. Overall, the improved algorithm exhibits robust performance and excellent adaptability on complex multi-objective optimization problems.
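Lévy flight mutation, the global-exploration ingredient above, is commonly generated with Mantegna's algorithm. A minimal sketch; the `scale` factor and box-clipping are illustrative assumptions, not details from the paper:

```python
import numpy as np
from math import gamma, pi, sin

def levy_step(beta=1.5, size=1, rng=None):
    """Heavy-tailed step lengths via Mantegna's algorithm for a Levy-stable draw."""
    rng = rng if rng is not None else np.random.default_rng(0)
    sigma = (gamma(1 + beta) * sin(pi * beta / 2)
             / (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma, size)
    v = rng.normal(0.0, 1.0, size)
    return u / np.abs(v) ** (1 / beta)

def levy_mutate(x, lower, upper, scale=0.01, beta=1.5, rng=None):
    """Perturb a decision vector with Levy steps, then clip to the box bounds."""
    step = levy_step(beta, len(x), rng)
    return np.clip(x + scale * step * (upper - lower), lower, upper)
```

The heavy tail means most steps are small (local refinement) but occasional large jumps escape local optima, which is exactly the behavior the abstract credits for improved global exploration.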
(This article belongs to the Section Electrical Machines and Drives)

27 pages, 2139 KB  
Article
Generalisation Bounds of Zero-Shot Economic Forecasting Using Time Series Foundation Models
by Jittarin Jetwiriyanon, Teo Susnjak and Surangika Ranathunga
Mach. Learn. Knowl. Extr. 2025, 7(4), 135; https://doi.org/10.3390/make7040135 - 3 Nov 2025
Abstract
This study investigates the transfer learning capabilities of Time-Series Foundation Models (TSFMs) in a zero-shot setup for forecasting macroeconomic indicators. New TSFMs are continually emerging, offering ready-trained and accurate forecasting models that generalise across a wide spectrum of domains. However, the transferability of their learning to many domains, especially economics, is not well understood. To that end, we study the performance profile of TSFMs for economic forecasting, bypassing the need to train bespoke econometric models on extensive datasets. Our experiments were conducted on a univariate case study dataset, in which we rigorously back-tested three state-of-the-art TSFMs (Chronos, TimeGPT, and Moirai) under data-scarce conditions and structural breaks. Our results demonstrate that appropriately engineered TSFMs can internalise rich economic dynamics, accommodate regime shifts, and deliver well-behaved uncertainty estimates out of the box, while matching or exceeding state-of-the-art multivariate models currently used in this domain. Our findings suggest that, without any fine-tuning or additional multivariate inputs, TSFMs can match or outperform classical models under both stable and volatile economic conditions. Like all models, however, they are vulnerable to performance degradation during periods of rapid shocks, though they recover forecasting accuracy faster than classical models. The findings offer guidance to practitioners on when zero-shot deployments are viable for macroeconomic monitoring and strategic planning.

24 pages, 5791 KB  
Article
AI-Driven Prediction of Building Energy Performance and Thermal Resilience During Power Outages: A BIM-Simulation Machine Learning Workflow
by Mohammad H. Mehraban, Shayan Mirzabeigi, Setare Faraji, Sameeraa Soltanian-Zadeh and Samad M. E. Sepasgozar
Buildings 2025, 15(21), 3950; https://doi.org/10.3390/buildings15213950 - 2 Nov 2025
Abstract
Power outages during extreme heat events threaten occupant safety by exposing buildings to rapid indoor overheating. However, current building thermal resilience assessments rely mainly on physics-based simulations or IoT sensor data, which are computationally expensive and slow to scale. This study develops an Artificial Intelligence (AI)-driven workflow that integrates Building Information Modeling (BIM)-based residential models, automated EnergyPlus simulations, and supervised Machine Learning (ML) algorithms to predict indoor thermal trajectories and calculate thermal resilience against power failure events in hot seasons. Four representative U.S. residential building typologies were simulated across fourteen ASHRAE climate zones to generate 16,856 scenarios over 45.8 h of runtime. The resulting dataset spans diverse climates and envelopes and enables systematic AI training for energy performance and resilience assessment. It includes both time series of indoor thermal conditions and static thermal resilience metrics such as the Passive Survivability Index (PSI) and Weighted Unmet Thermal Performance (WUMTP). Trained on this dataset, ensemble boosting models, notably XGBoost, achieved near-perfect accuracy, with an average R2 of 0.9994 and nMAE of 1.10% across time series (indoor temperature, humidity, and cooling energy) recorded every 3 min for a 5-day simulation period with 72 h of outage. The models also performed strongly on the static resilience metrics, including WUMTP (R2 = 0.9521) and PSI (R2 = 0.9375), and required only 1148 s for training. Feature importance analysis revealed that windows contribute 74.3% of the envelope-related influence on passive thermal response. The novelty lies not in the algorithm itself but in applying the model to the resilience context of power outages, reducing computation from days to seconds. The proposed workflow serves as a scalable and accurate tool not only to support resilience planning but also to guide retrofit prioritization and inform building codes.

28 pages, 4579 KB  
Article
A Mathematics-Oriented AI Iterative Prediction Framework Combining XGBoost and NARX: Application to the Remaining Useful Life and Availability of UAV BLDC Motors
by Chien-Tai Hsu, Kai-Chao Yao, Ting-Yi Chang, Bo-Kai Hsu, Wen-Jye Shyr, Da-Fang Chou and Cheng-Chang Lai
Mathematics 2025, 13(21), 3460; https://doi.org/10.3390/math13213460 - 30 Oct 2025
Abstract
This paper presents a mathematics-focused AI iterative prediction framework that combines Extreme Gradient Boosting (XGBoost) for nonlinear function approximation with a nonlinear autoregressive model with exogenous inputs (NARX) for time-series modeling, applied to analyzing the Remaining Useful Life (RUL) and availability of Unmanned Aerial Vehicle (UAV) Brushless DC (BLDC) motors. The framework integrates nonlinear regression, temporal recursion, and survival analysis into a unified system. The dataset includes five UAV motor types, each recorded for 10 min at 20 Hz, totaling approximately 12,000 records per motor for validation across these five motor types. Using grouped K-fold cross-validation by motor ID, the framework achieved a mean absolute error (MAE) of 4.01 h and a root mean square error (RMSE) of 4.51 h in RUL prediction. Feature importance and SHapley Additive exPlanation (SHAP) analysis identified temperature, vibration, and HI as key predictors, aligning with degradation mechanisms. For availability assessment, survival metrics showed strong performance, with a C-index of 1.00 indicating perfect risk ranking and a Brier score at 300 s of 0.159 reflecting good calibration. Additionally, Conformalized Quantile Regression (CQR) enhanced interval coverage under diverse operating conditions, providing mathematically guaranteed uncertainty bounds. The results demonstrate that this framework improves both accuracy and interpretability, offering a reliable and adaptable solution for UAV motor prognostics and maintenance planning.
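The NARX-style iterative loop described above, lagged outputs plus exogenous inputs with each forecast fed back as a lag, can be illustrated compactly. A least-squares linear model stands in for XGBoost here purely for brevity; the synthetic process is an assumption for demonstration:

```python
import numpy as np

def make_narx_features(y, x, n_lags):
    """Design matrix of lagged outputs plus the current exogenous input."""
    feats, targets = [], []
    for t in range(n_lags, len(y)):
        feats.append(np.r_[y[t - n_lags:t], x[t]])
        targets.append(y[t])
    return np.array(feats), np.array(targets)

def iterative_forecast(w, y_hist, x_future, n_lags, steps):
    """Recursive multi-step forecast: each prediction is fed back as a lag."""
    hist = list(y_hist)
    preds = []
    for k in range(steps):
        feats = np.r_[hist[-n_lags:], x_future[k]]
        preds.append(float(feats @ w))
        hist.append(preds[-1])
    return preds

# Fit the linear stand-in on a synthetic y[t] = 0.5*y[t-1] + x[t] process:
rng = np.random.default_rng(0)
x = rng.normal(size=50)
y = np.zeros(50)
for t in range(1, 50):
    y[t] = 0.5 * y[t - 1] + x[t]
X, T = make_narx_features(y, x, n_lags=1)
w, *_ = np.linalg.lstsq(X, T, rcond=None)
```

Replacing the linear fit with a gradient-boosted regressor while keeping the same feedback loop gives the hybrid structure the paper describes.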

35 pages, 7115 KB  
Article
Age-Based Biomass Carbon Estimation and Soil Carbon Assessment in Rubber Plantations Integrating Geospatial Technologies and IPCC Tier 1–2 Guidelines
by Supet Jirakajohnkool, Sangdao Wongsai, Manatsawee Sanpayao and Noppachai Wongsai
Forests 2025, 16(11), 1652; https://doi.org/10.3390/f16111652 - 30 Oct 2025
Abstract
This study presents an integrated framework for spatiotemporal mapping of carbon stocks in rubber plantations in Rayong Province, Eastern Thailand—an area undergoing rapid agricultural transformation and rubber expansion. Unlike most existing assessments that rely on Tier 1 IPCC defaults or coarse plantation age classes, our framework combines annual plantation age derived from Landsat time series, age-specific allometric growth models, and Tier 2 soil organic carbon (SOC) accounting. This enables fine-scale, age- and site-sensitive estimation of both tree and soil carbon. Results show that tree biomass dominates the carbon pool, with mean tree carbon stocks of 66.94 ± 13.1% t C ha−1, broadly consistent with national field studies. SOC stocks averaged 45.20 ± 0.043% t C ha−1, but were overwhelmingly inherited from pre-conversion land use (43.7 ± 0.042% t C ha−1). Modeled SOC changes (ΔSOC) were modest, with small gains (2.06 t C ha−1) and localized losses (−9.96 t C ha−1), producing a net mean increase of only 1.44 t C ha−1. These values are substantially lower than field-based estimates (5–15 t C ha−1), reflecting structural limitations of the global empirical ΔSOC model and reliance on generalized default parameters. Uncertainties also arise from allometric assumptions, generalized soil factors, and Landsat resolution constraints in smallholder landscapes. Beyond carbon, ecological trade-offs of rubber expansion—including biodiversity loss, soil fertility decline, and hydrological impacts—must be considered. By integrating methodological innovation with explicit acknowledgment of uncertainties, this framework provides a conservative but policy-relevant basis for carbon accounting, subnational GHG reporting, and sustainable land-use planning in tropical agroecosystems.

26 pages, 1631 KB  
Review
Operational and Supply Chain Growth Trends in Basic Apparel Distribution Centers: A Comprehensive Review
by Luong Nguyen, Oscar Mayet and Salil Desai
Logistics 2025, 9(4), 154; https://doi.org/10.3390/logistics9040154 - 30 Oct 2025
Abstract
Background: Apparel distribution centers (DCs) are under increasing pressure to meet labor-intensive operational requirements, short delivery windows, and variable demand in a rapidly changing industry. Traditional labor forecasting methods often fail in dynamic environments, leading to inefficiencies, inadequate staffing, and reduced responsiveness. Methods: This comprehensive review, conducted using the PRISMA methodology, discusses AI-enhanced labor forecasting tools that support flexible workforce planning in apparel DCs. To provide proactive, data-driven scheduling recommendations, these tools combine machine learning algorithms with workforce metrics and real-time operational data. Results: Key performance indicators such as throughput per work hour, skill alignment among employees, and schedule adherence were used to assess performance. In contrast to traditional models that depend on static forecasts and human scheduling, AI technologies enable the real-time, adaptive decision-making from which apparel distribution centers can significantly benefit. Examples include LSTM for time-series prediction, XGBoost for performance-based staffing, and reinforcement learning for flexible task assignment. Conclusions: The paper demonstrates the potential of AI in workforce planning and provides useful guidance for the digitization of labor management in the clothing distribution industry for dynamic and responsive supply chains.

17 pages, 4959 KB  
Article
A Variational Mode Snake-Optimized Neural Network Prediction Model for Agricultural Land Subsidence Monitoring Based on Temporal InSAR Remote Sensing
by Zhenda Wang, Huimin Huang, Ruoxin Wang, Ming Guo, Longjun Li, Yue Teng and Yuefan Zhang
Processes 2025, 13(11), 3480; https://doi.org/10.3390/pr13113480 - 29 Oct 2025
Abstract
Interferometric Synthetic Aperture Radar (InSAR) technology is crucial for large-scale land subsidence analysis in cultivated areas within hilly and mountainous regions. Accurate prediction of this subsidence is of significant importance for agricultural resource management and planning. Addressing the limitations of existing subsidence prediction methods in terms of accuracy and model selection, this paper proposes a deep neural network prediction model based on Variational Mode Decomposition (VMD) and the Snake Optimizer (SO), termed VMD-SO-CNN-LSTM-MATT. VMD decomposes complex subsidence signals into stable intrinsic components, improving input data quality, while the SO algorithm globally optimizes model parameters, avoiding local optima and enhancing prediction accuracy. The model uses time-series subsidence data extracted via the SBAS-InSAR technique as input. The original sequence is first decomposed into multiple intrinsic mode functions (IMFs) using VMD. A CNN-LSTM network incorporating a Multi-Head Attention mechanism (MATT) then models and predicts each component, while the SO algorithm performs global optimization of the model hyperparameters. Experimental results demonstrate that the proposed model significantly outperforms comparative models (a traditional Long Short-Term Memory (LSTM) neural network, VMD-CNN-LSTM-MATT, and a Sparrow Search Algorithm (SSA)-optimized CNN-LSTM) across key metrics: Mean Absolute Error (MAE), Root Mean Square Error (RMSE), and Mean Absolute Percentage Error (MAPE). Specifically, the proposed model achieves minimum improvements of 29.85% in MAE, 8.42% in RMSE, and 33.69% in MAPE. The model effectively enhances the prediction accuracy of land subsidence in cultivated hilly and mountainous areas, validating its reliability and practicality for subsidence monitoring and prediction tasks.
(This article belongs to the Section AI-Enabled Process Engineering)

23 pages, 3777 KB  
Article
Estimation of Future Number of Electric Vehicles and Charging Stations: Analysis of Sakarya Province with LSTM, GRU and Multiple Linear Regression Approaches
by Ayşe Tuğba Yapıcı, Nurettin Abut and Ahmet Yıldırım
Appl. Sci. 2025, 15(21), 11462; https://doi.org/10.3390/app152111462 - 27 Oct 2025
Abstract
This study estimates the number of electric vehicles (EVs) and charging stations in Sakarya Province, Türkiye, for 2030 using advanced artificial intelligence time series methods and statistical approaches. The novelty of the work lies in the application of hyperparameter-optimized LSTM and GRU models alongside Multiple Linear Regression (MLR) to a regional dataset, enabling accurate, data-driven forecasting for regional EV planning. Performance was evaluated using multiple metrics, including R2, MAE, MSE, DTW, RMSE, and MAPE, with the GRU model achieving the highest reliability and lowest errors (R2 = 0.99, MAE = 0.3, MSE = 2.9, DTW = 123.2, RMSE = 3.1, MAPE = 2.8%) under optimized parameters. The predicted EV counts and charging station numbers from GRU informed a neighborhood-level allocation of charging stations using Google Maps API, considering local population ratios. These results demonstrate the practical applicability of deep learning for regional infrastructure planning and provide a replicable framework for similar studies in other provinces.
(This article belongs to the Section Electrical, Electronics and Communications Engineering)

25 pages, 1928 KB  
Article
A Methodological Comparison of Forecasting Models Using KZ Decomposition and Walk-Forward Validation
by Khawla Al-Saeedi, Diwei Zhou, Andrew Fish, Katerina Tsakiri and Antonios Marsellos
Mathematics 2025, 13(21), 3410; https://doi.org/10.3390/math13213410 - 26 Oct 2025
Abstract
The accurate forecasting of surface air temperature (T2M) is crucial for climate analysis, agricultural planning, and energy management. This study proposes a novel forecasting framework grounded in structured temporal decomposition. Using the Kolmogorov–Zurbenko (KZ) filter, all predictor variables are decomposed into three physically interpretable components: long-term, seasonal, and short-term variations, forming an expanded multi-scale feature space. A central innovation of this framework lies in training a single unified model on the decomposed feature set to predict the original target variable, thereby enabling the direct learning of scale-specific driver–response relationships. We present the first comprehensive benchmarking of this architecture, demonstrating that it consistently enhances the performance of both regularized linear models (Ridge and Lasso) and tree-based ensemble methods (Random Forest and XGBoost). Under rigorous walk-forward validation, the framework substantially outperforms conventional, non-decomposed approaches—for example, XGBoost improves the coefficient of determination (R2) from 0.80 to 0.91. Furthermore, temporal decomposition enhances interpretability by enabling Ridge and Lasso models to achieve performance levels comparable to complex ensembles. Despite these promising results, we acknowledge several limitations: the analysis is restricted to a single geographic location and time span, and short-term components remain challenging to predict due to their stochastic nature and the weaker relevance of predictors. Additionally, the framework’s effectiveness may depend on the optimal selection of KZ parameters and the availability of sufficiently long historical datasets for stable walk-forward validation. Future research could extend this approach to multiple geographic regions, longer time series, adaptive KZ tuning, and specialized short-term modeling strategies. 
Overall, the proposed framework demonstrates that temporal decomposition of predictors offers a powerful inductive bias, establishing a robust and interpretable paradigm for surface air temperature forecasting.
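The KZ decomposition described in this abstract can be sketched as an iterated centered moving average. The sketch below is illustrative only: the window sizes and iteration counts are assumptions, not the settings used in the article.

```python
import numpy as np

def kz_filter(x, window, iterations):
    """Kolmogorov-Zurbenko filter: a centered moving average applied repeatedly."""
    kernel = np.ones(window) / window
    y = np.asarray(x, dtype=float)
    for _ in range(iterations):
        y = np.convolve(y, kernel, mode="same")
    return y

def decompose(x, long_win=101, seas_win=21, iters=3):
    """Split a series into long-term, seasonal, and short-term components.
    Window sizes here are illustrative, not the values used in the article."""
    x = np.asarray(x, dtype=float)
    long_term = kz_filter(x, long_win, iters)
    seasonal = kz_filter(x, seas_win, iters) - long_term
    short_term = x - long_term - seasonal  # residual: components sum back to x
    return long_term, seasonal, short_term
```

By construction the three components sum exactly to the original series, so stacking them as predictors expands the feature space without losing information — which is what lets a single unified model learn scale-specific driver–response relationships.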
27 pages, 1817 KB  
Article
Examination of Long-Term Temperature Change in Türkiye: Comparative Evaluation of an Advanced Quartile-Based Approach and Traditional Trend Detection Methods
by Omer Levend Asikoglu, Harun Alp, Ibrahim Temel and Pegah Kamali
Atmosphere 2025, 16(11), 1225; https://doi.org/10.3390/atmos16111225 - 22 Oct 2025
Abstract
The fact that 2023 and, subsequently, 2024 were the hottest years on record makes monitoring temperature change over time all the more important. In this study, trends in the mean, maximum, and minimum temperature data of 81 provinces in Türkiye were examined using three traditional methods (Mann–Kendall, linear regression analysis, and Sen's slope), one innovative method (ITA), and the QuarTrend (QT) method proposed in this study, which uses quartiles of the data series. The objectives of this research are (1) to determine and evaluate long-term temperature trends in Türkiye (1960–2022) and (2) to comparatively evaluate the trend results of the proposed QT method, traditional trend detection methods, and ITA. A statistically significant (p < 0.05) increasing trend was found in the mean (0.027 °C/year), maximum (0.031 °C/year), and minimum (0.038 °C/year) annual temperatures of Türkiye. While the traditional trend tests detected trends similar to those of ITA and QT for mean temperatures, ITA and QT detected more trends than the traditional methods for maximum and minimum temperatures. These results have direct implications for climate-change impacts in the study region and can support the development of climate-resilient, adaptive policies for effective water resource planning and management to sustain the environment and agricultural productivity in Türkiye.
(This article belongs to the Section Meteorology)
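For readers unfamiliar with the traditional tests compared in this abstract, Mann–Kendall and Sen's slope can be sketched in a few lines. This minimal version assumes no tied values (the tie-corrected variance term is omitted) and is not the authors' implementation.

```python
import math
import numpy as np

def mann_kendall(x):
    """Mann-Kendall trend test (no-ties variance); returns (z, two-sided p)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    # S statistic: sum of signs over all ordered pairs
    s = sum(np.sign(x[j] - x[i]) for i in range(n - 1) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    if s > 0:
        z = (s - 1) / math.sqrt(var_s)
    elif s < 0:
        z = (s + 1) / math.sqrt(var_s)
    else:
        z = 0.0
    p = 1.0 - math.erf(abs(z) / math.sqrt(2))  # equals 2 * (1 - Phi(|z|))
    return z, p

def sens_slope(x):
    """Sen's slope: median of all pairwise slopes, in units per time step."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    slopes = [(x[j] - x[i]) / (j - i) for i in range(n - 1) for j in range(i + 1, n)]
    return float(np.median(slopes))
```

Applied to an annual temperature series, a significant positive z with a Sen's slope of, say, 0.027 would correspond to the 0.027 °C/year warming rate reported above.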
23 pages, 731 KB  
Article
Research on Dynamic Hyperparameter Optimization Algorithm for University Financial Risk Early Warning Based on Multi-Objective Bayesian Optimization
by Yu Chao, Nur Fazidah Elias, Yazrina Yahya and Ruzzakiah Jenal
Forecasting 2025, 7(4), 61; https://doi.org/10.3390/forecast7040061 - 22 Oct 2025
Abstract
Financial sustainability in higher education is increasingly fragile due to policy shifts, rising costs, and funding volatility. Legacy early-warning systems based on static thresholds or rules struggle to adapt to these dynamics and often overlook fairness and interpretability, two essentials in public-sector governance. We propose a university financial risk early-warning framework that couples a causal-attention Transformer with Multi-Objective Bayesian Optimization (MBO). The optimizer searches a constrained Pareto frontier to jointly improve predictive accuracy (AUC↑), fairness (demographic parity gap, DP_Gap↓), and computational efficiency (time↓). A sparse kernel surrogate (SKO) accelerates convergence in high-dimensional tuning; a dual-head output (risk probability and health score) and SHAP-based attribution enhance transparency and regulatory alignment. On multi-year, multi-institution data, the approach surpasses mainstream baselines in AUC, reduces DP_Gap, and yields expert-consistent explanations. Methodologically, the design aligns with LLM-style time-series forecasting by exploiting causal masking and long-range dependencies while providing governance-oriented explainability. The framework delivers earlier, data-driven signals of financial stress, supporting proactive resource allocation, funding restructuring, and long-term planning in higher education finance.
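The Pareto search at the heart of the MBO step rests on non-domination. The sketch below extracts a Pareto front from already-evaluated hyperparameter configurations; it is a toy stand-in, not the paper's SKO surrogate, and assumes all objectives have been oriented so that larger is better (e.g. AUC, negated DP_Gap, negated time).

```python
import numpy as np

def pareto_front(points):
    """Indices of non-dominated points, with every objective maximized.
    A point is dominated if some other point is at least as good on all
    objectives and strictly better on at least one."""
    pts = np.asarray(points, dtype=float)
    front = []
    for i, p in enumerate(pts):
        dominated = any(
            np.all(q >= p) and np.any(q > p)
            for j, q in enumerate(pts) if j != i
        )
        if not dominated:
            front.append(i)
    return front
```

In a full MBO loop, the surrogate proposes new configurations, each is scored on (AUC, −DP_Gap, −time), and the front is recomputed; the final trade-off between accuracy, fairness, and runtime is then chosen from the surviving non-dominated set.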