Search Results (9,006)

Search Parameters:
Keywords = time forecasting

25 pages, 1520 KB  
Article
Dynamic Carbon-Aware Scheduling for Electric Vehicle Fleets Using VMD-BSLO-CTL Forecasting and Multi-Objective MPC
by Hongyu Wang, Zhiyu Zhao, Kai Cui, Zixuan Meng, Bin Li, Wei Zhang and Wenwen Li
Energies 2026, 19(2), 456; https://doi.org/10.3390/en19020456 (registering DOI) - 16 Jan 2026
Abstract
Accurate perception of dynamic carbon intensity is a prerequisite for low-carbon demand-side response. However, traditional grid-average carbon factors lack the spatio-temporal granularity required for real-time regulation. To address this, this paper proposes a “Prediction-Optimization” closed-loop framework for electric vehicle (EV) fleets. First, a hybrid forecasting model (VMD-BSLO-CTL) is constructed. By integrating Variational Mode Decomposition (VMD) with a CNN-Transformer-LSTM network optimized by the Blood-Sucking Leech Optimizer (BSLO), the model effectively captures multi-scale features. Validation on the UK National Grid dataset demonstrates its superior robustness against prediction horizon extension compared to state-of-the-art baselines. Second, a multi-objective Model Predictive Control (MPC) strategy is developed to guide EV charging. Applied to a real-world station-level scenario, the strategy navigates the trade-offs between user economy and grid stability. Simulation results show that the proposed framework simultaneously reduces economic costs by 4.17% and carbon emissions by 8.82%, while lowering the peak-valley difference by 6.46% and load variance by 11.34%. Finally, a cloud-edge collaborative deployment scheme indicates the engineering potential of the proposed approach for next-generation low-carbon energy management. Full article
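As a rough illustration of the kind of weighted-sum, multi-objective charging optimization described above (not the paper's actual MPC formulation), the sketch below schedules aggregate EV charging power against hypothetical price, carbon-intensity, and base-load forecasts using cvxpy; the horizon, weights, and limits are invented placeholders.

```python
# Illustrative weighted-sum MPC step for EV fleet charging (not the paper's model).
# Assumes forecasted price, carbon intensity, and base load arrays are given.
import cvxpy as cp
import numpy as np

T = 24                                    # look-ahead horizon in hours (assumed)
price = np.random.uniform(0.1, 0.3, T)    # hypothetical electricity price forecast
carbon = np.random.uniform(100, 400, T)   # hypothetical carbon intensity forecast (g/kWh)
base_load = np.random.uniform(50, 80, T)  # hypothetical non-EV station load (kW)

p = cp.Variable(T, nonneg=True)           # EV fleet charging power per hour (kW)
energy_needed = 300                       # kWh the fleet must receive over the horizon
p_max = 60                                # aggregate charger limit (kW)

total_load = base_load + p
cost = (price @ p                                                    # economic cost
        + 1e-3 * (carbon @ p)                                        # carbon cost, scaled
        + 1e-2 * cp.sum_squares(total_load - cp.sum(total_load) / T))  # load-variance penalty

prob = cp.Problem(cp.Minimize(cost),
                  [p <= p_max, cp.sum(p) == energy_needed])
prob.solve()
print("charging schedule (kW):", np.round(p.value, 1))
```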
26 pages, 16624 KB  
Article
Multi-Scale Photovoltaic Power Forecasting with WDT–CRMABIL–Fusion: A Two-Stage Hybrid Deep Learning Framework
by Reza Khodabakhshi Palandi, Loredana Cristaldi and Luca Martiri
Energies 2026, 19(2), 455; https://doi.org/10.3390/en19020455 (registering DOI) - 16 Jan 2026
Abstract
Ultra-short-term photovoltaic (PV) power forecasts are vital for secure grid operation as solar penetration rises. We propose a two-stage hybrid framework, WDT–CRMABIL–Fusion. In Stage 1, we apply a three-level discrete wavelet transform to PV power and key meteorological series (shortwave radiation and panel irradiance). We then forecast the approximation and detail sub-series using specialized component predictors: a 1D-CNN with dual residual multi-head attention (feature-wise and time-wise) together with a BiLSTM. In Stage 2, a compact dense fusion network recombines the component forecasts into the final PV power trajectory. We use 5-minute data from a PV plant in Milan and evaluate 5-, 10-, and 15-minute horizons. The proposed approach outperforms strong baselines (DCC+LSTM, CNN+LSTM, CNN+BiLSTM, CRMABIL direct, and WDT+CRMABIL direct). For the 5-minute horizon, it achieves MAE = 1.60 W and RMSE = 4.21 W with R2 = 0.943 and CORR = 0.973, compared with the best benchmark (MAE = 3.87 W; RMSE = 7.89 W). The gains persist across K-means++ weather clusters (rainy/sunny/cloudy) and across seasons. By combining explicit multi-scale decomposition, attention-based sequence learning, and learned fusion, WDT–CRMABIL–Fusion provides accurate and robust ultra-short-term PV forecasts suitable for storage dispatch and reserve scheduling. Full article
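A minimal sketch of the Stage-1 decomposition idea, using PyWavelets for a three-level discrete wavelet transform; the wavelet family ('db4'), the toy series, and the reconstruction step are assumptions rather than the paper's exact configuration.

```python
# Three-level wavelet decomposition of a PV power series into approximation and detail
# sub-series, each of which would feed its own component predictor in Stage 1.
import numpy as np
import pywt

pv_power = np.sin(np.linspace(0, 20, 1024)) + 0.1 * np.random.randn(1024)  # toy series

# wavedec returns [cA3, cD3, cD2, cD1]: one approximation + three detail coefficient sets
coeffs = pywt.wavedec(pv_power, wavelet="db4", level=3)
for name, c in zip(["cA3", "cD3", "cD2", "cD1"], coeffs):
    print(name, c.shape)

# Each sub-series can be reconstructed to full length before being forecast separately,
# e.g. keep only the approximation and zero out the details:
approx_only = pywt.waverec([coeffs[0]] + [np.zeros_like(c) for c in coeffs[1:]], "db4")
```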
32 pages, 22265 KB  
Article
A Hybrid Ensemble Learning Framework for Accurate Photovoltaic Power Prediction
by Wajid Ali, Farhan Akhtar, Asad Ullah and Woo Young Kim
Energies 2026, 19(2), 453; https://doi.org/10.3390/en19020453 (registering DOI) - 16 Jan 2026
Abstract
Accurate short-term forecasting of solar photovoltaic (PV) power output is essential for efficient grid integration and energy management, especially given the widespread global adoption of PV systems. To address this need, the present study introduces a scalable, interpretable ensemble learning model for PV power prediction built on the large PVOD v1.0 dataset, which encompasses more than 270,000 points from ten PV stations. The proposed methodology involves data preprocessing, feature engineering, and a hybrid ensemble model consisting of Random Forest, XGBoost, and CatBoost. Temporal features including hour, day, and month were created to reflect diurnal and seasonal characteristics, and feature importance analysis identified global irradiance, temperature, and temporal indices as key indicators. The hybrid ensemble model shows high predictive power, with R2 = 0.993, a Mean Absolute Error (MAE) = 0.227 kW, and a Root Mean Squared Error (RMSE) = 0.628 kW when applied to the PVOD v1.0 dataset for short-term PV power prediction. These results were obtained on standardized, multi-station, open-access data and are therefore not strictly comparable to previous studies that may have used other datasets, forecasting horizons, or feature sets. Rather than asserting numerical dominance over other approaches, this paper focuses on the practical utility of integrating well-known tree-based ensemble techniques with time-related feature engineering to derive interpretable and computationally efficient PV power prediction models for smart grid applications. The study shows that combining conventional ensemble methods with extensive temporal feature engineering yields consistently accurate PV forecasts, and the framework is reproducible and efficient to run, making it suitable for smart grid integration. Full article
(This article belongs to the Special Issue Advanced Control Strategies for Photovoltaic Energy Systems)
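For readers who want a concrete starting point, the following sketch illustrates the general pattern of hour/day/month feature engineering plus an averaged Random Forest / XGBoost / CatBoost ensemble; the column names, hyperparameters, and simple unweighted average are assumptions, not the study's exact pipeline.

```python
# Hybrid tree-ensemble sketch with temporal features (assumed column names, toy data).
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from xgboost import XGBRegressor
from catboost import CatBoostRegressor

FEATURES = ["irradiance", "temperature", "hour", "day", "month"]

def add_time_features(df: pd.DataFrame) -> pd.DataFrame:
    # assumes a DatetimeIndex; hour/day/month capture diurnal and seasonal patterns
    df = df.copy()
    df["hour"] = df.index.hour
    df["day"] = df.index.day
    df["month"] = df.index.month
    return df

def fit_hybrid(train: pd.DataFrame):
    X, y = add_time_features(train)[FEATURES], train["pv_power"]
    models = [
        RandomForestRegressor(n_estimators=300, random_state=0),
        XGBRegressor(n_estimators=300, learning_rate=0.05),
        CatBoostRegressor(iterations=300, verbose=0),
    ]
    for m in models:
        m.fit(X, y)
    return models

def predict_hybrid(models, test: pd.DataFrame):
    X = add_time_features(test)[FEATURES]
    preds = [m.predict(X) for m in models]
    return sum(preds) / len(preds)   # simple unweighted average of the three models

# toy demo
idx = pd.date_range("2024-06-01", periods=500, freq="h")
demo = pd.DataFrame({"irradiance": np.random.rand(500) * 1000,
                     "temperature": 20 + 10 * np.random.rand(500),
                     "pv_power": np.random.rand(500) * 5}, index=idx)
models = fit_hybrid(demo.iloc[:400])
print(predict_hybrid(models, demo.iloc[400:])[:5])
```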
41 pages, 1444 KB  
Article
A Physics-Informed Combinatorial Digital Twin for Value-Optimized Production of Petroleum Coke
by Vladimir V. Bukhtoyarov, Alexey A. Gorodov, Natalia A. Shepeta, Ivan S. Nekrasov, Oleg A. Kolenchukov, Svetlana S. Kositsyna and Artem Y. Mikhaylov
Energies 2026, 19(2), 451; https://doi.org/10.3390/en19020451 (registering DOI) - 16 Jan 2026
Abstract
Petroleum coke quality strongly influences refinery economics and downstream energy use, yet real-time control is constrained by slow quality assays and a 24–48 h lag in laboratory results. This study introduces a physics-informed combinatorial digital twin for value-optimized coking, aimed at improving energy efficiency and environmental performance through adaptive quality forecasting. The approach builds a modular library of 32 candidate equations grouped into eight quality parameters and links them via cross-parameter dependencies. A two-level optimization scheme is applied: a genetic algorithm selects the best model combination, while a secondary loop tunes parameters under a multi-objective fitness function balancing accuracy, interpretability, and computational cost. Validation on five clustered operating regimes (industrial patterns augmented with noise-perturbed synthetic data) shows that optimal model ensembles outperform single best models, achieving typical cluster errors of ~7–13% NMAE. The developed digital twin framework enables accurate prediction of coke quality parameters that are critical for its energy applications, such as volatile matter and sulfur content, which serve as direct proxies for estimating the net calorific value and environmental footprint of coke as a fuel. Full article
(This article belongs to the Special Issue AI-Driven Modeling and Optimization for Industrial Energy Systems)
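The combinatorial selection step can be pictured with a toy genetic algorithm that chooses one candidate model per quality parameter and scores each combination with an accuracy-plus-complexity fitness; everything below (library size, fitness form, GA settings) is an invented placeholder rather than the paper's implementation.

```python
# Toy GA over a combinatorial library: one candidate equation per quality parameter.
import random

N_PARAMS, N_CANDIDATES = 8, 4      # 8 quality parameters x 4 candidates = 32 models

def evaluate(individual):
    # Placeholder multi-objective fitness: pretend each (parameter, candidate) pair has
    # a known error, plus a complexity penalty. Lower is better.
    error = sum(((p * 7 + c) % 5) * 0.1 for p, c in enumerate(individual))
    complexity = sum(individual) * 0.05
    return error + complexity

def mutate(ind, rate=0.2):
    return [random.randrange(N_CANDIDATES) if random.random() < rate else g for g in ind]

def crossover(a, b):
    cut = random.randrange(1, N_PARAMS)
    return a[:cut] + b[cut:]

pop = [[random.randrange(N_CANDIDATES) for _ in range(N_PARAMS)] for _ in range(30)]
for gen in range(50):
    pop.sort(key=evaluate)
    elite = pop[:10]                       # keep the best combinations
    pop = elite + [mutate(crossover(random.choice(elite), random.choice(elite)))
                   for _ in range(20)]
print("best combination:", min(pop, key=evaluate))
```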
25 pages, 2079 KB  
Article
Predicting GPU Training Energy Consumption in Data Centers Using Task Metadata via Symbolic Regression
by Xiao Liao, Yiqian Li, Shaofeng Zhang, Xianzheng Wei and Jinlong Hu
Energies 2026, 19(2), 448; https://doi.org/10.3390/en19020448 - 16 Jan 2026
Abstract
With the rapid advancement of artificial intelligence (AI) technology, training deep neural networks has become a core computational task that consumes significant energy in data centers. Researchers often employ various methods to estimate the energy usage of data center clusters or servers to enhance energy management and conservation efforts. However, accurately predicting the energy consumption and carbon footprint of a specific AI task throughout its entire lifecycle before execution remains challenging. In this paper, we explore the energy consumption characteristics of AI model training tasks and propose a simple yet effective method for predicting neural network training energy consumption. This approach leverages training task metadata and applies genetic programming-based symbolic regression to forecast energy consumption prior to executing training tasks, distinguishing it from time series forecasting of data center energy consumption. We have developed an AI training energy consumption environment using the A800 GPU and models from the ResNet{18, 34, 50, 101}, VGG16, MobileNet, ViT, and BERT families to collect data for experimentation and analysis. The experimental analysis of energy consumption reveals that the consumption curve exhibits waveform characteristics resembling square waves, with distinct peaks and valleys. The prediction experiments demonstrate that the proposed method performs well, achieving mean relative errors (MRE) of 2.67% for valley energy, 8.42% for valley duration, 5.16% for peak power, and 3.64% for peak duration. Our findings indicate that, within a specific data center, the energy consumption of AI training tasks follows a predictable pattern. Furthermore, our proposed method enables accurate prediction and calculation of power load before model training begins, without requiring extensive historical energy consumption data. This capability facilitates optimized energy-saving scheduling in data centers in advance, thereby advancing the vision of green AI. Full article
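A compact sketch of the central idea, genetic-programming symbolic regression from task metadata to an energy estimate, using the gplearn library on synthetic data; the feature set and target below are assumptions, not the authors' measured A800 workloads.

```python
# Symbolic regression from training-task metadata to an energy estimate (toy data).
import numpy as np
from gplearn.genetic import SymbolicRegressor

rng = np.random.default_rng(0)
# toy metadata columns: [model parameters (millions), batch size, epochs]
X = rng.uniform([10, 16, 1], [350, 256, 90], size=(200, 3))
y = 0.02 * X[:, 0] * X[:, 2] + 0.001 * X[:, 1] * X[:, 2]   # synthetic "energy" (kWh)

sr = SymbolicRegressor(population_size=1000, generations=20,
                       function_set=("add", "sub", "mul", "div"),
                       parsimony_coefficient=0.001, random_state=0)
sr.fit(X, y)
print(sr._program)            # evolved closed-form expression
print(sr.predict(X[:3]))      # energy predictions available before any training run starts
```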
15 pages, 4123 KB  
Article
Cable Temperature Prediction Algorithm Based on the MSST-Net
by Xin Zhou, Yanhao Li, Shiqin Zhao, Xijun Wang, Lifan Chen, Minyang Cheng and Lvwen Huang
Electricity 2026, 7(1), 6; https://doi.org/10.3390/electricity7010006 - 16 Jan 2026
Abstract
To improve the accuracy of cable temperature anomaly prediction and ensure the reliability of urban distribution networks, this paper proposes a multi-scale spatiotemporal model (MSST-Net) for medium-voltage power cables in underground utility tunnels. The model addresses the multi-scale temporal dynamics and spatial correlations inherent in cable thermal behavior. Based on the monthly periodicity of cable temperature data, we preprocessed monitoring data from the KN1 and KN2 sections (medium-voltage power cable segments) of Guangzhou’s underground utility tunnel from 2023 to 2024, using the Isolation Forest algorithm to remove outliers, applying Min-Max normalization to eliminate dimensional differences, and selecting five key features including current load, voltage, and ambient temperature using Spearman’s correlation coefficient. Subsequently, we designed a multi-scale dilated causal convolutional module (DC-CNN) to capture local features, combined with a spatiotemporal dual-path Transformer to model long-range dependencies, and introduced relative position encoding to enhance temporal perception. The Sparrow Search Algorithm (SSA) was employed for global optimization of hyperparameters. Compared with five other mainstream algorithms, MSST-Net demonstrated higher accuracy in cable temperature prediction for power cables in the KN1 and KN2 sections of Guangzhou’s underground utility tunnel, achieving a coefficient of determination (R2), mean absolute error (MAE), and root mean square error (RMSE) of 0.942, 0.442 °C, and 0.596 °C, respectively. Compared to the basic Transformer model, the root mean square error of cable temperature was reduced by 0.425 °C. This model exhibits high accuracy in time series prediction and provides a reference for accurate short- and medium-term temperature forecasting of medium-voltage power cables in urban underground utility tunnels. Full article
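The multi-scale dilated causal convolution idea can be sketched in PyTorch as below; the channel counts, dilation rates, and 1x1 merge are assumptions and not the published MSST-Net architecture.

```python
# Multi-scale dilated causal convolution block: parallel branches with different
# dilations, left-padded so no future time steps leak into the prediction.
import torch
import torch.nn as nn

class DilatedCausalBlock(nn.Module):
    def __init__(self, in_ch: int, out_ch: int, kernel: int = 3, dilations=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList()
        self.pads = []
        for d in dilations:
            self.pads.append((kernel - 1) * d)             # causal left padding per branch
            self.branches.append(nn.Conv1d(in_ch, out_ch, kernel, dilation=d))
        self.merge = nn.Conv1d(out_ch * len(dilations), out_ch, 1)   # fuse the scales

    def forward(self, x):                                   # x: (batch, channels, time)
        outs = []
        for pad, conv in zip(self.pads, self.branches):
            outs.append(conv(nn.functional.pad(x, (pad, 0))))   # pad left only
        return torch.relu(self.merge(torch.cat(outs, dim=1)))

x = torch.randn(8, 5, 96)   # 8 sequences, 5 features (e.g. load, voltage, ambient temp), 96 steps
print(DilatedCausalBlock(5, 32)(x).shape)   # -> torch.Size([8, 32, 96])
```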
40 pages, 1968 KB  
Article
Large Model in Low-Altitude Economy: Applications and Challenges
by Jinpeng Hu, Wei Wang, Yuxiao Liu and Jing Zhang
Big Data Cogn. Comput. 2026, 10(1), 33; https://doi.org/10.3390/bdcc10010033 - 16 Jan 2026
Abstract
The integration of large models and multimodal foundation models into the low-altitude economy is driving a transformative shift, enabling intelligent, autonomous, and efficient operations for low-altitude vehicles (LAVs). This article provides a comprehensive analysis of the role these large models play within the smart integrated lower airspace system (SILAS), focusing on their applications across the four fundamental networks: facility, information, air route, and service. Our analysis yields several key findings, which pave the way for enhancing the application of large models in the low-altitude economy. By leveraging advanced capabilities in perception, reasoning, and interaction, large models are demonstrated to enhance critical functions such as high-precision remote sensing interpretation, robust meteorological forecasting, reliable visual localization, intelligent path planning, and collaborative multi-agent decision-making. Furthermore, we find that the integration of these models with key enabling technologies, including edge computing, sixth-generation (6G) communication networks, and integrated sensing and communication (ISAC), effectively addresses challenges related to real-time processing, resource constraints, and dynamic operational environments. Significant challenges, including sustainable operation under severe resource limitations, data security, network resilience, and system interoperability, are examined alongside potential solutions. Based on our survey, we discuss future research directions, such as the development of specialized low-altitude models, high-efficiency deployment paradigms, advanced multimodal fusion, and the establishment of trustworthy distributed intelligence frameworks. This survey offers a forward-looking perspective on this rapidly evolving field and underscores the pivotal role of large models in unlocking the full potential of the next-generation low-altitude economy. Full article
30 pages, 3291 KB  
Article
AI-Based Demand Forecasting and Load Balancing for Optimising Energy Use in Healthcare Systems: A Real Case Study
by Isha Patel and Iman Rahimi
Systems 2026, 14(1), 94; https://doi.org/10.3390/systems14010094 - 15 Jan 2026
Abstract
This paper addresses the critical need for efficient energy management in healthcare facilities, where fluctuating energy demands pose challenges to both operational reliability and sustainability objectives. Traditional energy management approaches often fall short in healthcare settings, resulting in inefficiencies and increased operational costs. To address this gap, the paper explores AI-driven methods for demand forecasting and load balancing and proposes an integrated framework combining Long Short-Term Memory (LSTM) networks, a genetic algorithm (GA), and SHAP (Shapley Additive Explanations), specifically tailored for healthcare energy management. While LSTM has been widely applied in time-series forecasting, its use for healthcare energy demand prediction remains relatively underexplored. In this study, LSTM is shown to significantly outperform conventional forecasting models, including ARIMA and Prophet, in capturing complex and non-linear demand patterns. Experimental results demonstrate that the LSTM model achieved a Mean Absolute Error (MAE) of 21.69, a Root Mean Square Error (RMSE) of 29.96, and an R2 of approximately 0.98, compared to Prophet (MAE: 59.78, RMSE: 81.22, R2 ≈ 0.86) and ARIMA (MAE: 87.73, RMSE: 125.22, R2 ≈ 0.66), confirming its superior predictive performance. The genetic algorithm is employed both to support forecasting optimisation and to enhance load balancing strategies, enabling adaptive energy allocation under dynamic operating conditions. Furthermore, SHAP analysis is used to provide interpretable, within-model insights into feature contributions, improving transparency and trust in AI-driven energy decision-making. Overall, the proposed LSTM–GA–SHAP framework improves forecasting accuracy, supports efficient energy utilisation, and contributes to sustainability in healthcare environments. Future work will explore real-time deployment and further integration with reinforcement learning to enable continuous optimisation. Full article
(This article belongs to the Section Artificial Intelligence and Digital Systems Engineering)
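To make the LSTM-versus-classical comparison concrete, here is a minimal sliding-window LSTM forecaster with MAE/RMSE evaluation on a synthetic demand series; the window length, layer sizes, and data are assumptions, not the hospital dataset or the tuned model from the study.

```python
# Minimal sliding-window LSTM demand forecaster (Keras), evaluated with MAE and RMSE.
import numpy as np
import tensorflow as tf

def make_windows(series, window=24):
    X = np.stack([series[i:i + window] for i in range(len(series) - window)])
    y = series[window:]
    return X[..., None], y                      # shape: (samples, window, 1 feature)

series = np.sin(np.arange(2000) * 2 * np.pi / 24) + 0.1 * np.random.randn(2000)  # toy demand
X, y = make_windows(series)

model = tf.keras.Sequential([
    tf.keras.Input(shape=(X.shape[1], 1)),
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mae")
model.fit(X, y, epochs=5, batch_size=64, verbose=0)

pred = model.predict(X[-48:], verbose=0).ravel()
mae = np.mean(np.abs(pred - y[-48:]))
rmse = np.sqrt(np.mean((pred - y[-48:]) ** 2))
print(f"MAE={mae:.3f}  RMSE={rmse:.3f}")
```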
30 pages, 5097 KB  
Article
The Impact of Electric Charging Unit Conversion on the Performance of Fuel Stations Located in Urban Areas: A Sustainable Approach
by Merve Yetimoğlu, Mustafa Karaşahin and Mehmet Sinan Yıldırım
Sustainability 2026, 18(2), 893; https://doi.org/10.3390/su18020893 - 15 Jan 2026
Abstract
The rapid increase in electric vehicle (EV) ownership necessitates the adaptation of fuel stations to charging infrastructure and the re-evaluation of capacity planning. The literature mostly examines demand forecasting and installation costs; station-scale queue analyses supported by field data remain limited. This study examines the integration of EV charging into fuel stations through simulation-based capacity analyses under current conditions. Scenarios in which one or two dual-hose pumps at a fuel station located on the Turkey–Istanbul E-5 highway side-road are converted into charging units were evaluated using a discrete-event microsimulation model. The simulation inputs were derived from field observations and survey data: the hourly arrival rates and dwell times of internal combustion engine vehicles (ICEVs) were determined through field measurements, while EV charging durations were assessed by jointly analyzing field observations and survey data. Station capacity and service performance were evaluated under scenarios representing EV shares of 5%, 10%, and 20% of the country’s passenger vehicle fleet. Simulation results showed that the average number of waiting vehicles increases as the EV share rises; for example, 15.6 (±0.84) EVs were waiting within the station in the 10% EV share scenario, compared with 34.06 (±1.23) EVs in the 20% scenario. These queues constrict internal circulation within the station, limiting the maneuverability of ICEVs and causing delays in overall service operations. Furthermore, when two dual-hose pumps are replaced with charging units, noticeable increases in waiting times emerge, particularly during the evening peak: 5.88% of ICEVs experienced queuing between 17:00–18:00, rising to 12.33% between 18:00–19:00. In conclusion, this study provides a practical and robust model for short- and medium-term capacity planning and offers data-driven, actionable insights for decision-makers during the transition of fuel stations to EV charging infrastructure. Full article
(This article belongs to the Section Sustainable Transportation)
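A toy discrete-event sketch (SimPy) of the station scenario: ICEVs queue for the remaining pumps while EVs queue for a converted charging unit; the arrival rates, service times, and capacities are invented placeholders rather than the field-measured inputs used in the study.

```python
# Discrete-event queueing sketch of a fuel station with pumps and converted chargers.
import random
import simpy

WAIT_TIMES = {"icev": [], "ev": []}

def vehicle(env, kind, resource, service_time):
    arrive = env.now
    with resource.request() as req:
        yield req                                   # wait for a free pump / charger
        WAIT_TIMES[kind].append(env.now - arrive)
        yield env.timeout(service_time())           # refuel or charge

def arrivals(env, kind, resource, interarrival, service_time):
    while True:
        yield env.timeout(random.expovariate(1.0 / interarrival))   # Poisson arrivals
        env.process(vehicle(env, kind, resource, service_time))

env = simpy.Environment()
pumps = simpy.Resource(env, capacity=3)        # remaining dual-hose pumps (assumed)
chargers = simpy.Resource(env, capacity=2)     # converted charging points (assumed)
env.process(arrivals(env, "icev", pumps, interarrival=2.0,
                     service_time=lambda: random.uniform(3, 6)))      # minutes
env.process(arrivals(env, "ev", chargers, interarrival=12.0,
                     service_time=lambda: random.uniform(20, 40)))    # minutes
env.run(until=12 * 60)                         # one 12-hour operating day
for k, w in WAIT_TIMES.items():
    print(k, "mean wait (min):", round(sum(w) / max(len(w), 1), 2))
```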
23 pages, 3280 KB  
Article
Research on Short-Term Photovoltaic Power Prediction Method Using Adaptive Fusion of Multi-Source Heterogeneous Meteorological Data
by Haijun Yu, Jinjin Ding, Yuanzhi Li, Lijun Wang, Weibo Yuan, Xunting Wang and Feng Zhang
Energies 2026, 19(2), 425; https://doi.org/10.3390/en19020425 - 15 Jan 2026
Abstract
High-precision short-term photovoltaic (PV) power prediction has become a critical technology for ensuring grid accommodation capacity, optimizing dispatching decisions, and enhancing plant economic benefits. This paper proposes a long short-term memory (LSTM)-based short-term PV power prediction method with genetic algorithm (GA)-optimized adaptive fusion of space-based cloud imagery and ground-based meteorological data. Satellite cloud imagery is effectively integrated into the PV power prediction system, and the proposed method addresses the low accuracy, poor robustness, and inadequate adaptation to complex weather associated with using a single type of meteorological data for PV power prediction. The multi-source heterogeneous data are preprocessed through outlier detection and missing value imputation, Spearman correlation analysis is employed to identify meteorological attributes highly correlated with PV power output, and a dedicated dataset compatible with LSTM-based prediction models is constructed. An LSTM prediction model with GA-based adaptive fusion of the multi-source heterogeneous data is then built and shown to yield precise short-term PV power predictions. Experimental results demonstrate that the proposed method outperforms single-source LSTM, single-source CNN-LSTM, and dual-source CNN-Transformer models in prediction accuracy, achieving an RMSE of 0.807 kWh and a MAPE of 6.74% on a critical test day. The proposed method enables real-time precision forecasting for grid dispatch centers and lightweight edge deployment at PV plants, enhancing renewable energy integration while effectively mitigating grid instability from power fluctuations. Full article
29 pages, 16318 KB  
Article
A Novel Algorithm for Determining the Window Size in Power Load Prediction
by Haobin Liang, Zefang Song, Yiran Liu and Yiwei Huang
Mathematics 2026, 14(2), 304; https://doi.org/10.3390/math14020304 - 15 Jan 2026
Abstract
The sliding window method is a commonly used data processing technique in time series forecasting tasks, and determining the appropriate window size is a crucial step in constructing predictive models. However, window size parameters are often set based on empirical knowledge, making the scientific determination of the optimal sliding window size highly significant. This paper proposes an algorithm for optimizing window size based on sample entropy, which is applicable not only to original undecomposed sequences but also to decomposed sequences. The proposed algorithm has been validated using the open-source Elia grid data across multiple model architectures, including recurrent (GRU/LSTM) and attention-based (Transformer) networks. Experimental results demonstrate that the algorithm effectively determines an optimal window size of 106. The optimized window consistently leads to superior prediction performance, with the CEEMD-GRU model achieving a MAPE of 0.256, an RMSE of 22.529, and an MAE of 18.186, representing reductions of over 5% compared to the undecomposed benchmark. Furthermore, the enhancement is more significant for decomposed sequences, and the algorithm’s efficacy is validated across different neural network architectures (e.g., LSTM, GRU, Transformer), confirming its practical utility and generalizability. Full article
(This article belongs to the Section E1: Mathematics and Computer Science)
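Since the algorithm is built on sample entropy, a hand-rolled version of that complexity measure is sketched below; how the entropy values are then mapped to a specific window size (e.g., 106) is the paper's algorithm and is not reproduced here.

```python
# Sample entropy SampEn(m, r): -ln(A/B), where B counts template matches of length m and
# A counts matches of length m+1 within tolerance r (Chebyshev distance), self-matches excluded.
import numpy as np

def sample_entropy(x, m=2, r=None):
    x = np.asarray(x, dtype=float)
    if r is None:
        r = 0.2 * x.std()                     # common default tolerance
    def count_matches(length):
        templates = np.array([x[i:i + length] for i in range(len(x) - length + 1)])
        dists = np.max(np.abs(templates[:, None] - templates[None, :]), axis=2)
        return (dists <= r).sum() - len(templates)   # drop self-matches on the diagonal
    B = count_matches(m)
    A = count_matches(m + 1)
    return -np.log(A / B) if A > 0 and B > 0 else np.inf

load = np.sin(np.arange(500) * 2 * np.pi / 96) + 0.05 * np.random.randn(500)  # toy load series
print("SampEn:", round(sample_entropy(load), 3))
```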
25 pages, 2315 KB  
Article
A New Energy-Saving Management Framework for Hospitality Operations Based on Model Predictive Control Theory
by Juan Huang and Aimi Binti Anuar
Tour. Hosp. 2026, 7(1), 23; https://doi.org/10.3390/tourhosp7010023 - 15 Jan 2026
Abstract
To address the pervasive challenges of resource inefficiency and static management in the hospitality sector, this study proposes a novel management framework that synergistically integrates Model Predictive Control (MPC) with Green Human Resource Management (GHRM). Methodologically, the framework establishes a dynamic closed-loop architecture that cyclically links environmental sensing, predictive optimization, plan execution and organizational learning. The MPC component generates data-driven forecasts and optimal control signals for resource allocation. Crucially, these technical outputs are operationally translated into specific, actionable directives for employees through integrated GHRM practices, including real-time task allocation via management systems, incentives-aligned performance metrics, and structured environmental training. This practical integration ensures that predictive optimization is directly coupled with human behavior. Theoretically, this study redefines hospitality operations as adaptive sociotechnical systems, and advances the hospitality energy-saving management framework by formally incorporating human execution feedback, predictive control theory, and dynamic optimization theory. Empirical validation across a sample of 40 hotels confirms the framework’s effectiveness, demonstrating significant reductions in daily average water consumption by 15.5% and electricity usage by 13.6%. These findings provide a robust, data-driven paradigm for achieving sustainable operational transformations in the hospitality industry. Full article
26 pages, 2786 KB  
Article
Time-Series Modeling and LLM-Based Agents for Peak Energy Management in Smart Campus Environments
by Mossab Batal, Youness Tace, Hassna Bensag, Sanaa El Filali and Mohamed Tabaa
Sustainability 2026, 18(2), 875; https://doi.org/10.3390/su18020875 - 15 Jan 2026
Abstract
Smart campuses increasingly rely on data-driven operations, but growing energy demand puts their control over costs and sustainability at risk. This study addresses the challenge of anticipating and managing energy consumption peaks in multi-campus environments by proposing a hybrid framework that combines advanced time-series forecasting models with a large language model (LLM)-driven multi-agent system. Based on the UNICON dataset, LSTM, CNN, GRU, and a combined architecture are trained and compared in terms of MAE and RMSE, with the hybrid configuration achieving the best forecasting performance and the lowest loss values. For the identification of critical periods, we employed a median-thresholding strategy that categorizes consumption into low, normal, and extreme periods, allowing peak mitigation actions to be targeted. We also introduce an LLM-based multi-agent system, comprising a data aggregator, a forecaster, and a policy advisor, which creates actionable, context-informed policies. In addition, we compare LLMs (Qwen-2.5, Gemma-2, Phi-4, Mistral, Llama-3.3) in terms of context accuracy, response relevance, semantic similarity, and retrieval/recall accuracy and fidelity, with Llama-3.3 achieving the best overall results. The framework shows strong potential not only for energy consumption forecasting but also for developing precise policies to manage energy consumption peaks effectively. Full article
(This article belongs to the Section Environmental Sustainability and Applications)
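The median-thresholding step can be illustrated with a few lines of pandas: consumption is split into low, normal, and extreme bands around the median. The 0.8x/1.2x band multipliers below are assumptions; the paper defines its own thresholds.

```python
# Median-based categorization of a toy hourly consumption series into low/normal/extreme.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
idx = pd.date_range("2024-01-01", periods=24 * 7, freq="h")
load = pd.Series(50 + 20 * np.sin(np.arange(len(idx)) * 2 * np.pi / 24)
                 + rng.normal(0, 5, len(idx)), index=idx, name="kWh")

median = load.median()
labels = pd.cut(load,
                bins=[-np.inf, 0.8 * median, 1.2 * median, np.inf],   # assumed band edges
                labels=["low", "normal", "extreme"])
print(labels.value_counts())
print(load[labels == "extreme"].head())   # periods that would trigger peak mitigation actions
```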
21 pages, 1337 KB  
Article
The Health-Wealth Gradient in Labor Markets: Integrating Health, Insurance, and Social Metrics to Predict Employment Density
by Dingyuan Liu, Qiannan Shen and Jiaci Liu
Computation 2026, 14(1), 22; https://doi.org/10.3390/computation14010022 - 15 Jan 2026
Abstract
Labor market forecasting relies heavily on economic time-series data, often overlooking the “health–wealth” gradient that links population health to workforce participation. This study develops a machine learning framework integrating non-traditional health and social metrics to predict state-level employment density. Methods: We constructed a multi-source longitudinal dataset (2014–2024) by aggregating county-level Quarterly Census of Employment and Wages (QCEW) data with County Health Rankings to the state level. Using a time-aware split to evaluate performance across the COVID-19 structural break, we compared LASSO, Random Forest, and regularized XGBoost models, employing SHAP values for interpretability. Results: The tuned, regularized XGBoost model achieved strong out-of-sample performance (Test R2 = 0.800). A leakage-safe stacked Ridge ensemble yielded comparable performance (Test R2 = 0.827), while preserving the interpretability of the underlying tree model used for SHAP analysis. Full article
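A short sketch of the interpretability step, SHAP values for an XGBoost regressor fitted on a time-aware split; the feature names and toy data are placeholders, not the QCEW or County Health Rankings variables used in the study.

```python
# XGBoost regression with a time-aware split and SHAP-based global feature importance.
import numpy as np
import pandas as pd
import shap
from xgboost import XGBRegressor

rng = np.random.default_rng(0)
X = pd.DataFrame({
    "uninsured_rate": rng.uniform(0.05, 0.25, 500),     # hypothetical health/insurance metrics
    "poor_health_pct": rng.uniform(0.1, 0.3, 500),
    "median_income": rng.uniform(40_000, 90_000, 500),
})
y = 0.5 * X["median_income"] / 1e4 - 8 * X["uninsured_rate"] + rng.normal(0, 0.2, 500)

split = 400                                    # time-aware split: train only on earlier rows
model = XGBRegressor(n_estimators=300, max_depth=4, reg_lambda=2.0)
model.fit(X.iloc[:split], y[:split])

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X.iloc[split:])
print(pd.Series(np.abs(shap_values).mean(axis=0), index=X.columns)
      .sort_values(ascending=False))           # mean |SHAP| = global feature importance
```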
27 pages, 4670 KB  
Article
An Efficient Remote Sensing Index for Soybean Identification: Enhanced Chlorophyll Index (NRLI)
by Dongmei Lyu, Chenlan Lai, Bingxue Zhu, Zhijun Zhen and Kaishan Song
Remote Sens. 2026, 18(2), 278; https://doi.org/10.3390/rs18020278 - 14 Jan 2026
Abstract
Soybean is a key global crop for food and oil production, playing a vital role in ensuring food security and supplying plant-based proteins and oils. Accurate information on soybean distribution is essential for yield forecasting, agricultural management, and policymaking. In this study, we developed an Enhanced Chlorophyll Index (NRLI) to improve the separability between soybean and maize—two spectrally similar crops that often confound traditional vegetation indices. The proposed NRLI integrates red-edge, near-infrared, and green spectral information, effectively capturing variations in chlorophyll and canopy water content during key phenological stages, particularly from flowering to pod setting and maturity. Building upon this foundation, we further introduce a pixel-wise compositing strategy based on the peak phase of NRLI to enhance the temporal adaptability and spectral discriminability in crop classification. Unlike conventional approaches that rely on imagery from fixed dates, this strategy dynamically analyzes annual time-series data, enabling phenology-adaptive alignment at the pixel level. Comparative analysis reveals that NRLI consistently outperforms existing vegetation indices, such as the Normalized Difference Vegetation Index (NDVI), Enhanced Vegetation Index (EVI), and Greenness and Water Content Composite Index (GWCCI), across representative soybean-producing regions in multiple countries. It improves overall accuracy (OA) by approximately 10–20 percentage points, achieving accuracy rates exceeding 90% in large, contiguous cultivation areas. To further validate the robustness of the proposed index, benchmark comparisons were conducted against the Random Forest (RF) machine learning algorithm. The results demonstrated that the single-index NRLI approach achieved competitive performance, comparable to the multi-feature RF model, with accuracy differences generally within 1–2%. In some regions, NRLI even outperformed RF. This finding highlights NRLI as a computationally efficient alternative to complex machine learning models without compromising mapping precision. This study provides a robust, scalable, and transferable single-index approach for large-scale soybean mapping and monitoring using remote sensing. Full article
(This article belongs to the Special Issue Advances in Remote Sensing for Smart Agriculture and Digital Twins)