Search Results (614)

Search Parameters:
Keywords = temperature awareness

17 pages, 1875 KB  
Article
Impact of Blasting Scenarios for In-Pit Ramp Construction on the Fumes Emission
by Michał Dudek, Michał Dworzak and Andrzej Biessikirski
Sustainability 2026, 18(2), 633; https://doi.org/10.3390/su18020633 - 8 Jan 2026
Abstract
Blasting operations associated with in-pit ramp construction in open-pit mines generate gaseous emissions originating from both explosive detonation and diesel-powered drilling and loading equipment. The research object of this study is the ramp construction process in an operating open-pit quarry, and the objective is to comparatively evaluate gaseous emissions across alternative blasting scenarios to support emission-aware operational decision-making. Five realistic blasting scenarios are assessed using a combined methodology that integrates laboratory fume index data for ANFO, emulsion explosives, and dynamite with diesel-emission estimates derived from non-road mobile machinery inventory factors. Laboratory detonation tests provide standardized upper-bound emission potentials for COx and NOx, while drilling and loading emissions are quantified using a fuel-based inventory approach. The results show that the dominant contribution to total mass emissions arises from diesel combustion during drilling and loading, consistent with studies on real-world non-road mobile machinery inventory factors. Detonation fumes, although chemically concentrated and relevant for short-term exposure risk, represent a smaller share of the mass-based emission budget. Among the explosive types, bulk emulsions consistently exhibit lower toxic-gas emission indices than ANFO, attributable to their more uniform microstructure and a moderated reaction temperature. Dynamite demonstrates the lowest fume potential but is operationally less scalable for large open-pit patterns due to manual loading. Uncertainty analysis indicates that both laboratory-derived fume indices and diesel emission factors introduce systematic variability: laboratory tests tend to overestimate detonation fumes, while inventory-based diesel estimates may underestimate real-world NOx and particulate emissions. 
Notwithstanding these limitations, the scenario-based framework developed here provides a robust basis for comparative evaluation of blasting strategies during ramp construction. The findings support increased use of emulsion explosives and emphasize the importance of moisture management, field-integrated gas monitoring, and improved characterization of diesel-equipment duty cycles.
(This article belongs to the Special Issue Advanced Materials and Technologies for Environmental Sustainability)

15 pages, 1464 KB  
Review
Convergent Sensing: Integrating Biometric and Environmental Monitoring in Next-Generation Wearables
by Maria Guarnaccia, Antonio Gianmaria Spampinato, Enrico Alessi and Sebastiano Cavallaro
Biosensors 2026, 16(1), 43; https://doi.org/10.3390/bios16010043 - 4 Jan 2026
Viewed by 232
Abstract
The convergence of biometric and environmental sensing represents a transformative advancement in wearable technology, moving beyond single-parameter tracking towards a holistic, context-aware paradigm for health monitoring. This review comprehensively examines the landscape of multi-modal wearable devices that simultaneously capture physiological data, such as electrodermal activity (EDA), electrocardiogram (ECG), heart rate variability (HRV), and body temperature, alongside environmental exposures, including air quality, ambient temperature, and atmospheric pressure. We analyze the fundamental sensing technologies, data fusion methodologies, and the critical importance of contextualizing physiological signals within an individual’s environment to disambiguate health states. A detailed survey of existing commercial and research-grade devices highlights a growing, yet still limited, integration of these domains. As a central case study, we present an integrated prototype, which exemplifies this approach by fusing data from inertial, environmental, and physiological sensors to generate intuitive, composite indices for stress, fitness, and comfort, visualized via a polar graph. Finally, we discuss the significant challenges and future directions for this field, including clinical validation, data security, and power management, underscoring the potential of convergent sensing to revolutionize personalized, predictive healthcare. Full article
(This article belongs to the Special Issue Wearable Sensors and Systems for Continuous Health Monitoring)

19 pages, 1817 KB  
Article
Volatiles Generated in the Pyrolysis of Greenhouse Vegetable Waste
by Sergio Medina, Ullrich Stahl, Fernando Gómez, Angela N. García and Antonio Marcilla
Biomass 2026, 6(1), 2; https://doi.org/10.3390/biomass6010002 - 4 Jan 2026
Viewed by 102
Abstract
Waste valorization is a necessary activity for the development of the circular economy. Pyrolysis as a waste valorization pathway has been extensively studied, as it allows for obtaining different fractions with diverse and valuable applications. The joint analysis of results generated by thermogravimetry (TGA) and analytical pyrolysis (Py-GC/MS) allows for the characterization of waste materials and the assessment of their potential as sources of energy, value-added chemicals and biochar, as well as providing awareness for avoiding potential harmful emissions if the process is performed without proper control or management. In the present study, these techniques were employed on three greenhouse plant residues (broccoli, tomato, and zucchini). Analytical pyrolysis was conducted at eight temperatures ranging from 100 to 800 °C, investigating the evolution of compounds grouped by their functional groups, as well as the predominant compounds of each biomass. It was concluded that the decomposition of biomass initiates between 300–400 °C, with the highest generation of volatiles occurring around 500–600 °C, where pyrolytic compounds span a wide range of molecular weights. The production of organic acids, ketones, alcohols, and furan derivatives peaks around 500 °C, whereas alkanes, alkenes, benzene derivatives, phenols, pyrroles, pyridines, and other nitrogenous compounds increase with temperature up to 700–800 °C. The broccoli biomass exhibited a higher yield of alcohols and furan derivatives, while zucchini and tomato plants, compared to broccoli, were notable for their nitrogen-containing groups (pyridines, pyrroles, and other nitrogenous compounds). Full article

29 pages, 4094 KB  
Article
Hybrid LSTM–DNN Architecture with Low-Discrepancy Hypercube Sampling for Adaptive Forecasting and Data Reliability Control in Metallurgical Information-Control Systems
by Jasur Sevinov, Barnokhon Temerbekova, Gulnora Bekimbetova, Ulugbek Mamanazarov and Bakhodir Bekimbetov
Processes 2026, 14(1), 147; https://doi.org/10.3390/pr14010147 - 1 Jan 2026
Viewed by 228
Abstract
The study focuses on the design of an intelligent information-control system (ICS) for metallurgical production, aimed at robust forecasting of technological parameters and automatic self-adaptation under noise, anomalies, and data drift. The proposed architecture integrates a hybrid LSTM–DNN model with low-discrepancy hypercube sampling using Sobol and Halton sequences to ensure uniform coverage of operating conditions and the hyperparameter space. The processing pipeline includes preprocessing and temporal synchronization of measurements, a parameter identification module, anomaly detection and correction using an ε-threshold scheme, and a decision-making and control loop. In simulation scenarios modeling the dynamics of temperature, pressure, level, and flow (1 min sampling interval, injected anomalies, and measurement noise), the hybrid model outperformed GRU and CNN architectures: a determination coefficient of R2 > 0.92 was achieved for key indicators, MAE and RMSE improved by 7–15%, and the proportion of unreliable measurements after correction decreased to <2% (compared with 8–12% without correction). The experiments also demonstrated accelerated adaptation during regime changes. The scientific novelty lies in combining recurrent memory and deep nonlinear approximation with deterministic experimental design in the hypercube of states and hyperparameters, enabling reproducible self-adaptation of the ICS and increased noise robustness without upgrading the measurement hardware. Modern metallurgical information-control systems operate under non-stationary regimes and limited measurement reliability, which reduces the robustness of conventional forecasting and decision-support approaches. To address this issue, a hybrid LSTM–DNN architecture combined with low-discrepancy hypercube probing and anomaly-aware data correction is proposed. 
The proposed approach is distinguished by the integration of hybrid neural forecasting, deterministic hypercube-based adaptation, and anomaly-aware data correction within a unified information-control loop for non-stationary industrial processes.
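The low-discrepancy probing of the hyperparameter hypercube described above can be illustrated with a pure-Python Halton sequence (the paper pairs Halton with Sobol points). This is a sketch of the general technique, not the authors' implementation, and the hyperparameter names and bounds below are hypothetical:

```python
def van_der_corput(n: int, base: int) -> float:
    """Radical inverse of n in the given base: one Halton coordinate."""
    q, denom = 0.0, 1.0
    while n:
        n, rem = divmod(n, base)
        denom *= base
        q += rem / denom
    return q

def halton(n_points, bounds, bases=(2, 3, 5)):
    """Low-discrepancy points scaled into per-dimension [lo, hi] bounds."""
    pts = []
    for i in range(1, n_points + 1):
        unit = [van_der_corput(i, b) for b in bases[:len(bounds)]]
        pts.append([lo + u * (hi - lo) for u, (lo, hi) in zip(unit, bounds)])
    return pts

# hypothetical ranges: LSTM units, dropout rate, learning rate
grid = halton(16, [(32, 256), (0.0, 0.5), (1e-4, 1e-2)])
```

Unlike independent random draws, consecutive Halton points fill the box evenly, so a small budget of trial configurations still covers the operating-condition space.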

30 pages, 1062 KB  
Article
Context-Aware Emotion Gating and Modulation for Fine-Grained Sentiment Classification
by Anupama Udayangani Gunathilaka Thennakoon Mudiyanselage, Jinglan Zhang and Yeufeng Li
Mach. Learn. Knowl. Extr. 2026, 8(1), 9; https://doi.org/10.3390/make8010009 - 31 Dec 2025
Viewed by 197
Abstract
Fine-grained sentiment analysis requires a deep understanding of emotional intensity in the text to distinguish subtle shifts in polarity, such as moving from positive to more positive or from negative to more negative, and to clearly separate emotionally neutral statements from polarized expressions, especially in short or contextually sparse texts such as social media posts. While recent advances combine deep semantic encoding with context-aware architectures, such as Bidirectional Long Short-Term Memory (BiLSTM) and Convolutional Neural Networks (CNNs), many models still struggle to detect nuanced emotional cues, particularly in short texts, due to the limited contextual information, subtle polarity shifts, and overlapping affective expressions, which ultimately hinder performance and reduce a model’s ability to make fine-grained sentiment distinctions. To address this challenge, we propose an Emotion-Aware Bidirectional Gating Network (Electra-BiG-Emo) that improves sentiment classification and subtle sentiment differentiation by learning contextual emotion representations and refining them with auxiliary emotional signals. Our model employs an asymmetric gating mechanism within a BiLSTM to dynamically capture both early and late contextual semantics. The gates are temperature-controlled, enabling adaptive modulation of emotion priors derived from Reddit post datasets to enhance context-aware emotion representation. These soft emotional signals are reweighted based on context, enabling the model to amplify or suppress emotions in the presence of an ambiguous context. This approach advances fine-grained sentiment understanding by embedding emotional awareness directly into the learning process. Ablation studies confirm the complementary roles of semantic encoding, context modeling, and emotion modulation.
Further, our approach achieves competitive performance on the SemEval-2017 Task 4C, Twitter US Airline, and SST5 datasets compared with state-of-the-art methods, particularly excelling in detecting subtle emotional variations and classifying short, semantically sparse texts. Gating and modulation analyses reveal that emotion-aware gating enhances interpretability and reinforces the value of explicit emotion modeling in fine-grained sentiment tasks.
(This article belongs to the Section Data)
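As a rough illustration of temperature-controlled modulation of emotion priors (not the Electra-BiG-Emo architecture itself): a sigmoid gate driven by a context score scales the priors, and a temperature-scaled softmax controls how peaked the resulting distribution is. All names and values here are illustrative:

```python
import math

def softmax(xs, temperature=1.0):
    """Temperature-scaled softmax: T < 1 sharpens, T > 1 flattens."""
    zs = [x / temperature for x in xs]
    m = max(zs)  # subtract max for numerical stability
    exps = [math.exp(z - m) for z in zs]
    s = sum(exps)
    return [e / s for e in exps]

def modulate_priors(emotion_priors, context_score, temperature):
    """Amplify or suppress soft emotion priors via a context gate (sketch)."""
    gate = 1.0 / (1.0 + math.exp(-context_score))  # sigmoid in (0, 1)
    return softmax([gate * p for p in emotion_priors], temperature)

sharp = modulate_priors([2.0, 1.0, 0.5], context_score=1.0, temperature=0.5)
flat  = modulate_priors([2.0, 1.0, 0.5], context_score=1.0, temperature=2.0)
```

With a confident context the low temperature concentrates mass on the dominant emotion; with an ambiguous context a higher temperature keeps the distribution soft.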

30 pages, 949 KB  
Article
WinStat: A Family of Trainable Positional Encodings for Transformers in Time Series Forecasting
by Cristhian Moya-Mota, Ignacio Aguilera-Martos, Diego García-Gil and Julián Luengo
Mach. Learn. Knowl. Extr. 2026, 8(1), 7; https://doi.org/10.3390/make8010007 - 29 Dec 2025
Viewed by 199
Abstract
Transformers for time series forecasting rely on positional encoding to inject temporal order into the permutation-invariant self-attention mechanism. Classical sinusoidal absolute encodings are fixed and purely geometric; learnable absolute encodings often overfit and fail to extrapolate, while relative or advanced schemes can impose substantial computational overhead without being sufficiently tailored to temporal data. This work introduces a family of window-statistics positional encodings that explicitly incorporate local temporal semantics into the representation of each timestamp. The base variant (WinStat) augments inputs with statistics computed over a sliding window; WinStatLag adds explicit lag-difference features; and hybrid variants (WinStatFlex, WinStatTPE, WinStatSPE) learn soft mixtures of window statistics with absolute, learnable, and semantic positional signals, preserving the simplicity of additive encodings while adapting to local structure and informative lags. We evaluate proposed encodings on four heterogeneous benchmarks against state-of-the-art proposals: Electricity Transformer Temperature (hourly variants), Individual Household Electric Power Consumption, New York City Yellow Taxi Trip Records, and a large-scale industrial time series from heavy machinery. All experiments use a controlled Transformer backbone with full self-attention to isolate the effect of positional information. Across datasets, the proposed methods consistently reduce mean squared error and mean absolute error relative to a strong Transformer baseline with sinusoidal positional encoding and state-of-the-art encodings for time series, with WinStatFlex and WinStatTPE emerging as the most effective variants. Ablation studies that randomly shuffle decoder inputs markedly degrade the proposed methods, supporting the conclusion that their gains arise from learned order-aware locality and semantic structure rather than incidental artifacts. 
A simple and reproducible heuristic for setting the sliding-window length—roughly one quarter to one third of the input sequence length—provides robust performance without the need for exhaustive tuning.
(This article belongs to the Section Learning)
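The base WinStat idea, augmenting each timestamp with statistics computed over a sliding window whose length follows the quarter-length heuristic, can be sketched in a few lines. This is a simplification of the paper's encoding; the causal-window choice and the particular statistics are assumptions:

```python
import statistics

def winstat_features(series, window=None):
    """Append sliding-window mean/std/min/max to each timestamp (sketch).

    Window defaults to the heuristic from the abstract: about one
    quarter of the input length. The window is causal, covering the
    `window` values ending at the current position (shorter at the start).
    """
    if window is None:
        window = max(2, len(series) // 4)
    feats = []
    for t, x in enumerate(series):
        w = series[max(0, t - window + 1): t + 1]
        std = statistics.pstdev(w) if len(w) > 1 else 0.0
        feats.append((x, sum(w) / len(w), std, min(w), max(w)))
    return feats

feats = winstat_features([1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0], window=4)
```

The resulting per-timestamp tuples play the role of an additive positional signal that carries local temporal semantics rather than pure geometric position.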

29 pages, 10446 KB  
Article
Safety Risk Analysis of a Construction Project on a Tropical Island
by Bo Huang, Junwu Wang, Jun Huang, Chunbao Yuan and Sijun Lv
Appl. Sci. 2026, 16(1), 271; https://doi.org/10.3390/app16010271 - 26 Dec 2025
Viewed by 174
Abstract
Construction projects on tropical islands face a high incidence of safety accidents due to complex environmental conditions, construction technologies, and varying levels of worker safety awareness. Traditional risk analysis frameworks, constrained by narrow analytical perspectives, struggle to account for the escalating uncertainties and safety perturbations inherent in tropical island construction processes. To address this gap, and to improve upon both the Health Safety and Environment Management System (HSE) and Bayesian Network (BN) methods, an IHIB model for construction safety risk analysis of tropical island buildings was established. The Improved Health Safety and Environment Management System (IHSE) method constructs an indicator system from six dimensions: institutional, health, organizational, safety, environmental, and emergency response factors. The Improved Bayesian Network (IBN) method, by introducing fuzzy set theory and an improved similarity aggregation method, more accurately infers the influencing factors and the most probable causal chains for construction safety on tropical islands. Taking the Sanya Haitang Bay construction project as a case study, the IHIB analysis model reveals that high temperatures and strong winds are the decisive factors influencing construction safety risks on tropical islands. The findings contribute to proactive risk prevention and mitigation, offering practical guidance for enhancing construction safety management on tropical islands.
(This article belongs to the Special Issue Risk Assessment for Hazards in Infrastructures)

23 pages, 4379 KB  
Article
Hybrid Parallel Temporal–Spatial CNN-LSTM (HPTS-CL) for Optimized Indoor Environment Modeling in Sports Halls
by Ping Wang, Xiaolong Chen, Hongfeng Zhang, Cora Un In Wong and Bin Long
Buildings 2026, 16(1), 113; https://doi.org/10.3390/buildings16010113 - 26 Dec 2025
Viewed by 270
Abstract
We propose a Hybrid Parallel Temporal–Spatial CNN-LSTM (HPTS-CL) architecture for optimized indoor environment modeling in sports halls, addressing the computational and scalability challenges of high-resolution spatiotemporal data processing. The sports hall is partitioned into distinct zones, each processed by dedicated CNN branches to extract localized spatial features, while hierarchical LSTMs capture both short-term zone-specific dynamics and long-term inter-zone dependencies. The system integrates model and data parallelism to distribute workloads across specialized hardware, dynamically balanced to minimize computational bottlenecks. A gated fusion mechanism combines spatial and temporal features adaptively, enabling robust predictions of environmental parameters such as temperature and humidity. The proposed method replaces monolithic CNN-LSTM pipelines with a distributed framework, significantly improving efficiency without sacrificing accuracy. Furthermore, the architecture interfaces seamlessly with existing sensor networks and control systems, prioritizing critical zones through a latency-aware scheduler. Implemented on NVIDIA Jetson AGX Orin edge devices and Google Cloud TPU v4 pods, HPTS-CL demonstrates superior performance in real-time scenarios, leveraging lightweight EfficientNetV2-S for CNNs and IndRNN cells for LSTMs to mitigate gradient vanishing. Experimental results validate the system’s ability to handle large-scale, high-frequency sensor data while maintaining low inference latency, making it a practical solution for intelligent indoor environment optimization. The novelty lies in the hybrid parallelism strategy and hierarchical temporal modeling, which collectively advance the state of the art in distributed spatiotemporal deep learning. Full article
(This article belongs to the Section Building Energy, Physics, Environment, and Systems)
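The gated fusion step described above can be sketched as a per-dimension convex blend of the two feature streams. This is an illustrative simplification, not the HPTS-CL implementation (which learns the gate logits from data):

```python
import math

def gated_fusion(spatial, temporal, gate_logits):
    """Adaptive fusion of spatial and temporal features (sketch).

    A per-dimension sigmoid gate decides how much of each stream to keep:
    fused = g * spatial + (1 - g) * temporal.
    """
    fused = []
    for s, t, z in zip(spatial, temporal, gate_logits):
        g = 1.0 / (1.0 + math.exp(-z))
        fused.append(g * s + (1.0 - g) * t)
    return fused

# gate logit 0 -> g = 0.5 -> plain average; a large logit favors spatial
out = gated_fusion([1.0, 4.0], [3.0, 0.0], [0.0, 10.0])
```

Because the gate is learned per dimension, zones where temperature dynamics dominate can weight the temporal stream while spatially structured variables such as humidity gradients lean on the CNN features.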

36 pages, 510 KB  
Article
Comparative Evaluation of GPT-4o, GPT-OSS-120B and Llama-3.1-8B-Instruct Language Models in a Reproducible CV-to-JSON Extraction Pipeline
by Marcin Nawalny, Mateusz Łępicki, Tomasz Latkowski, Sebastian Bujak, Michał Bukowski, Bartosz Świderski, Grzegorz Baranik, Bogusz Nowak, Robert Zakowicz, Łukasz Dobrakowski, Agnieszka Oczeretko, Piotr Sadowski, Konrad Szlaga, Bartłomiej Kubica and Jarosław Kurek
Appl. Sci. 2026, 16(1), 217; https://doi.org/10.3390/app16010217 - 24 Dec 2025
Viewed by 516
Abstract
Recruitment automation increasingly relies on Large Language Models (LLMs) for extracting structured information from unstructured CVs and job postings. However, production data often arrive as heterogeneous, privacy-sensitive PDFs, limiting reproducibility and compliance. This study introduces a deterministic, GDPR-aligned pipeline that converts recruitment documents into structured, anonymized Markdown and subsequently into validated JSON ready for downstream AI processing. The workflow combines the Docling PDF-to-Markdown converter with a two-pass anonymization protocol and evaluates three LLM back-ends—GPT-4o (Azure, frozen proprietary), GPT-OSS-120B and Llama-3.1-8B-Instruct—using identical prompts and schema constraints under near-zero-temperature decoding. Each model’s output was assessed across 2280 multilingual CVs using two complementary metrics: reference-based completeness and content similarity. The proprietary GPT-4o achieved perfect schema coverage and served as the reproducibility baseline, while the open-weight models reached 73–79% completeness and 59–72% content similarity depending on section complexity. Llama-3.1-8B-Instruct performed strongly on standardized sections such as contact and legal, whereas GPT-OSS-120B better-handled less frequent narrative fields. The results demonstrate that fully deterministic, auditable document extraction is achievable with both proprietary and open LLMs when guided by strong schema validation and anonymization. The proposed pipeline bridges the gap between document ingestion and reliable, bias-aware data preparation for AI-driven recruitment systems. Full article
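The reference-based completeness metric might be approximated as the fraction of required schema sections and fields present in the extracted JSON. The schema below is hypothetical, for illustration only; the paper's actual schema and scoring are not reproduced here:

```python
REQUIRED = {                       # hypothetical minimal CV schema
    "contact": ["name", "email"],
    "experience": [],
    "education": [],
}

def completeness(doc: dict) -> float:
    """Fraction of required sections/fields filled in extracted JSON."""
    total = hits = 0
    for section, fields in REQUIRED.items():
        total += 1
        sec = doc.get(section)
        if sec is None:                # missing section: its fields count as missing too
            total += len(fields)
            continue
        hits += 1
        for f in fields:
            total += 1
            if isinstance(sec, dict) and sec.get(f) not in (None, ""):
                hits += 1
    return hits / total

score = completeness({"contact": {"name": "A. B.", "email": ""}, "experience": []})
```

Scoring against a fixed schema like this is what makes near-zero-temperature runs comparable across model back-ends.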

25 pages, 5215 KB  
Article
Explainable Predictive Maintenance of Marine Engines Using a Hybrid BiLSTM-Attention-Kolmogorov Arnold Network
by Alexandros S. Kalafatelis, Georgios Levis, Anastasios Giannopoulos, Nikolaos Tsoulakos and Panagiotis Trakadas
J. Mar. Sci. Eng. 2026, 14(1), 32; https://doi.org/10.3390/jmse14010032 - 24 Dec 2025
Viewed by 312
Abstract
Predictive maintenance for marine engines requires forecasts that are both accurate and technically interpretable. This work introduces BEACON, a hybrid architecture that combines a bidirectional long short-term memory encoder with attention pooling, a Kolmogorov Arnold network and a lightweight multilayer perceptron for cylinder-level exhaust gas temperature forecasting, evaluated in both centralized and federated learning settings. On operational data from a bulk carrier, BEACON outperformed strong state-of-the-art baselines, achieving an RMSE of 0.5905, MAE of 0.4713 and R2 of approximately 0.95, while producing interpretable response curves and stable SHAP rankings across engine load regimes. A second contribution is the explicit evaluation of explanation stability in a federated learning setting, where BEACON maintained competitive accuracy and attained mean Spearman correlations above 0.8 between client-specific SHAP rankings, whereas baseline models exhibited substantially lower agreement. These results indicate that the proposed hybrid design provides an accurate and explanation-stable foundation for privacy-aware predictive maintenance of marine engines. Full article

30 pages, 5219 KB  
Article
Dynamic Multi-Output Stacked-Ensemble Model with Hyperparameter Optimization for Real-Time Forecasting of AHU Cooling-Coil Performance
by Md Mahmudul Hasan, Pasidu Dharmasena and Nabil Nassif
Energies 2026, 19(1), 82; https://doi.org/10.3390/en19010082 - 23 Dec 2025
Viewed by 323
Abstract
This study introduces a dynamic, multi-output stacking framework for real-time forecasting of HVAC cooling-coil behavior in air-handling units. The dynamic model encodes short-horizon system memory with input/target lags and rolling psychrometric features and enforces leakage-free, time-aware validation. Four base learners—Random Forest, Bagging (DT), XGBoost, and ANN—are each optimized with an Optuna hyperparameter tuner that systematically explores architecture and regularization to identify data-specific, near-optimal configurations. Their out-of-fold predictions are combined through a Ridge-based stacker, yielding state-of-the-art accuracy for supply-air temperature and chilled water leaving temperature (R2 up to 0.9995, NRMSE as low as 0.0105), consistently surpassing individual models. Novelty lies in the explicit dynamics encoding aligned with coil heat and mass-transfer behavior, physics-consistent feature prioritization, and a robust multi-target stacking design tailored for HVAC transients. The findings indicate that this hyperparameter-tuned dynamic framework can serve as a high-fidelity surrogate for cooling-coil performance, supporting set-point optimization, supervisory control, and future extensions to virtual sensing or fault-diagnostics workflows in industrial AHUs. Full article
(This article belongs to the Special Issue Performance Analysis of Building Energy Efficiency)
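The out-of-fold construction that feeds the Ridge stacker can be sketched with a trivial base learner (the mean of the training folds). This is a sketch of the general stacking mechanism, not the paper's pipeline; real base learners such as XGBoost slot into the same loop, and a Ridge meta-model then fits weights on the resulting OOF columns:

```python
def oof_mean_predictions(y, k=3):
    """Leakage-free out-of-fold predictions (sketch).

    Each point is predicted by a model trained only on the other folds,
    so the meta-model never sees a base prediction that was fit on the
    target it is trying to explain.
    """
    n = len(y)
    oof = [0.0] * n
    for f in range(k):
        test_idx = [i for i in range(n) if i % k == f]
        train = [y[i] for i in range(n) if i % k != f]
        mu = sum(train) / len(train)       # trivial base learner: fold mean
        for i in test_idx:
            oof[i] = mu
    return oof

preds = oof_mean_predictions([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], k=3)
```

In a time-series setting like the AHU data, the folds would additionally need to respect temporal order (e.g., forward chaining) to keep the validation leakage-free, as the abstract emphasizes.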

19 pages, 6483 KB  
Article
Mapping Forest Climate-Sensitivity Belts in a Mountainous Region of Namyangju, South Korea, Using Satellite-Derived Thermal and Vegetation Phenological Variability
by Joon Kim, Whijin Kim, Woo-Kyun Lee and Moonil Kim
Forests 2026, 17(1), 14; https://doi.org/10.3390/f17010014 - 22 Dec 2025
Viewed by 343
Abstract
Mountain forests play a key role in buffering local climate, yet their climate sensitivity is seldom mapped in a way that is directly usable for spatial planning. This study investigates how phenological thermal and vegetation variability are organized within the forested landscape of Namyangju, a mountainous region in central Korea, and derives spatial indicators of forest climate sensitivity. Using monthly, cloud-screened Landsat-8/9 land surface temperature (LST) and normalized difference vegetation index (NDVI) images over a recent multi-year period, we calculated phenological coefficients of variation for 34,123 forest grid cells and applied local clustering analysis to identify belts of high and low variability. Forest areas where LST and NDVI variability simultaneously occupied the upper tail of their distributions (top 5%/10%/20%) were interpreted as climate-sensitivity hotspots, whereas co-located coldspots were treated as microclimatic refugia. Across the mountainous terrain, sensitivity hotspots formed continuous belts along high-elevation ridges and steep, dissected slopes, while coldspots were concentrated in sheltered valley floors. Notably, the most sensitive belts were dominated by high-elevation conifer stands, despite the limited seasonal fluctuation typically expected in evergreen canopies. This pattern suggests that elevation strongly amplifies the coupling between thermal responsiveness and vegetation health, whereas valley-bottom forests act as stabilizers that maintain comparatively constant microclimatic and phenological conditions. We refer to these patterns as “forest climate-sensitivity belts,” which translate satellite observations into spatially explicit information on where climate-buffering functions are most vulnerable or resilient. 
Incorporating climate-sensitivity belts into forest plans and adaptation strategies can guide elevation-aware species selection in new afforestation, targeted restoration and fuel-load management in upland sensitivity zones, and the protection of valley refugia that support biodiversity, thermal buffering, and hydrological regulation. Because the framework relies on standard satellite products and transparent calculations, it can be updated as new imagery becomes available and transferred to other seasonal, mountainous regions, providing a practical basis for climate-resilient forest planning.
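The hotspot logic, flagging cells whose LST and NDVI coefficients of variation both sit in the upper tail, can be sketched as follows. The data are synthetic, and this omits the local clustering step the study applies on top of the thresholding:

```python
import statistics

def cv(xs):
    """Coefficient of variation of a cell's monthly series: std / mean."""
    return statistics.pstdev(xs) / (sum(xs) / len(xs))

def hotspots(lst_series, ndvi_series, top_frac=0.2):
    """Cells whose LST and NDVI variability both fall in the upper tail."""
    lst_cv = [cv(s) for s in lst_series]
    ndvi_cv = [cv(s) for s in ndvi_series]
    def tail(vals):
        k = max(1, int(len(vals) * top_frac))
        cut = sorted(vals, reverse=True)[k - 1]   # top-k threshold
        return {i for i, v in enumerate(vals) if v >= cut}
    return tail(lst_cv) & tail(ndvi_cv)           # co-located upper tails

# synthetic example: cell 0 is highly variable in both LST and NDVI
cells_lst = [[10.0, 30.0], [20.0, 21.0], [20.0, 22.0], [19.0, 21.0], [20.0, 20.0]]
cells_ndvi = [[0.2, 0.8], [0.5, 0.5], [0.5, 0.5], [0.5, 0.5], [0.5, 0.5]]
hot = hotspots(cells_lst, cells_ndvi, top_frac=0.2)
```

Coldspots (refugia) follow symmetrically by intersecting the lower tails of both distributions.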

18 pages, 8608 KB  
Article
Self-Referencing Digital Twin for Thermal and Task Management in Package Stacked ESP32-S3 Microcontrollers with Mixture-of-Experts and Neural Networks
by Yi Liu, Parth Sandeepbhai Shah, Tian Xia and Dryver Huston
Computers 2026, 15(1), 4; https://doi.org/10.3390/computers15010004 - 21 Dec 2025
Abstract
Thermal limitations restrict the performance of low-cost, vertically stacked embedded systems. This paper presents a self-referencing digital twin framework for thermal and task management in a multi-device ESP32-S3 stack. The system combines a Mixture-of-Experts (MoE) model for task allocation with a neural network for short-term temperature prediction. Acting as a lightweight digital replica of the physical stack, the digital twin continuously monitors device states, forecasts thermal behavior 30 s into the future, and adapts workload distribution accordingly. The MoE model evaluates each device individually and asynchronously, estimating the portion of workload it should receive based on current state features including SoC temperature, CPU frequency, stack position, and recent task history. A separate neural network predicts future temperatures using real-time data from local and neighboring devices, enabling proactive thermal-aware scheduling. Training data for both models are collected through controlled experiments involving fixed-frequency operation and structured frequency switching with idle phases. All predictions and control actions are driven by built-in sensor feedback from the ESP32-S3 microcontrollers. The resulting digital twin supports distributed, temperature-based task scheduling and works well in simple, low-cost edge systems with heat constraints. In one-hour experiments on a six-device ESP32-S3 stack, the proposed scheduling method completes up to 572 computation rounds at a 50 °C temperature limit, compared with 493 and 542 rounds under logistic-regression-based control and 534 rounds at fixed 240 MHz operation, while keeping peak temperature at 51 °C. Full article
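The forecast-then-allocate loop this abstract describes can be sketched in a few lines. The real system uses an MoE model and a neural-network temperature forecast; in this illustrative stand-in, a linear trend extrapolation replaces the neural network and a headroom-proportional split replaces the MoE. The 30 s horizon and 50 °C limit come from the abstract; everything else is an assumption.

```python
def forecast_temp(history, horizon_s=30, step_s=5):
    """Extrapolate the recent linear temperature trend `horizon_s` ahead.
    `history` is a list of temperature samples taken every `step_s` seconds."""
    if len(history) < 2:
        return history[-1]
    slope = (history[-1] - history[0]) / ((len(history) - 1) * step_s)
    return history[-1] + slope * horizon_s

def allocate(work_units, temp_histories, limit_c=50.0):
    """Split `work_units` across devices proportionally to their predicted
    thermal headroom; a device forecast to exceed the limit receives none."""
    headroom = [max(0.0, limit_c - forecast_temp(h)) for h in temp_histories]
    total = sum(headroom)
    if total == 0:
        return [0] * len(temp_histories)
    return [round(work_units * h / total) for h in headroom]

# Three stacked devices: stable, heating fast, and idle-cool
shares = allocate(100, [[40, 41, 42], [45, 47, 49], [38, 38, 38]])
print(shares)  # the rapidly heating middle device is forecast past 50 °C
```

The middle device's trend (0.4 °C/s) extrapolates to 61 °C within the horizon, so it gets no work, while the coolest device absorbs most of the load.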

25 pages, 21291 KB  
Article
Lithium-Ion Battery Open-Circuit Voltage Analysis for Extreme Temperature Applications
by Nick Nguyen and Balakumar Balasingam
Energies 2026, 19(1), 27; https://doi.org/10.3390/en19010027 - 20 Dec 2025
Abstract
Accurate estimation of the open-circuit voltage (OCV) as a function of state of charge (SOC) is critical for reliable battery-management system (BMS) design in lithium-ion battery applications. However, at low temperatures, polarization effects distort the measured OCV–SOC profile due to premature voltage cutoffs during low-rate testing. This paper presents an offsetting-based correction method that reconstructs the truncated portions of the OCV curve by extrapolating the charge/discharge data beyond the cutoff points using simple voltage offsets. The approach is applied entirely in post-processing, requiring no modification to standard test protocols. Experimental validation using Samsung EB575152 Li-ion cells across a wide temperature range (−25 °C to 50 °C) demonstrates that the method restores the full OCV span, reduces apparent capacity loss, and improves consistency across cells and temperatures. The proposed technique offers a practical and effective enhancement to standard OCV testing procedures for temperature-aware SOC modeling. Full article
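The offsetting idea in this abstract can be illustrated with a small sketch: shift the polarization-depressed discharge curve up by a constant voltage offset, then extend its last segment linearly until it meets the true cutoff voltage. The offset value, curve shape, and data here are synthetic illustrations, not the paper's cells or protocol.

```python
import numpy as np

def corrected_ocv(soc, v_measured, offset_v, cutoff_v):
    """Shift measured discharge voltages up by the polarization offset,
    then linearly extrapolate the last segment down to the cutoff."""
    v = v_measured + offset_v
    slope = (v[-1] - v[-2]) / (soc[-1] - soc[-2])    # dV/dSOC of last segment
    soc_end = soc[-1] + (cutoff_v - v[-1]) / slope   # SOC where cutoff is met
    extra_soc = np.linspace(soc[-1], soc_end, 5)[1:]
    extra_v = v[-1] + slope * (extra_soc - soc[-1])
    return np.concatenate([soc, extra_soc]), np.concatenate([v, extra_v])

# Synthetic low-temperature discharge, prematurely truncated at SOC 0.2
soc = np.linspace(1.0, 0.2, 9)
v_meas = 3.0 + 1.2 * soc   # polarization-depressed, truncated curve
soc_full, v_full = corrected_ocv(soc, v_meas, offset_v=0.1, cutoff_v=3.1)
print(soc_full[-1], v_full[-1])
```

The reconstructed tail restores the SOC span down to the true cutoff, which is the "apparent capacity loss" the correction removes; as in the paper, this is pure post-processing with no change to the test protocol.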
(This article belongs to the Section E: Electric Vehicles)

25 pages, 2770 KB  
Article
Analysis of the Travelling Time According to Weather Conditions Using Machine Learning Algorithms
by Gülçin Canbulut
Appl. Sci. 2026, 16(1), 6; https://doi.org/10.3390/app16010006 - 19 Dec 2025
Abstract
A large share of the global population now lives in urban areas, which creates growing challenges for city life. Local authorities are seeking ways to enhance urban livability, with transportation emerging as a major focus. Developing smart public transit systems is therefore a key priority. Accurately estimating travel times is essential for managing transport operations and supporting strategic decisions. Previous studies have used statistical, mathematical, or machine learning models to predict travel time, but most examined these approaches separately. This study introduces a hybrid framework that combines statistical regression models and machine learning algorithms to predict public bus travel times. The analysis is based on 1410 bus trips recorded between November 2021 and July 2022 in Kayseri, Turkey, including detailed meteorological and operational data. A distinctive aspect of this research is the inclusion of weather variables—temperature, humidity, precipitation, air pressure, and wind speed—which are often neglected in the literature. Additionally, sensitivity analyses are conducted by varying the k value in the k-nearest neighbors (KNN) algorithm and the threshold values for outlier detection to test model robustness. Among the tested models, CatBoost achieved the best performance with a mean squared error (MSE) of approximately 18.4, outperforming random forest (MSE = 25.3) and XGBoost (MSE = 23.9), and it consistently achieved the lowest MSE across different preprocessing and parameter settings. Overall, this study presents a comprehensive and environmentally aware approach to travel time prediction, contributing to the advancement of intelligent and adaptive urban transportation systems. Full article
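The k-sensitivity check mentioned in this abstract can be sketched with a hand-rolled KNN regressor and an MSE comparison over several k. This stand-in uses synthetic features in place of the study's weather and operational data, and plain KNN in place of CatBoost; it only illustrates the evaluation loop, not the paper's results.

```python
import numpy as np

def knn_predict(X_train, y_train, X_test, k):
    """Predict each test trip as the mean target of its k nearest trips."""
    preds = []
    for x in X_test:
        d = np.linalg.norm(X_train - x, axis=1)
        idx = np.argsort(d)[:k]
        preds.append(y_train[idx].mean())
    return np.array(preds)

def mse(y_true, y_pred):
    return float(np.mean((y_true - y_pred) ** 2))

rng = np.random.default_rng(1)
X = rng.uniform(0, 1, (200, 5))   # stand-ins for temperature, humidity, ...
y = 30 + 10 * X[:, 0] - 5 * X[:, 1] + rng.normal(0, 1, 200)  # travel time (min)
X_tr, y_tr, X_te, y_te = X[:150], y[:150], X[150:], y[150:]

for k in (1, 5, 15):
    print(k, round(mse(y_te, knn_predict(X_tr, y_tr, X_te, k)), 2))
```

Sweeping k this way exposes the bias–variance trade-off the sensitivity analysis probes: small k overfits noise, large k over-smooths.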
