Search Results (7,550)

Search Parameters:
Keywords = stochastic models

25 pages, 3351 KB  
Article
A Physics-Constrained Residual Learning Framework for Robust Freeway Traffic Prediction
by Haotao Lv, Xiwen Lou, Jingu Mou, Markos Papageorgiou, Zhengfeng Huang and Pengjun Zheng
Sustainability 2026, 18(7), 3228; https://doi.org/10.3390/su18073228 (registering DOI) - 25 Mar 2026
Abstract
Improvements in freeway traffic state prediction accuracy and enhanced stability enable more proactive traffic control and demand management strategies, thereby reducing congestion spillover effects, unnecessary acceleration–deceleration cycles, and the resulting fuel consumption and emissions. Yet, accurate prediction remains challenging due to the interplay between deterministic traffic flow mechanisms and stochastic disturbances. Purely data-driven models suffer from error accumulation under out-of-distribution conditions, while physics-based models lack flexibility in capturing nonlinear deviations. This paper proposes MDURP, a physics-constrained residual learning framework that reformulates prediction as a residual-space learning problem. A calibrated Cell Transmission Model generates a physically admissible baseline; deep learning models are then restricted to learning the residuals. Wavelet decomposition and GARCH volatility modeling address the multi-scale and heteroskedastic characteristics of these residuals. Experimental results demonstrate that MDURP consistently outperforms baseline models, reducing MAE by an average of 6.8% and RMSE by an average of 4%. The framework also suppresses long-term error accumulation, with MAPE escalation slowing from 0.79% to 0.58% per step. These gains confirm that anchoring deep learning within a physics-defined residual space enhances both accuracy and stability. Full article
(This article belongs to the Section Sustainable Transportation)
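The abstract's residual pipeline pairs wavelet decomposition with GARCH volatility modeling of the residuals. As a minimal sketch of the GARCH(1,1) component only (the parameter values below are illustrative assumptions, not the paper's estimates), the conditional variance of a residual series follows a simple recursion:

```python
def garch11_variance(residuals, omega=0.1, alpha=0.1, beta=0.8):
    """GARCH(1,1) conditional-variance recursion:
    sigma2_t = omega + alpha * e_{t-1}^2 + beta * sigma2_{t-1}.
    omega, alpha, beta are illustrative values, not fitted estimates."""
    # initialize at the unconditional variance omega / (1 - alpha - beta)
    sigma2 = [omega / (1.0 - alpha - beta)]
    for e in residuals[:-1]:
        sigma2.append(omega + alpha * e * e + beta * sigma2[-1])
    return sigma2
```

With an all-zero residual sequence the recursion decays geometrically from the unconditional variance toward omega / (1 - beta), which is a quick sanity check on any implementation.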

19 pages, 2030 KB  
Article
Understanding Regional and Stylistic Diversity in Chinese Rural Paper-Cutting Through Convolutional Neural Network-Based Image Classification
by Xiaochu Wu, Xiaoyue Yin, Xiaofeng Chen, Xudong You, Fang Zhang and Yi Xiao
Appl. Sci. 2026, 16(7), 3174; https://doi.org/10.3390/app16073174 - 25 Mar 2026
Abstract
As an important component of Chinese folk art, rural paper-cutting embodies rich regional cultural connotations and distinctive aesthetic expressions. In this study, a Chinese rural paper-cutting image dataset covering multiple regions and artistic styles was constructed, and a convolutional neural network (CNN)-based framework was proposed for regional and stylistic identification of paper-cutting works. Five representative mainstream CNN models were evaluated for both tasks. For regional classification, all models achieved high accuracy, with EfficientNet-B1 attaining the highest accuracy of 91.46%. The style classification task was more challenging due to subtle visual differences, with MobileNetV3-Small achieving the highest accuracy of 73.20%. In addition, t-distributed stochastic neighbor embedding (t-SNE) visualizations further confirmed that the models were able to effectively distinguish different regional and stylistic categories in high-dimensional space. To enhance model interpretability, Gradient-weighted Class Activation Mapping (Grad-CAM) was applied to visualize the optimal models. The results show that the CNNs consistently focus on core structural features of paper-cutting works, suggesting that CNNs can capture visually and culturally meaningful features. Overall, this study demonstrates the feasibility of applying CNNs to the analysis of traditional folk art and provides a practical technical pathway for digital management, intelligent classification, and educational dissemination of rural paper-cutting art. Full article
(This article belongs to the Section Computing and Artificial Intelligence)
53 pages, 51169 KB  
Article
Detection and Comparative Evaluation of Noise Perturbations in Simulated Dynamical Systems and ECG Signals Using Complexity-Based Features
by Kevin Mallinger, Sebastian Raubitzek, Sebastian Schrittwieser and Edgar Weippl
Mach. Learn. Knowl. Extr. 2026, 8(4), 85; https://doi.org/10.3390/make8040085 - 25 Mar 2026
Abstract
Noise contamination is a common challenge in the analysis of time series data, where stochastic perturbations can obscure deterministic dynamics and complicate the interpretation of signals from chaotic and physiological systems. Reliable identification of noise regimes and their intensity is therefore essential for robust analysis of dynamical and biomedical signals, where incorrect attribution of stochastic perturbations can lead to misleading interpretations of system behavior. For this reason, the present study examines the role of complexity-based descriptors for identifying stochastic perturbations in time series and analyzes how these metrics respond to different noise regimes across heterogeneous dynamical systems. A supervised learning approach based on complexity descriptors was developed to analyze controlled perturbations in multiple signal types. Gaussian, pink, and low-frequency noise disturbances were injected at predefined intensity levels into the Rössler and Lorenz chaotic systems, the Hénon map, and synthetic electrocardiogram signals, while AR(1) processes were used for validation on inherently stochastic signals. From these systems, eighteen entropy-based, fractal, statistical, and singular value decomposition-based complexity metrics were extracted from either raw signals or reconstructed phase spaces. These features were used to perform three classification tasks that capture different aspects of noise characterization, including detecting the presence of noise, identifying the perturbation type, and discriminating between different noise intensities. In addition to predictive modeling, the study evaluates the complexity profiles and feature relevance of the metrics under varying perturbation regimes. The results show that no single complexity metric consistently discriminates noise regimes across all systems. Instead, system-specific relevance patterns emerge. 
Under given experimental constraints (data partitioning, machine learning algorithm, etc.), Approximate Entropy provides the strongest discrimination for the Lorenz system and the Hénon map, the Coefficient of Variation, Sample and Permutation Entropy dominate classification for ECG signals, and the Condition Number and Variance of first derivative together with Fisher Information are most informative for the Rössler system. Across all datasets, the proposed framework achieves an average accuracy of 99% for noise presence detection, 98.4% for noise type classification, and 98.5% for noise intensity classification. These findings demonstrate that complexity metrics capture structural and statistical signatures of stochastic perturbations across a diverse set of dynamic systems. Full article
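Several of the descriptors named above are straightforward to compute. A minimal sketch of normalized permutation entropy, one of the eighteen listed metrics, using ordinal patterns of an assumed embedding dimension m (tau is the embedding delay):

```python
import math

def permutation_entropy(x, m=3, tau=1):
    """Normalized permutation entropy: Shannon entropy of the ordinal
    patterns of length m in series x, divided by log(m!)."""
    n = len(x) - (m - 1) * tau
    counts = {}
    for i in range(n):
        # rank order of the m values starting at i defines the pattern
        pattern = tuple(sorted(range(m), key=lambda k: x[i + k * tau]))
        counts[pattern] = counts.get(pattern, 0) + 1
    probs = [c / n for c in counts.values()]
    h = -sum(p * math.log(p) for p in probs)
    return h / math.log(math.factorial(m))
```

A strictly monotone series yields a single ordinal pattern and entropy 0, while white noise pushes the normalized value toward 1, which is why the metric separates noise regimes.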
14 pages, 6712 KB  
Article
An Adaptive Sticky Hidden Markov Model for Robust State Inference in Non-Stationary Physiological Time Series
by Qizheng Wang, Yuping Wang, Shuai Zhao, Yuhan Wu and Shengjie Li
Mathematics 2026, 14(7), 1107; https://doi.org/10.3390/math14071107 - 25 Mar 2026
Abstract
The accurate inference of hidden states from non-stationary physiological signals remains a significant challenge in stochastic process modeling. This paper proposes an Adaptive Sticky Hidden Markov Model (Sticky-HMM) framework designed to enhance the robustness of state decoding in noisy environments. To address the “state-flickering” issue inherent in traditional HMMs, we incorporate a “Sticky” parameter into the transition matrix, imposing a temporal penalty on spurious state switching to maintain continuity. Furthermore, we introduce a Dynamic Prior Strategy that adaptively calibrates self-transition probabilities by mapping frequency-domain features of the observed sequence to the model’s parameter space. The proposed decoding process employs a two-pass refinement strategy and the Viterbi algorithm in the logarithmic domain to ensure numerical stability. The model’s efficacy was validated using a high-fidelity dataset of simulated apnea events. This work provides a computationally efficient and mathematically rigorous approach that demonstrates strong potential for long-term respiratory health monitoring. Full article
(This article belongs to the Special Issue Machine Learning and Graph Neural Networks)
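The "sticky" idea, adding a self-transition bonus before row-normalizing the transition matrix and then decoding with log-domain Viterbi, can be sketched in a few lines. This toy two-state version uses an assumed bonus κ and uniform transitions; it does not reproduce the paper's Dynamic Prior calibration or two-pass refinement:

```python
import math

def sticky_viterbi(log_emis, log_trans, kappa=0.0):
    """Log-domain Viterbi where `kappa` is added to self-transition
    log-probabilities (rows renormalized), penalizing spurious switches."""
    n = len(log_trans)
    adj = []
    for i in range(n):
        row = [log_trans[i][j] + (kappa if i == j else 0.0) for j in range(n)]
        z = math.log(sum(math.exp(v) for v in row))  # renormalize the row
        adj.append([v - z for v in row])
    delta = list(log_emis[0])
    back = []
    for t in range(1, len(log_emis)):
        ptrs, nxt = [], []
        for j in range(n):
            best = max(range(n), key=lambda i: delta[i] + adj[i][j])
            ptrs.append(best)
            nxt.append(delta[best] + adj[best][j] + log_emis[t][j])
        back.append(ptrs)
        delta = nxt
    path = [max(range(n), key=lambda j: delta[j])]
    for ptrs in reversed(back):
        path.append(ptrs[path[-1]])
    return path[::-1]
```

With kappa = 0 a single frame of weakly contradictory evidence flips the decoded state for one step (the "state-flickering" issue); a positive kappa makes the switch cost exceed the one-frame emission gain and the path stays put.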

27 pages, 3151 KB  
Article
Techno-Economic Evaluation for Renewable Deployment in Southern Chile: Expanding the Green Hydrogen Frontier
by Teresa Guarda, Silvio F. Durán Velásquez, Alejandro E. Córdova Arellano, Germán Herrera-Vidal, Oscar E. Coronado-Hernández, Gustavo Gatica, Modesto Pérez-Sánchez and Jairo R. Coronado-Hernández
Appl. Sci. 2026, 16(7), 3165; https://doi.org/10.3390/app16073165 - 25 Mar 2026
Abstract
Chile stands out for its renewable energy resources and its commitment to developing green hydrogen. However, achieving cost parity with gray hydrogen remains an obstacle, mainly due to high capital costs and sensitivity to scale. This study assesses the technical and economic feasibility of green hydrogen production, using five different plants located in the Magallanes region in the south of the country as a reference. The model integrates a detailed framework of wind generation, PEM electrolysis, compression, and high-pressure storage subsystems, as well as a stochastic economic layer that combines the CAPEX, NPV, and LCOH assessments using Monte Carlo simulations. It also incorporates real-world capacity distributions and probabilistic fluctuations in systems. A sensitivity analysis confirms production scale as the main factor affecting profitability, with a break-even threshold of 0.5 MW. The results show that the LCOH decreases from 7.1 USD to 3.4 USD/kgH2 as capacity increases. The analysis reveals that only 23.88% of small-scale configurations yield positive NPV, underscoring the need for scaling to achieve economic viability. Full article
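The stochastic economic layer described above can be illustrated with a toy Monte Carlo NPV screen: draw CAPEX from a distribution, discount a fixed annual cash flow, and report the share of draws with positive NPV. All figures and the normal CAPEX distribution here are invented for illustration, not the paper's Magallanes inputs:

```python
import random

def monte_carlo_npv(capex_mu, capex_sd, annual_cash,
                    years=20, rate=0.08, n=10_000, seed=1):
    """Share of Monte Carlo draws with positive NPV when CAPEX is
    normally distributed and annual cash flow is fixed (toy model)."""
    rng = random.Random(seed)
    # present value of 1 unit per year over the project lifetime
    annuity = sum(1.0 / (1 + rate) ** t for t in range(1, years + 1))
    positive = 0
    for _ in range(n):
        capex = rng.gauss(capex_mu, capex_sd)
        if annual_cash * annuity - capex > 0:
            positive += 1
    return positive / n
```

A full LCOH assessment would add stochastic electricity yield and hydrogen output, but the "share of configurations with positive NPV" statistic quoted in the abstract is exactly this kind of frequency over draws.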

21 pages, 38078 KB  
Article
Development and Evaluation of a Deep Learning Model for Ovarian Cancer Histotype Classification Using Whole-Slide Imaging
by Dagoberto Pulido and Nathalia Arias-Mendoza
J. Imaging 2026, 12(4), 144; https://doi.org/10.3390/jimaging12040144 - 25 Mar 2026
Abstract
The histopathological classification of ovarian carcinoma is fundamental for patient management. While microscopic evaluation by pathologists is the current diagnostic standard, it is known to be subject to interobserver variability, which can affect consistency in treatment decisions. This study addresses this clinical need by developing and validating a deep learning-based diagnostic support tool designed to enhance the objectivity and reproducibility of this classification. In this work, we address a key challenge in computational pathology—the tendency of attention mechanisms to overfit by concentrating on limited features—by systematically evaluating a direct regularization method within multiple instance learning (MIL) models. The models were trained and validated using 10-fold cross-validation on a public training set of 538 whole-slide images and further tested on an independent public dataset for the more challenging task of molecular subtype classification. We utilized features from a foundational model pre-trained on histopathology data to represent tissue morphology. Our findings demonstrate that directly regularizing the attention mechanism with a stochastic approach provides a statistically significant improvement in accuracy and generalization, highlighting its power as a robust technique to mitigate overfitting for this clinical task. In direct contrast to the reported variability in manual assessment, our final model achieved high consistency and accuracy, with a balanced accuracy of 0.854 and a Cohen’s Kappa of 0.791. The model also demonstrated strong generalization on the molecular classification task. Its attention mechanism provides visual heatmaps for pathologist review, fostering interpretability and trust. We have developed a highly accurate and generalizable artificial intelligence tool that directly addresses the challenge of interobserver variability in ovarian cancer classification. 
Its performance highlights the potential for artificial intelligence to serve as a decision support system, standardizing histopathological assessment. Full article

19 pages, 1849 KB  
Article
Stochastic Robust Trading Strategy for Multiple Virtual Power Plants Led by a Public Energy Storage Station
by Yanjun Dong, Tuo Li, Juan Su, Bo Zhao and Songhuai Du
Batteries 2026, 12(4), 112; https://doi.org/10.3390/batteries12040112 - 25 Mar 2026
Abstract
With the rapid development of smart cities, coordinating diverse distributed energy resources through storage-centric shared management has become a critical challenge. This paper proposes a bi-level energy management framework to support peer-to-peer energy trading among multiple virtual power plants (VPPs) under multidimensional uncertainties. The interaction is modeled as a Stackelberg–Nash equilibrium framework, in which a public energy storage operator and a natural gas company act as leaders to maximize social welfare and design differentiated trading strategies for VPPs. The VPPs act as followers and participate in cooperative energy trading based on a generalized Nash equilibrium scheme, sharing surplus energy and allocating cooperative benefits according to their contributions. To address uncertainty, Conditional Value at Risk (CVaR) is adopted to quantify the expected loss of the upper-level decision makers. The lower-level VPP problem is formulated as a three-stage stochastic robust optimization model considering renewable generation uncertainty. To solve the resulting nonlinear bi-level problem, a two-stage solution approach combining particle swarm optimization and KKT-based reformulation is developed to transform it into a tractable mixed-integer linear programming model. Numerical case studies verify the effectiveness of the proposed framework. Full article
(This article belongs to the Topic Smart Energy Systems, 2nd Edition)
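CVaR, used here to quantify the leaders' expected tail loss, has a simple empirical estimator: average the losses in the worst (1 − α) fraction of sampled scenarios. A minimal sketch (the scenario data would come from the stochastic model; none of this is the paper's code):

```python
def cvar(losses, alpha=0.95):
    """Empirical Conditional Value at Risk: mean of the worst
    (1 - alpha) share of sampled losses."""
    s = sorted(losses)
    k = int(len(s) * alpha)
    tail = s[k:] if k < len(s) else [s[-1]]
    return sum(tail) / len(tail)
```

For 100 equally likely scenario losses 0..99 at α = 0.95, the tail is the five worst losses and the CVaR is their mean, 97.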

33 pages, 17549 KB  
Article
HP1β and H3K9me3 Regulate Olfactory Receptor Choice and Transcriptional Identity
by Martín Escamilla-del-Arenal, Rachel Duffié, Hani Shayya, Valentina Loconte, Axel Ekman, Lena Street, Kevin Monahan, Carolyn Larabell, Marko Jovanovic and Stavros Lomvardas
Int. J. Mol. Sci. 2026, 27(7), 2958; https://doi.org/10.3390/ijms27072958 - 24 Mar 2026
Abstract
Diverse epigenetic regulatory mechanisms ensure and modulate cellular diversity. The histone 3 lysine 9 me3 (H3K9me3) post-translational modification participates in silencing lineage-inappropriate genes by restricting access of transcription factors and other regulatory proteins to genes that control cell fate. Mouse olfactory sensory neurons (OSNs) select one olfactory receptor (OR) gene out of 2600 possibilities. This monoallelic and stochastic OR choice occurs as OSNs differentiate and undergo dramatic changes in nuclear architecture. OR genes from different chromosomes converge into specialized nuclear bodies and chromatin compartments, as H3K9me3 and chromatin binding proteins including heterochromatin protein 1 (HP1) are incorporated. In this work, we have uncovered an unexpected role for HP1β in OR choice and neuronal identity that cannot be rescued by HP1α in vivo. Using a conditional knock-in mouse model that, after CRE expression, replaces HP1β with HP1α, we observe changes in H3K9me3 levels and DNA accessibility over OR gene clusters. These changes alter the expression patterns that partition the mouse olfactory epithelium into five OR expression zones, resulting in a reduced OR repertoire and a loss of olfactory sensory neuron diversity. We propose that HP1β modulates the competition of OR promoters for enhancers to promote receptor diversity by establishing repression gradients in a zonal fashion. Full article
(This article belongs to the Special Issue Molecular and Cellular Mechanisms Underlying Taste and Smell)

22 pages, 3510 KB  
Article
Optimal Investment Strategy for Off-Grid Offshore Wind Hydrogen Production: Hybrid and Standalone PEM Electrolyzer Configuration Comparison
by Hanyi Lin, Qing Tong, Sheng Zhou and Cuiping Liao
Clean Technol. 2026, 8(2), 45; https://doi.org/10.3390/cleantechnol8020045 - 24 Mar 2026
Abstract
Developing far-offshore wind power integrated with hydrogen production represents a critical pathway for China’s energy decarbonization. However, the investment prospects of off-grid offshore wind-to-hydrogen projects remain highly uncertain due to volatile technology costs and hydrogen prices, complicating the evaluation of project value and optimal timing. To address the oversimplified treatment of electrolyzer operation and the limited consideration of alkaline electrolyzers in the existing studies, this paper proposes an integrated assessment framework that combines time-series operational simulation with real options analysis. A detailed dynamic model of an alkaline (ALK)–proton exchange membrane (PEM) hybrid configuration is developed to simulate the coordinated hydrogen production under fluctuating wind power. Technical learning effects and stochastic hydrogen price processes are incorporated, and the least-squares Monte Carlo method is applied to determine the optimal investment strategies. A case study of a planned far-offshore wind farm in Guangdong indicates that, compared with a standalone PEM configuration, the hybrid configuration reduces the levelized hydrogen cost by about 15%, increases the investment value by up to 17 times under slow technological progress, and brings forward the optimal investment year by five years, from 2039 to 2034. Sensitivity analysis shows that expected hydrogen prices and discount rates dominate the investment outcomes. Full article

33 pages, 3399 KB  
Article
Micro-Scale Agent-Based Modeling of Hurricane Evacuation Under Compound Wind–Surge Hazards: A Case Study of Westbrook, Connecticut
by Omar Bustami, Francesco Rouhana, Alok Sharma, Wei Zhang and Amvrossios Bagtzoglou
Sustainability 2026, 18(7), 3182; https://doi.org/10.3390/su18073182 - 24 Mar 2026
Abstract
Hurricanes create compound hazards such as storm surge, flooding, and wind-driven debris that can degrade roadway capacity, fragment network connectivity, and hinder evacuation and shelter operations. From a sustainability perspective, improving evacuation planning is essential for reducing disaster-related losses, protecting vulnerable populations, and strengthening the resilience of coastal communities facing intensifying climate-driven hazards. This paper develops a micro-scale, agent-based evacuation modeling framework to assess evacuation performance under baseline and compound-hazard conditions, with emphasis on municipal decision support. The framework is demonstrated for Westbrook, Connecticut, at the census block-group scale in AnyLogic by integrating household locations, vehicle availability, road-network connectivity, and shelter capacities from publicly available datasets. Evacuation propensity and destination choice are parameterized using survey data, enabling empirically grounded decisions for in-town versus out-of-town evacuation among household-vehicle agents. Compound disruptions are represented through flood-related road closures derived from SLOSH storm-surge outputs and stochastic wind-related disruptions that dynamically constrain accessibility during the simulation. Scenarios are evaluated for Saffir–Simpson Category 1–2 and Category 3–4 hurricanes under baseline and compound conditions. Model outputs quantify normalized evacuation time, congestion and critical intersections, shelter demand and unmet capacity, evacuation failure, and spatial heterogeneity across block groups. Results indicate that compound flooding substantially increases evacuation times and failure rates, with the largest performance degradation concentrated in higher-vulnerability areas. Optimization experiments further compare the effectiveness of behavioral shifts, shelter-capacity expansion, and earlier departure timing in reducing delays and unmet shelter demand. 
Overall, the proposed framework provides transparent, reproducible, and scalable analytics that town engineers and emergency planners can use to evaluate evacuation readiness under compound hurricane impacts. Full article
(This article belongs to the Special Issue Sustainable Disaster Management and Community Resilience)

20 pages, 2661 KB  
Article
Forecasting Carbon Dioxide Emissions in Greece Under Decarbonization: Evidence from an ARIMA Time Series Model
by Tranoulidis Apostolos
World 2026, 7(4), 52; https://doi.org/10.3390/world7040052 - 24 Mar 2026
Abstract
Environmental protection and the reduction of carbon dioxide (CO2) emissions are central priorities within European climate policy. This study analyses and forecasts annual CO2 emissions in Greece using a univariate time-series framework. Annual data from 1960 to 2024, sourced from Our World in Data, enable the analysis to capture both the historical expansion of emissions and the recent decarbonization phase of the Greek energy system. Using the Box–Jenkins methodology, multiple ARIMA specifications were evaluated based on information criteria and diagnostic tests. To examine the stationarity properties of the series, the Augmented Dickey–Fuller (ADF) unit root test is applied. The findings indicate that the ARIMA (1,1,1) model most accurately represents the stochastic dynamics of the emissions series. The estimated autoregressive and moving-average coefficients, 0.9404 and −0.7165, respectively, are statistically significant at the 1% level. Residual diagnostics confirm the absence of serial correlation, approximate normality, and no significant heteroskedasticity. Forecast evaluation for the 2020–2024 holdout period demonstrates satisfactory predictive performance, with a mean absolute percentage error (MAPE) of approximately 6%. Dynamic forecasts for 2025 to 2030 indicate a gradual decline in national CO2 emissions, reaching an estimated 45.5 million tonnes by 2030. Overall, the study demonstrates that parsimonious ARIMA models offer a transparent and empirically reliable benchmark for national emissions forecasting. These models provide a reproducible tool for monitoring climate policy outcomes and for supporting evidence-based environmental decision-making. This study contributes to the environmental forecasting literature by providing an updated, diagnostically rigorous univariate benchmark model for Greece’s CO2 emissions that encompasses both the pre- and post-decarbonization phases of the national energy transition. Full article
(This article belongs to the Section Climate Transitions and Ecological Solutions)

19 pages, 1015 KB  
Article
Smart Energy Management in Agricultural Wireless Sensor Nodes Using TinyML-Based Adaptive Sampling
by Adrian Hinostroza, Jimmy Tarrillo and Moises Nuñez
Sensors 2026, 26(7), 2014; https://doi.org/10.3390/s26072014 - 24 Mar 2026
Abstract
Smart sensors are increasingly used in agriculture to monitor environmental conditions and support data-driven decision-making. However, traditional sensor implementations face critical challenges related to power consumption, especially in remote farms—such as pitaya plantations—where access to electricity and ongoing maintenance is limited. This paper presents a smart energy management system for agricultural sensor nodes integrating a machine learning model for adaptive sampling and a batching strategy to optimize energy usage. A lightweight Stochastic Gradient Descent (SGD) regressor trained on temperature dynamics runs on-device to predict the sampling interval (Ts). In parallel, the node adjusts the number of buffered samples as the battery state of charge (SOC) decreases, reducing Long Range (LoRa) transmissions. Field experiments show that the proposed approach reduces energy consumption by 77.8% compared with fixed-interval sampling, while maintaining good temperature fidelity with Mean Absolute Error (MAE) of 0.537 °C for temperature reconstruction. Full article
(This article belongs to the Special Issue Sensing and Machine Learning in Autonomous Agriculture)
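The two energy levers described, SOC-dependent batching of LoRa uplinks and an on-device SGD regressor predicting the sampling interval Ts, can be sketched as follows. The SOC thresholds, batch sizes, and learning rate are assumptions for illustration, not the deployed values:

```python
def batch_size(soc):
    """Buffer more samples per LoRa transmission as battery state of
    charge (SOC, in [0, 1]) falls. Thresholds are illustrative."""
    for limit, size in ((0.75, 1), (0.50, 2), (0.25, 4)):
        if soc > limit:
            return size
    return 8

def sgd_step(w, b, x, y, lr=0.05):
    """One stochastic-gradient update of a linear model Ts ≈ w*x + b,
    where x could be a recent temperature slope (hypothetical feature)."""
    err = (w * x + b) - y
    return w - lr * err * x, b - lr * err
```

Running `sgd_step` online against observed (slope, interval) pairs lets the node fit the regressor in place with no training framework, which is the appeal of TinyML-style deployments.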

24 pages, 399 KB  
Article
Branching Random Walks with Ageing
by Daniela Bertacchi, Elena Montanaro and Fabio Zucca
Mathematics 2026, 14(6), 1088; https://doi.org/10.3390/math14061088 - 23 Mar 2026
Abstract
Branching processes are stochastic models describing the evolution of populations in which individuals reproduce and die independently over time. In the classical setting, an individual’s reproductive capacity is fixed throughout its lifetime. However, in real-world situations, fertility typically rises during a juvenile phase, peaks at maturity, and subsequently declines. In order to capture this feature, we introduce a branching random walk with ageing, as an extension of the classical branching random walk, by assigning each individual an age-dependent reproductive rate. Our model differs from classical age-dependent processes such as the Bellman–Harris model, where the remaining lifespan depends on age, while the rate of reproduction is fixed within that lifetime. As in the classical case, branching random walks with ageing are parametrised by λ>0, which tunes the reproductive speed and may be seen as a characteristic of the population. The thresholds of λ separating extinction and survival are the global and local critical parameters. We characterise the value of the local critical parameter and provide a lower bound for the global critical parameter. We identify a class of ageing branching random walks for which this lower bound coincides with the global critical parameter. We study how local modifications to the reproduction and ageing rates may change the critical parameters. This is of practical interest: in species preservation, one may want to lower the critical parameters, so that λ exceeds them, and there is a positive probability of survival. On the other hand, in epidemic control, the goal is to increase the critical parameters, since if λ is below them, then the epidemic is eventually going to disappear. 
We compute the expected number of individuals alive in a branching process with ageing and show that, contrary to the behaviour of classical branching processes, it may exhibit initial growth even when the population is ultimately destined for extinction. Full article
(This article belongs to the Section D1: Probability and Statistics)
46 pages, 7683 KB  
Article
Node Symmetry Analysis as an Early Indicator of Locational Marginal Price Growth in Network-Constrained Power Systems with High Renewable Penetration
by Inga Zicmane, Sergejs Kovalenko, Aleksandrs Sahnovskis, Roman Petrichenko and Gatis Junghans
Symmetry 2026, 18(3), 547; https://doi.org/10.3390/sym18030547 - 23 Mar 2026
Abstract
The reconstruction of nodal prices and generation patterns in electricity markets with network constraints constitutes a challenging inverse analysis problem due to congestion-induced non-uniqueness and limited observability. This study introduces node symmetry analysis as a novel early indicator of locational marginal price (LMP) growth in power systems with high renewable energy penetration. Symmetric nodes, defined as nodes with identical generation cost structures and comparable network topology, exhibit near-identical price signals under uncongested conditions. In this study, the term “price” refers to the LMP obtained from the DC-OPF market-clearing model under scenarios with high renewable energy penetration. Deviations from this symmetry, quantified through price differences between symmetric node pairs (ΔLMP), serve as sensitive indicators of emerging network stress and congestion, providing early warning of peak-price events. Using DC power flow sensitivities and congestion indicators, LMPs are reconstructed in a simplified five-node test system under three scenarios: baseline operation, severe transmission congestion, and high renewable generation variability. Results show strong correlations between symmetry violations and system-wide price increases. In congested scenarios, ΔLMP exceeding €2/MWh consistently precedes peak prices by 1–2 h, demonstrating the metric’s predictive capability. Integration of storage further highlights the operational value of symmetry-based analysis, showing reductions in curtailed renewable generation and peak prices. The proposed framework offers a computationally efficient and interpretable tool for congestion diagnosis, price trend forecasting, and inverse market analysis, with potential scalability to larger AC networks and stochastic scenarios. 
These findings provide actionable insights for system operators, market participants, and regulators seeking to enhance flexibility, reliability, and economic efficiency in high-renewable electricity markets. Full article
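The early-warning logic described in the abstract can be sketched as a simple threshold check on the price gap between a symmetric node pair. The hourly LMP series below are invented for illustration; only the 2 EUR/MWh threshold and the 1–2 h lead time come from the abstract.

```python
# Early-warning check based on symmetry violation between two nominally
# symmetric nodes. The hourly LMP series (EUR/MWh) are hypothetical;
# the 2 EUR/MWh threshold follows the abstract.

DELTA_THRESHOLD = 2.0   # EUR/MWh symmetry-violation threshold

def warning_hours(lmp_a, lmp_b, threshold=DELTA_THRESHOLD):
    """Hours at which the pairwise price gap |LMP_A - LMP_B| exceeds the threshold."""
    return [t for t, (a, b) in enumerate(zip(lmp_a, lmp_b))
            if abs(a - b) > threshold]

# Hypothetical congestion episode: prices diverge at hours 5-6,
# the system-wide peak arrives at hour 7.
lmp_a = [40, 41, 40, 42, 43, 47, 55, 90, 60, 45]
lmp_b = [40, 41, 40, 42, 43, 44, 46, 88, 58, 45]

flags = warning_hours(lmp_a, lmp_b)
peak_hour = max(range(len(lmp_a)), key=lambda t: max(lmp_a[t], lmp_b[t]))
lead = peak_hour - flags[0] if flags else None
print(f"symmetry violations at hours {flags}; peak at hour {peak_hour}; "
      f"first warning leads the peak by {lead} h")
```

In this toy episode the first symmetry violation appears two hours before the price peak, the kind of lead time the abstract reports for congested scenarios.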

32 pages, 31110 KB  
Article
Explicit Features Versus Implicit Spatial Relations in Geomorphometry: A Comparative Analysis for DEM Error Correction in Complex Geomorphological Regions
by Shuyu Zhou, Mingli Xie, Nengpan Ju, Changyun Feng, Qinghua Lin and Zihao Shu
Sensors 2026, 26(6), 1995; https://doi.org/10.3390/s26061995 - 23 Mar 2026
Abstract
Global Digital Elevation Models (DEMs) exhibit systematic biases constrained by acquisition geometry and surface penetration. This study aims to evaluate whether the increasing complexity of geometric deep learning (e.g., Graph Neural Networks, GNNs) is justified by performance gains over established feature engineering paradigms (e.g., XGBoost) under the constraints of sparse altimetry supervision. We established a rigorous comparative framework across four mainstream products—ALOS World 3D, Copernicus DEM, SRTM GL1, and TanDEM-X—using Sichuan Province, China, as a representative natural laboratory. Our results reveal a fundamental scale mismatch (where the ~485 m average spacing of sampled altimetry footprints dwarfs the local terrain resolution): despite their topological complexity, Hybrid GNN models fail to establish a statistically significant accuracy advantage over the systematically optimized XGBoost baseline, demonstrating RMSE parity. Mechanistically, we uncover a critical divergence in decision logic: XGBoost relies on a stable “Physics Skeleton” consistently dominated by deterministic features (terrain aspect and vegetation density), whereas GNNs exhibit severe “Attribution Stochasticity” (ρ ≈ 0.63–0.77). The GNN component acts as a residual-dependent latent feature learner rather than discovering universal topological laws. We conclude that for geospatial regression tasks relying on sparse supervision, “Physics Trumps Geometry.” A “Feature-First” paradigm that prioritizes robust, domain-knowledge-based physical descriptors outweighs the indeterminate complexity of “Black Box” architectures. This study underscores the imperative of prioritizing explanatory stability over marginal accuracy gains to foster trusted Geo-AI. Full article
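The "Attribution Stochasticity" statistic mentioned in the abstract is a Spearman rank correlation between feature-importance vectors from different training runs. A minimal sketch of that computation, using two hypothetical importance vectors (the values and feature set are invented, not the study's data):

```python
# Quantifying attribution stability across training runs via Spearman's rho,
# the statistic the abstract uses to report "Attribution Stochasticity".
# A stable model yields rho near 1; an unstable one drops toward the
# 0.63-0.77 range the abstract reports for GNNs.

import numpy as np

def spearman_rho(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks
    (valid as written only for tie-free inputs)."""
    rx = np.argsort(np.argsort(x)).astype(float)
    ry = np.argsort(np.argsort(y)).astype(float)
    return float(np.corrcoef(rx, ry)[0, 1])

# Hypothetical feature importances for two runs of the same model
# (e.g., aspect, vegetation, slope, curvature, roughness, relief):
run_1 = np.array([0.35, 0.30, 0.15, 0.10, 0.06, 0.04])
run_2 = np.array([0.30, 0.34, 0.05, 0.12, 0.03, 0.08])

print(f"rho = {spearman_rho(run_1, run_2):.2f}")
```

Here the two runs broadly agree on the top two features but reorder the tail, giving ρ ≈ 0.66, inside the instability band the abstract attributes to the GNN component. For tied importances one would switch to average ranks (e.g., `scipy.stats.spearmanr`).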
(This article belongs to the Section Remote Sensors)
