Search Results (179)

Search Parameters:
Keywords = probabilistic graphs

14 pages, 1128 KB  
Article
Reconstruction of DNA Sequences Through Eulerian Traversal of De Bruijn Graphs
by Baining Zhu, Siqi Liu and Suwei Liu
Mathematics 2026, 14(5), 832; https://doi.org/10.3390/math14050832 - 28 Feb 2026
Abstract
Reconstructing a genome from collections of short DNA fragments is a fundamental problem in modern sequencing. Although genome assembly algorithms are widely used in practice, the mathematical conditions that allow exact reconstruction are not always clear. This study develops a graph-theoretic framework for genome reconstruction using De Bruijn graphs and Eulerian paths in an idealized, error-free setting. Each k-mer is represented as a directed edge connecting its (k−1)-length prefix and suffix. The resulting overlap graph is constructed using a balanced search tree and traversed with a stack-based Eulerian algorithm. Numerical experiments over a broad range of genome lengths and fragment lengths reveal a sharp transition in reconstruction accuracy. This transition is explained by a probabilistic model for prefix collisions in the directed graph. The theoretical predictions agree with simulation results and provide conditions on the fragment length required for reliable reconstruction. These results show that the difficulty of genome assembly is governed primarily by the combinatorial structure of the underlying graph rather than by algorithmic heuristics. Full article
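The De Bruijn construction and stack-based Eulerian traversal described in this abstract can be sketched in a few lines. This is an illustrative, error-free toy: the function name is mine, and a plain dictionary stands in for the paper's balanced search tree.

```python
from collections import defaultdict

def reconstruct(kmers):
    """Rebuild a sequence from its k-mers via an Eulerian path in a
    De Bruijn graph (idealized, error-free setting).

    Each k-mer contributes one directed edge from its (k-1)-length
    prefix to its (k-1)-length suffix; a stack-based Hierholzer
    traversal then spells out the sequence.
    """
    graph = defaultdict(list)
    out_deg, in_deg = defaultdict(int), defaultdict(int)
    for kmer in kmers:
        prefix, suffix = kmer[:-1], kmer[1:]
        graph[prefix].append(suffix)
        out_deg[prefix] += 1
        in_deg[suffix] += 1

    # An Eulerian path starts at a node with one surplus outgoing edge
    # (or anywhere, if the path is actually a cycle).
    start = next((n for n in list(graph) if out_deg[n] - in_deg[n] == 1),
                 next(iter(graph)))

    stack, path = [start], []
    while stack:
        node = stack[-1]
        if graph[node]:                 # an unused outgoing edge remains
            stack.append(graph[node].pop())
        else:                           # dead end: emit node
            path.append(stack.pop())
    path.reverse()

    # Consecutive (k-1)-mers overlap in all but their last character.
    return path[0] + "".join(node[-1] for node in path[1:])
```

For the 3-mers of a short read, e.g. `reconstruct(["ATG", "TGG", "GGC", "GCG", "CGT"])`, the traversal spells back `"ATGGCGT"`; prefix collisions in longer genomes are exactly what the paper's probabilistic model quantifies.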

19 pages, 3565 KB  
Article
Short-Term Demand Forecasting and Supply Assurance Evaluation for Natural Gas Pipeline Networks Based on Uncertainty Quantification and Deep Learning
by Jinghua Chen, Yuxuan He, Qi Xiang, Haiyang You, Weican Wang, Pengcheng Li, Zhiwei Zhao, Zhaoming Yang, Huai Su and Jinjun Zhang
Energies 2026, 19(4), 1101; https://doi.org/10.3390/en19041101 - 22 Feb 2026
Viewed by 221
Abstract
Natural gas pipeline networks are subject to supply instability due to random fluctuations. Current forecasting methodologies often suffer from limited accuracy, inadequate uncertainty quantification, and poor integration with dynamic network evaluation mechanisms. To address these challenges, this study presents an integrated framework that bridges short-term demand forecasting with supply assurance assessment. A deep learning model that combines a graph convolutional network and a bidirectional long short-term memory network is developed to produce accurate 72 h demand forecasts. Forecasting uncertainty is quantified using the cumulative distribution function. Based on the probabilistic forecasts, a supply assurance evaluation model is constructed that accounts for the dynamic regulation capability of line pack. The comprehensive indicator system incorporates key metrics such as user satisfaction and the line pack demand–storage ratio. A case study applying the proposed method to a real-world regional pipeline network was conducted. The results demonstrate that the proposed model outperforms conventional baselines, achieving a mean absolute percentage error of less than 1%. The uncertainty quantification captures the risk probability associated with demand fluctuations. The proposed evaluation method identifies vulnerable sections and assesses supply margins under various scenarios, thus providing effective decision support for operational scheduling and supply assurance. Full article
(This article belongs to the Topic Oil and Gas Pipeline Network for Industrial Applications)

48 pages, 3619 KB  
Article
Comparative Assessment of the Reliability of Non-Recoverable Subsystems of Mining Electronic Equipment Using Various Computational Methods
by Nikita V. Martyushev, Boris V. Malozyomov, Anton Y. Demin, Alexander V. Pogrebnoy, Georgy E. Kurdyumov, Viktor V. Kondratiev and Antonina I. Karlina
Mathematics 2026, 14(4), 723; https://doi.org/10.3390/math14040723 - 19 Feb 2026
Viewed by 183
Abstract
The assessment of reliability in non-repairable subsystems of mining electronic equipment represents a computationally challenging problem, particularly for complex and highly connected structures. This study presents a systematic comparative analysis of several deterministic approaches for reliability estimation, focusing on their computational efficiency, accuracy, and applicability. The investigated methods include classical boundary techniques (minimal paths and cuts), analytical decomposition based on the Bayes theorem, the logic–probabilistic method (LPM) employing triangle–star transformations, and the algorithmic Structure Convolution Method (SCM), which is based on matrix reduction of the system’s connectivity graph. The reliability problem is formally represented using graph theory, where each element is modeled as a binary variable with independent failures, which is a standard and practically justified assumption for power electronic subsystems operating without common-cause coupling. Numerical experiments were carried out on canonical benchmark topologies—bridge, tree, grid, and random connected graphs—representing different levels of structural complexity. The results demonstrate that the SCM achieves exact reliability values with up to six orders of magnitude acceleration compared to the LPM for systems containing more than 20 elements, while maintaining polynomial computational complexity. Qualitatively, the compared approaches differ in the nature of the output and practical applicability: boundary methods provide fast interval estimates suitable for preliminary screening, whereas decomposition may exhibit a systematic bias for highly connected (non-series–parallel) topologies. In contrast, the SCM consistently preserves exactness while remaining computationally tractable for medium and large sparse-to-moderately dense graphs, making it preferable for repeated recalculations in design and optimization workflows. The methods were implemented in Python 3.7 using NumPy and NetworkX, ensuring transparency and reproducibility. The findings confirm that the SCM is an efficient, scalable, and mathematically rigorous tool for reliability assessment and structural optimization of large-scale non-repairable systems. The presented methodology provides practical guidelines for selecting appropriate reliability evaluation techniques based on system complexity and computational resource constraints. Full article
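As a point of reference for the benchmark topologies mentioned in this abstract, the exact reliability of the classic five-element bridge network can be obtained by brute-force state enumeration. This baseline is not the paper's SCM; it is a minimal sketch under the same independent-failure assumption, with node and function names of my own choosing.

```python
from itertools import product

# Classic five-element bridge topology: source s, sink t, internal
# nodes a and b; the a-b element is the "bridge". Names are illustrative.
EDGES = [("s", "a"), ("s", "b"), ("a", "b"), ("a", "t"), ("b", "t")]

def connected(up):
    """True if s and t are joined using only the elements flagged in `up`."""
    seen, frontier = {"s"}, ["s"]
    while frontier:
        node = frontier.pop()
        for (u, v), ok in zip(EDGES, up):
            if ok:
                for x, y in ((u, v), (v, u)):
                    if x == node and y not in seen:
                        seen.add(y)
                        frontier.append(y)
    return "t" in seen

def bridge_reliability(p):
    """Exact s-t reliability: enumerate all 2**5 up/down element states,
    weighting each state by its probability under independent failures."""
    total = 0.0
    for state in product([True, False], repeat=len(EDGES)):
        weight = 1.0
        for ok in state:
            weight *= p if ok else 1.0 - p
        if connected(state):
            total += weight
    return total
```

The enumeration matches the known closed form 2p² + 2p³ − 5p⁴ + 2p⁵ for the bridge (e.g. `bridge_reliability(0.9)` ≈ 0.97848); exact methods such as the SCM aim to reproduce such values without the exponential state sweep.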

42 pages, 2537 KB  
Article
UPSET: A Comprehensive Probabilistic Single Event Transient Analysis Flow for VLSI Circuits Using Static Timing Analysis
by Christos Georgakidis, Dimitris Valiantzas, Nikolaos Chatzivangelis, Marko Andjelkovic, Christos Sotiriou and Milos Krstic
Electronics 2026, 15(4), 818; https://doi.org/10.3390/electronics15040818 - 13 Feb 2026
Viewed by 165
Abstract
The downscaling of VLSI technologies has exacerbated the susceptibility of integrated circuits (ICs) to radiation-induced Single-Event Transients (SETs). This work presents UPSET, a comprehensive and technology-independent EDA framework for probabilistic SET analysis using Static Timing Analysis (STA). Unlike traditional simulation-based methods that suffer from prohibitive runtimes, UPSET leverages graph-based propagation with advanced logical, electrical, and timing-window masking models to evaluate circuit sensitivity efficiently. Key contributions include a novel “Electrical Masking Window” (EMW) criterion that effectively filters non-full-rail pulses early in reconvergent logic and a TimeStamp-based propagation mode that accurately handles complex signal reconvergence with Boolean evaluation. Experimental results on featured benchmarks demonstrate a speedup of more than 25,000× compared with SPICE while maintaining a tight 4.56% error bound in pulse width estimation. Moreover, experimental validation on 50 benchmarks across varying complexities showcases that the EMW enhancement reduces the pessimism in circuit sensitivity estimates by up to 25% on average, providing tighter upper bounds while maintaining scalability to million-gate designs. By integrating seamlessly with standard industrial formats (LEF, DEF, LIB, or SPEF), UPSET enables scalable, accurate SET sensitivity assessment for modern digital designs, establishing a robust foundation for automated radiation hardening flows. Full article

29 pages, 2553 KB  
Article
Adaptive Path Planning for Autonomous Underwater Vehicle (AUV) Based on Spatio-Temporal Graph Neural Networks and Conditional Normalizing Flow Probabilistic Reconstruction
by Guoshuai Li, Jinghua Wang, Jichuan Dai, Tian Zhao, Danqiang Chen and Cui Chen
Algorithms 2026, 19(2), 147; https://doi.org/10.3390/a19020147 - 11 Feb 2026
Viewed by 197
Abstract
In underwater reconnaissance and patrol, an AUV has to sense and judge traversability in cluttered areas that include reefs, cliffs, and seabed infrastructure. A narrow sonar field of view, occlusion, and current-driven disturbances leave the vehicle with local, time-varying information, so decisions are made with incomplete and uncertain observations. A path-planning framework is built around two coupled components: spatiotemporal graph neural network prediction and conditional normalizing flow (CNF)-based probabilistic environment reconstruction. Forward-looking sonar and inertial navigation system (INS) measurements are fused online to form a local environment graph with temporal encoding. Cross-temporal message passing captures how occupancy and maneuver patterns evolve, which supports path prediction under dynamic reachability and collision-avoidance constraints. For regions that remain unobserved, CNF performs conditional generation from the available local observations, producing probabilistic completion and an explicit uncertainty output. Conformal calibration then maps model confidence to credible intervals with controlled miscoverage, giving a consistent probabilistic interface for risk budgeting. To keep pace with ocean currents and moving targets, edge weights and graph connectivity are updated online as new observations arrive. Compared with Informed Rapidly-exploring Random Tree star (Informed RRT*), D* Lite, Soft Actor-Critic (SAC), and Graph Neural Network-Probabilistic Roadmap (GNN-PRM), the proposed method achieves a near 100% success rate at 20% occlusion and maintains about an 80% success rate even under 70% occlusion. In dynamic obstacle scenarios, it yields about a 4% collision rate at low speeds and keeps the collision rate below 20% when obstacle speed increases to 3 m/s. Ablation studies further demonstrate that temporal modeling improves success rate by about 7.1%, CNF-based probabilistic completion boosts success rate by about 13.2% and reduces collisions by about 17%, while conformal calibration reduces coverage error by about 6.6%, confirming robust planning under heavy occlusion and time-varying uncertainty. Full article

21 pages, 734 KB  
Article
Hybrid Deep Learning Model for EI-MS Spectra Prediction
by Bartosz Majewski and Marta Łabuda
Int. J. Mol. Sci. 2026, 27(3), 1588; https://doi.org/10.3390/ijms27031588 - 5 Feb 2026
Viewed by 308
Abstract
Electron ionization (EI) mass spectrometry (MS) is a widely used technique for compound identification and spectrum generation. However, incomplete coverage of reference spectral libraries limits reliable analysis of newly characterized molecules. This study presents a hybrid deep learning model for predicting EI-MS spectra directly from molecular structure. The approach combines a graph neural network encoder with a residual neural network decoder, followed by refinement using cross-attention, bidirectional prediction, and probabilistic, chemistry-informed masks. Trained on the NIST14 EI-MS database (≤500 Da), the model achieves strong library matching performance (Recall@10 ≈ 80.8%) and high spectral similarity. The proposed hybrid GNN (Graph Neural Network)-ResNet (Residual Neural Network) model can generate high-quality synthetic EI-MS spectra to supplement existing libraries, potentially reducing the cost and effort of experimental spectrum acquisition. The obtained results demonstrate the potential of data-driven models to augment EI-MS libraries, while highlighting remaining challenges in generalization and spectral uniqueness. Full article

32 pages, 53691 KB  
Article
Underwater SLAM and Calibration with a 3D Profiling Sonar
by António Ferreira, José Almeida, Aníbal Matos and Eduardo Silva
Remote Sens. 2026, 18(3), 524; https://doi.org/10.3390/rs18030524 - 5 Feb 2026
Viewed by 275
Abstract
High-resolution underwater mapping is fundamental to the sustainable development of the blue economy, supporting offshore energy expansion, marine habitat protection, and the monitoring of both living and non-living resources. This work presents a pose-graph SLAM and calibration framework specifically designed for 3D profiling sonars, such as the Coda Octopus Echoscope 3D. The system integrates a probabilistic scan matching method (3DupIC) for direct registration of 3D sonar scans, enabling accurate trajectory and map estimation even under degraded dead reckoning conditions. Unlike other bathymetric SLAM methods that rely on submaps and assume short-term localization accuracy, the proposed approach performs direct scan-to-scan registration, removing this dependency. The factor graph is extended to represent the sonar extrinsic parameters, allowing the sonar-to-body transformation to be refined jointly with trajectory optimization. Experimental validation on a challenging real-world dataset demonstrates outstanding localization and mapping performance. The use of refined extrinsic parameters further improves both accuracy and map consistency, confirming the effectiveness of the proposed joint SLAM and calibration approach for robust and consistent underwater mapping. Full article
(This article belongs to the Special Issue Underwater Remote Sensing: Status, New Challenges and Opportunities)

18 pages, 1357 KB  
Article
Zero-Inflated Data Analysis Using Graph Neural Networks with Convolution
by Sunghae Jun
Computers 2026, 15(2), 104; https://doi.org/10.3390/computers15020104 - 2 Feb 2026
Viewed by 228
Abstract
Zero-inflated count data are characterized by an excessive frequency of zeros that cannot be adequately analyzed by a single distribution, such as Poisson or negative binomial. This problem is pervasive in many practical applications, including document–keyword matrices derived from text corpora, where most keyword frequencies are zero. Conventional statistical approaches, such as the zero-inflated Poisson (ZIP) and zero-inflated negative binomial (ZINB) models, explicitly separate a structural zero component from a count component, but they typically assume independent observations and can be unstable when covariates are high-dimensional and sparse. To address these limitations, this paper proposes a graph-based zero-inflated learning framework that combines simple graph convolution (SGC) with zero-inflated count regression heads such as ZIP and ZINB. We first construct an observation graph by connecting similar samples, and then apply SGC to propagate and smooth features over the graph, producing convolutional representations that incorporate neighborhood information while remaining computationally lightweight. The resulting representations are used as covariates in ZIP and ZINB heads, which preserve probabilistic interpretability through maximum likelihood learning. Our experiments on simulated zero-inflated datasets with controlled zero ratios demonstrate that the proposed ZIP+SGC and ZINB+SGC consistently reduce prediction errors compared with their non-graph baselines, as measured by mean absolute error and root mean squared error. Overall, the proposed approach provides an efficient and interpretable way to integrate graph neural computation with zero-inflated modeling for sparse count prediction problems. Full article
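The SGC propagation step described in this abstract reduces to repeated neighbourhood averaging of the covariate matrix. A minimal sketch, using row normalization as an illustrative choice (the paper may use a different normalization, and the function name is mine):

```python
import numpy as np

def sgc_smooth(adj, features, k=2):
    """K-step simple graph convolution (SGC): average each observation's
    features with those of its graph neighbours, self-loops included.

    adj      : (n, n) binary adjacency matrix of the observation graph
    features : (n, d) covariate matrix (e.g. sparse keyword counts)
    """
    a_hat = adj + np.eye(adj.shape[0])            # add self-loops
    p = a_hat / a_hat.sum(axis=1, keepdims=True)  # row-normalized propagation
    out = features.astype(float)
    for _ in range(k):                            # K propagation steps
        out = p @ out
    return out
```

The smoothed output would then serve as the covariate matrix of a ZIP or ZINB regression head fitted by maximum likelihood.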

15 pages, 1832 KB  
Article
Learning Structural Relations for Robust Chest X-Ray Landmark Detection
by Su-Bin Choi, Gyu-Sung Ham and Kanghan Oh
Electronics 2026, 15(3), 589; https://doi.org/10.3390/electronics15030589 - 29 Jan 2026
Viewed by 335
Abstract
Accurate anatomical landmark localization is essential to automate chest X-ray analysis and improve diagnostic reliability. While global context recognition is essential in medical imaging, the inherently high-resolution nature of these images has long made this task particularly difficult. While U-Net-based heatmap regression methods show strong performance, they still lack explicit modeling of the global spatial relationships among landmarks. To address this limitation, we propose an integrated structural learning framework that captures anatomical correlations across landmarks. The model generates probabilistic heatmaps with U-Net and derives continuous coordinates via soft-argmax. Subsequently, these coordinates, along with their corresponding local feature vectors, are fed into a Graph Neural Network (GNN) to refine the final positions by learning inter-landmark dependencies. Anatomical priors, such as bilateral symmetry and vertical hierarchy, are incorporated into the loss function to enhance spatial consistency. The experimental results show that our method consistently outperforms state-of-the-art models across all metrics, achieving significant improvements in MRE and SDR at 3-, 6-, and 9-pixel thresholds. This high precision demonstrates the framework’s strong potential to enhance the accuracy and robustness of clinical diagnostic systems. Full article
(This article belongs to the Section Artificial Intelligence)
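The soft-argmax step that converts probabilistic heatmaps into continuous coordinates can be sketched generically; this is a standard formulation rather than the paper's exact code, and `beta` and the function name are my choices.

```python
import numpy as np

def soft_argmax(heatmap, beta=10.0):
    """Differentiable landmark extraction from a 2D heatmap.

    A softmax over all pixels converts the heatmap into a probability
    distribution; the expected (x, y) position under that distribution
    is a continuous, sub-pixel coordinate. `beta` sharpens the softmax.
    """
    h, w = heatmap.shape
    z = np.exp(beta * (heatmap - heatmap.max()))  # numerically stable softmax
    prob = z / z.sum()
    ys, xs = np.mgrid[0:h, 0:w]                   # pixel coordinate grids
    return float((prob * xs).sum()), float((prob * ys).sum())
```

Unlike a hard argmax, this estimate is differentiable, so the downstream GNN refinement can be trained end-to-end through it.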

20 pages, 7566 KB  
Article
Temporal Probability-Guided Graph Topology Learning for Robust 3D Human Mesh Reconstruction
by Hongsheng Wang, Jie Yang, Feng Lin and Fei Wu
Mathematics 2026, 14(2), 367; https://doi.org/10.3390/math14020367 - 21 Jan 2026
Viewed by 222
Abstract
Reconstructing 3D human motion from monocular video presents challenges when frames contain occlusions or blur, as conventional approaches depend on features extracted within limited temporal windows, resulting in structural distortions. In this paper, we introduce a novel framework that combines temporal probability guidance with graph topology learning to achieve robust 3D human mesh reconstruction from incomplete observations. Our method leverages topology-aware probability distributions spanning entire motion sequences to recover missing anatomical regions. The Graph Topological Modeling (GTM) component captures structural relationships among body parts by learning the inherent connectivity patterns in human anatomy. Building upon GTM, our Temporal-alignable Probability Distribution (TPDist) mechanism predicts missing features through probabilistic inference, establishing temporal coherence across frames. Additionally, we propose a Hierarchical Human Loss (HHLoss) that hierarchically regularizes probability distribution errors for inter-frame features while accounting for topological variations. Experimental validation demonstrates that our approach outperforms state-of-the-art methods on the 3DPW benchmark, particularly excelling in scenarios involving occlusions and motion blur. Full article

19 pages, 1944 KB  
Article
Research on Adaptive Cooperative Positioning Algorithm for Underwater Robots Based on Dolphin Group Cooperative Mechanism
by Shiwei Fan, Jiachong Chang, Zicheng Wang, Mingfeng Ding, Hongchao Sun and Yubo Zhao
Biomimetics 2026, 11(1), 82; https://doi.org/10.3390/biomimetics11010082 - 20 Jan 2026
Viewed by 284
Abstract
Inspired by the remarkable collaborative echolocation mechanisms of dolphin pods, this paper addresses the challenge of achieving high-precision cooperative positioning for clusters of unmanned underwater vehicles (UUVs) in complex marine environments. Cooperative positioning systems for UUVs typically rely on acoustic ranging information to correct positional errors. However, the propagation characteristics of underwater acoustic signals are susceptible to environmental disturbances, often resulting in non-Gaussian, heavy-tailed distributions of ranging noise. Additionally, the strong nonlinearity of the system and the limited observability of measurement information further constrain positioning accuracy. To tackle these issues, this paper innovatively proposes a Factor Graph-based Adaptive Cooperative Positioning Algorithm (FGAWSP) suitable for heavy-tailed noise environments. The method begins by constructing a factor graph model for UUV cooperative positioning to intuitively represent the probabilistic dependencies between system states and observed variables. Subsequently, a novel factor graph estimation mechanism integrating adaptive weights with the product algorithm is designed. By conducting online assessment of residual information, this mechanism dynamically adjusts the fusion weights of different measurements, thereby achieving robust handling of anomalous range values. Experimental results demonstrate that the proposed method reduces positioning errors by 22.31% compared to the traditional algorithm, validating the effectiveness of our approach. Full article
(This article belongs to the Special Issue Bioinspired Robot Sensing and Navigation)

24 pages, 13052 KB  
Article
FGO-PMB: A Factor Graph Optimized Poisson Multi-Bernoulli Filter for Accurate Online 3D Multi-Object Tracking
by Jingyi Jin, Jindong Zhang, Yiming Wang and Yitong Liu
Sensors 2026, 26(2), 591; https://doi.org/10.3390/s26020591 - 15 Jan 2026
Viewed by 279
Abstract
Three-dimensional multi-object tracking (3D MOT) plays a vital role in enabling reliable perception for LiDAR-based autonomous systems. However, LiDAR measurements often exhibit sparsity, occlusion, and sensor noise that lead to uncertainty and instability in downstream tracking. To address these challenges, we propose FGO-PMB, a unified probabilistic framework that integrates the Poisson Multi-Bernoulli (PMB) filter from Random Finite Set (RFS) theory with Factor Graph Optimization (FGO) for robust LiDAR-based object tracking. In the proposed framework, object states, existence probabilities, and association weights are jointly formulated as optimizable variables within a factor graph. Four factors, including state transition, observation, existence, and association consistency, are formulated to uniformly encode the spatio-temporal constraints among these variables. By unifying the uncertainty modeling capability of RFS with the global optimization strength of FGO, the proposed framework achieves temporally consistent and uncertainty-aware estimation across continuous LiDAR scans. Experiments on KITTI and nuScenes indicate that the proposed method achieves competitive 3D MOT accuracy while maintaining real-time performance. Full article
(This article belongs to the Special Issue Recent Advances in LiDAR Sensing Technology for Autonomous Vehicles)

21 pages, 6454 KB  
Article
Probabilistic Photovoltaic Power Forecasting with Reliable Uncertainty Quantification via Multi-Scale Temporal–Spatial Attention and Conformalized Quantile Regression
by Guanghu Wang, Yan Zhou, Yan Yan, Zhihan Zhou, Zikang Yang, Litao Dai and Junpeng Huang
Sustainability 2026, 18(2), 739; https://doi.org/10.3390/su18020739 - 11 Jan 2026
Cited by 1 | Viewed by 362
Abstract
Accurate probabilistic forecasting of photovoltaic (PV) power generation is crucial for grid scheduling and renewable energy integration. However, existing approaches often produce prediction intervals with limited calibration accuracy, and the interdependence among meteorological variables is frequently overlooked. This study proposes a probabilistic forecasting framework based on a Multi-scale Temporal–Spatial Attention Quantile Regression Network (MTSA-QRN) and an adaptive calibration mechanism to enhance uncertainty quantification and ensure statistically reliable prediction intervals. The framework employs a dual-pathway architecture: a temporal pathway combining Temporal Convolutional Networks (TCN) and multi-head self-attention to capture hierarchical temporal dependencies, and a spatial pathway based on Graph Attention Networks (GAT) to model nonlinear meteorological correlations. A learnable gated fusion mechanism adaptively integrates temporal–spatial representations, and weather-adaptive modules enhance robustness under diverse atmospheric conditions. Multi-quantile prediction intervals are calibrated using conformalized quantile regression to ensure reliable uncertainty coverage. Experiments on a real-world PV dataset (15 min resolution) demonstrate that the proposed method offers more accurate and sharper uncertainty estimates than competitive benchmarks, supporting risk-aware operational decision-making in power systems. Quantitative evaluation on a real-world 40 MW photovoltaic plant demonstrates that the proposed MTSA-QRN achieves a CRPS of 0.0400 before calibration, representing an improvement of over 55% compared with representative deep learning baselines such as Quantile-GRU, Quantile-LSTM, and Quantile-Transformer. After adaptive calibration, the proposed method attains a reliable empirical coverage close to the nominal level (PICP90 = 0.9053), indicating effective uncertainty calibration. Although the calibrated prediction intervals become wider, the model maintains a competitive CRPS value (0.0453), striking a favorable balance between reliability and probabilistic accuracy. These results demonstrate the effectiveness of the proposed framework for reliable probabilistic photovoltaic power forecasting. Full article
(This article belongs to the Topic Sustainable Energy Systems)
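The conformalized quantile regression mentioned in this abstract follows a standard split-conformal recipe: compute conformity scores on a held-out calibration set and widen (or shrink) future intervals by their empirical quantile. A minimal sketch, with a function name and API of my own choosing rather than the paper's:

```python
import math

def cqr_calibrate(lo, hi, y, alpha=0.1):
    """Split-conformal margin for quantile-regression intervals (CQR).

    lo, hi : lower/upper quantile predictions on a calibration set
    y      : observed targets on the same set
    Returns the margin q such that future intervals [lo - q, hi + q]
    cover with probability at least 1 - alpha (finite-sample guarantee).
    """
    # Conformity score: signed distance of y outside its interval.
    scores = sorted(max(l - t, t - h) for l, h, t in zip(lo, hi, y))
    n = len(scores)
    rank = math.ceil((n + 1) * (1.0 - alpha))  # finite-sample quantile index
    return scores[min(rank, n) - 1]
```

Note that the margin can be negative when the raw quantile predictions are already conservative, in which case calibration actually tightens the intervals.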

22 pages, 416 KB  
Review
A Roadmap of Mathematical Optimization for Visual SLAM in Dynamic Environments
by Hui Zhang, Xuerong Zhao, Ruixue Luo, Ziyu Wang, Gang Wang and Kang An
Mathematics 2026, 14(2), 264; https://doi.org/10.3390/math14020264 - 9 Jan 2026
Viewed by 540
Abstract
The widespread application of robots in complex and dynamic environments demands that Visual SLAM be both robust and accurate. However, dynamic objects, varying illumination, and environmental complexity fundamentally challenge the static world assumptions underlying traditional SLAM methods. This review provides a comprehensive investigation into the mathematical foundations of V-SLAM and systematically analyzes the key optimization techniques developed for dynamic environments, with particular emphasis on advances since 2020. We begin by rigorously deriving the probabilistic formulation of V-SLAM and its basis in nonlinear optimization, unifying it under a Maximum a Posteriori (MAP) estimation framework. We then propose a taxonomy based on how dynamic elements are handled mathematically, which reflects the historical evolution from robust estimation to semantic modeling and then to deep learning. This framework provides detailed analysis of three main categories: (1) robust estimation theory-based methods for outlier rejection, elaborating on the mathematical models of M-estimators and switch variables; (2) semantic information and factor graph-based methods for explicit dynamic object modeling, deriving the joint optimization formulation for multi-object tracking and SLAM; and (3) deep learning-based end-to-end optimization methods, discussing their mathematical foundations and interpretability challenges. This paper delves into the mathematical principles, performance boundaries, and theoretical controversies underlying these approaches, concluding with a summary of future research directions informed by the latest developments in the field. The review aims to provide both a solid mathematical foundation for understanding current dynamic V-SLAM techniques and inspiration for future algorithmic innovations. By adopting a math-first perspective and organizing the field through its core optimization paradigms, this work offers a clarifying framework for both understanding and advancing dynamic V-SLAM. Full article
(This article belongs to the Section E2: Control Theory and Mechanics)
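The first category in the abstract's taxonomy, M-estimator-based outlier rejection, can be illustrated with a small standalone sketch. The example below is hypothetical and not taken from the article: it fits a line by iteratively reweighted least squares (IRLS) with the classical Huber weight, which bounds the influence of gross outliers in much the same way robust back-ends down-weight residuals from dynamic points. The data, the threshold `delta`, and all function names are illustrative assumptions.

```python
import numpy as np

def huber_weights(residuals, delta=1.0):
    """IRLS weights for the Huber M-estimator: weight 1 for small
    residuals (quadratic cost), delta/|r| beyond the threshold
    (linear cost), which caps each observation's influence."""
    r = np.abs(residuals)
    w = np.ones_like(r)
    mask = r > delta
    w[mask] = delta / r[mask]
    return w

def robust_line_fit(x, y, delta=1.0, iters=20):
    """Fit y ~ a*x + b by iteratively reweighted least squares."""
    A = np.column_stack([x, np.ones_like(x)])
    params = np.linalg.lstsq(A, y, rcond=None)[0]  # ordinary LS start
    for _ in range(iters):
        r = y - A @ params
        w = huber_weights(r, delta)
        Aw = A * w[:, None]
        # Weighted normal equations: (A^T W A) p = A^T W y
        params = np.linalg.solve(A.T @ Aw, Aw.T @ y)
    return params

rng = np.random.default_rng(0)
x = np.linspace(0.0, 10.0, 100)
y = 2.0 * x + 1.0 + 0.1 * rng.normal(size=x.size)
y[::10] += 20.0  # inject gross outliers, analogous to dynamic points
a, b = robust_line_fit(x, y)
```

With 10% of the points corrupted, ordinary least squares would bias the intercept by roughly +2, while the Huber-weighted fit recovers parameters close to the true (2, 1); the same influence-bounding mechanism underlies robust bundle adjustment in V-SLAM.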
28 pages, 1123 KB  
Article
Trust as a Stochastic Phase on Hierarchical Networks: Social Learning, Degenerate Diffusion, and Noise-Induced Bistability
by Dimitri Volchenkov, Nuwanthika Karunathilaka, Vichithra Amunugama Walawwe and Fahad Mostafa
Dynamics 2026, 6(1), 4; https://doi.org/10.3390/dynamics6010004 - 7 Jan 2026
Viewed by 467
Abstract
Empirical debates about a “crisis of trust” highlight long-lived pockets of high trust and deep distrust in institutions, as well as abrupt, shock-induced shifts between the two. We propose a probabilistic model in which such phenomena emerge endogenously from social learning on hierarchical networks. Starting from a discrete model on a directed acyclic graph, where each agent makes a binary adoption decision about a single assertion, we derive an effective influence kernel that maps individual priors to stationary adoption probabilities. A continuum limit along hierarchical depth yields a degenerate, non-conservative logistic–diffusion equation for the adoption probability u(x,t), in which diffusion is modulated by (1 − u) and increases the integral of u rather than preserving it. To account for micro-level uncertainty, we perturb these dynamics by multiplicative Stratonovich noise with amplitude proportional to u(1 − u), strongest in internally polarised layers and vanishing at consensus. At the level of a single depth layer, Stratonovich–Itô conversion and Fokker–Planck analysis show that the noise induces an effective double-well potential with two robust stochastic phases, u ≈ 0 and u ≈ 1, corresponding to persistent distrust and trust. Coupled along depth, this local bistability and degenerate diffusion generate extended domains of trust and distrust separated by fronts, as well as rare, Kramers-type transitions between them. We also formulate the associated stochastic partial differential equation in Martin–Siggia–Rose–Janssen–De Dominicis form, providing a field-theoretic basis for future large-deviation and data-informed analyses of trust landscapes in hierarchical societies. Full article
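The single-layer dynamics described in the abstract can be sketched numerically. The toy below is not the authors' model (it omits the diffusion coupling along depth and assumes illustrative values for the drift rate `r` and noise amplitude `sigma`): it integrates du = r·u(1 − u) dt + σ·u(1 − u) ∘ dW with the Euler–Heun predictor–corrector scheme, which converges to the Stratonovich solution, and shows that the noise amplitude u(1 − u) vanishes at the consensus states u = 0 and u = 1.

```python
import numpy as np

def simulate_trust(u0=0.2, r=1.0, sigma=0.8, dt=0.01, steps=2000, seed=1):
    """Euler-Heun (Stratonovich) integration of
        du = r*u*(1-u) dt + sigma*u*(1-u) o dW.
    Because the noise amplitude u*(1-u) vanishes at u = 0 and u = 1,
    both consensus states are sticky: strong noise can hold a layer
    near persistent distrust (u ~ 0) or persistent trust (u ~ 1)."""
    rng = np.random.default_rng(seed)
    drift = lambda v: r * v * (1.0 - v)          # logistic drift
    diffusion = lambda v: sigma * v * (1.0 - v)  # multiplicative noise
    u = np.empty(steps + 1)
    u[0] = u0
    for k in range(steps):
        dW = rng.normal(0.0, np.sqrt(dt))
        # Heun averaging of the diffusion term yields the
        # Stratonovich (not Ito) interpretation of the SDE
        pred = u[k] + drift(u[k]) * dt + diffusion(u[k]) * dW
        u[k + 1] = (u[k] + drift(u[k]) * dt
                    + 0.5 * (diffusion(u[k]) + diffusion(pred)) * dW)
        u[k + 1] = min(max(u[k + 1], 0.0), 1.0)  # keep state in [0, 1]
    return u

# Without noise, the logistic drift carries any u0 > 0 toward u = 1
det = simulate_trust(sigma=0.0)
```

Comparing the deterministic run (monotone convergence to trust) with noisy runs at increasing `sigma` is a quick way to see the noise-induced metastability the paper analyzes via the Fokker–Planck equation; capturing the Kramers-type transitions quantitatively would require the full conversion to Itô form and the depth coupling that the sketch leaves out.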