Search Results (319)

Search Parameters:
Keywords = uncertainty graph

20 pages, 1672 KB  
Article
Robust Stochastic Power Allocation for Industrial IoT Federated Learning with Neurosymbolic AI
by Pratik Goswami, Adeel Iqbal and Kwonhue Choi
Mathematics 2026, 14(3), 547; https://doi.org/10.3390/math14030547 - 3 Feb 2026
Abstract
In this work, a robust optimization approach for energy-aware federated learning (FL) in industrial IoT networks is proposed that addresses uncertainties in harvested energy, device failures, and dynamic topologies. The proposed neurosymbolic reasoning approach combines graph neural networks (GNNs) for topology-aware power prediction with symbolic rules to solve the stochastic power allocation problem, providing both optimality guarantees and explainable safety-critical decisions. The hierarchical Master-Coordination-Task Agent (MA-CoA-TA) architecture prioritizes critical industrial nodes while ensuring FL convergence under energy constraints. This work establishes approximation guarantees relative to the robust optimum through theoretical analysis and validates the framework through rigorous simulations against existing methods. Experimental results demonstrate that the proposed framework provides an optimal balance for robust FL deployment in large-scale IIoT networks with real-world uncertainties, achieving 5.7% FL accuracy with 151 J of remaining battery under the most challenging conditions (100 rounds, 200 devices), while baselines fail completely (0% accuracy, battery depletion). Ablation confirms component synergy: symbolic reasoning delivers 2.2 times the accuracy of GNN-only, while GNN+harvesting preserves 30 times more battery than symbolic-only. Full article

32 pages, 2526 KB  
Article
HSE-GNN-CP: Spatiotemporal Teleconnection Modeling and Conformalized Uncertainty Quantification for Global Crop Yield Forecasting
by Salman Mahmood, Raza Hasan and Shakeel Ahmad
Information 2026, 17(2), 141; https://doi.org/10.3390/info17020141 - 1 Feb 2026
Abstract
Global food security faces escalating threats from climate variability and resource constraints. Accurate crop yield forecasting is essential; however, existing methods frequently overlook complex spatial dependencies driven by climate teleconnections, such as ENSO, and lack rigorous uncertainty quantification. This paper presents HSE-GNN-CP, a novel framework integrating heterogeneous stacked ensembles, graph neural networks (GNNs), and conformal prediction (CP). Domain-specific features, including growing degree days and climate suitability scores, are engineered, and spatial patterns are explicitly modeled via rainfall correlation graphs. The ensemble combines random forest and gradient boosting learners with bootstrap aggregation, while GNNs encode inter-regional climate dependencies. Conformalized quantile regression ensures statistically valid prediction intervals. Evaluated on a global dataset spanning 15 countries and six major crops from 1990 to 2023, the framework achieves an R2 of 0.9594 and an RMSE of 4882 hg/ha. Crucially, it delivers calibrated 80% prediction intervals with 80.72% empirical coverage, significantly outperforming uncalibrated baselines at 40.03%. SHAP analysis identifies crop type and rainfall as dominant predictors, while the integrated drought classifier achieves perfect accuracy. These contributions advance agricultural AI by merging robust ensemble learning with explicit teleconnection modeling and trustworthy uncertainty quantification. Full article
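
The calibrated-coverage claim above rests on conformal prediction. As a minimal sketch of the idea, using plain split-conformal intervals on synthetic data rather than the paper's conformalized quantile regression, a quantile of the calibration residuals widens point predictions into intervals with a guaranteed marginal coverage level:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data (hypothetical stand-in for yield data): y = 2x + noise
x = rng.uniform(0, 10, 600)
y = 2 * x + rng.normal(0, 1, 600)

# Split into fit / calibration / test sets
x_fit, y_fit = x[:300], y[:300]
x_cal, y_cal = x[300:500], y[300:500]
x_tst, y_tst = x[500:], y[500:]

# Simple point model: least-squares line
a, b = np.polyfit(x_fit, y_fit, 1)
pred = lambda z: a * z + b

# Split conformal: finite-sample-corrected quantile of calibration residuals
alpha = 0.2                      # target 80% coverage, as in the paper
scores = np.abs(y_cal - pred(x_cal))
n = len(scores)
q = np.quantile(scores, min(1.0, np.ceil((n + 1) * (1 - alpha)) / n))

lo, hi = pred(x_tst) - q, pred(x_tst) + q
coverage = np.mean((y_tst >= lo) & (y_tst <= hi))
print(f"empirical coverage: {coverage:.2f}")   # typically close to 0.80
```

The finite-sample correction `(n + 1)(1 - alpha) / n` is what makes the coverage guarantee hold exactly on average, not just asymptotically.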

22 pages, 31480 KB  
Article
Bayesian Inference of Primordial Magnetic Field Parameters from CMB with Spherical Graph Neural Networks
by Juan Alejandro Pinto Castro, Héctor J. Hortúa, Jorge Enrique García-Farieta and Roger Anderson Hurtado
Universe 2026, 12(2), 34; https://doi.org/10.3390/universe12020034 - 26 Jan 2026
Abstract
Deep learning has emerged as a transformative methodology in modern cosmology, providing powerful tools to extract meaningful physical information from complex astronomical data. This paper implements a novel Bayesian graph deep learning framework for estimating key cosmological parameters in a primordial magnetic field (PMF) cosmology from simulated Cosmic Microwave Background (CMB) maps. Our methodology utilizes DeepSphere, a spherical convolutional neural network architecture specifically designed to respect the spherical geometry of CMB data through HEALPix pixelization. To advance beyond deterministic point estimates and enable robust uncertainty quantification, we integrate Bayesian Neural Networks (BNNs) into the framework, capturing aleatoric and epistemic uncertainties that reflect the model confidence in its predictions. The proposed approach demonstrates exceptional performance, achieving R2 scores exceeding 89% for the magnetic parameter estimation. We further obtain well-calibrated uncertainty estimates through post hoc training techniques including Variance Scaling and GPNormal. This integrated DeepSphere-BNNs framework delivers accurate parameter estimation from CMB maps with PMF contributions while providing reliable uncertainty quantification, enabling robust cosmological inference in the era of precision cosmology. Full article
(This article belongs to the Section Astroinformatics and Astrostatistics)
36 pages, 4575 KB  
Article
A PI-Dual-STGCN Fault Diagnosis Model Based on the SHAP-LLM Joint Explanation Framework
by Zheng Zhao, Shuxia Ye, Liang Qi, Hao Ni, Siyu Fei and Zhe Tong
Sensors 2026, 26(2), 723; https://doi.org/10.3390/s26020723 - 21 Jan 2026
Abstract
This paper proposes a PI-Dual-STGCN fault diagnosis model based on a SHAP-LLM joint explanation framework to address issues such as the lack of transparency in the diagnostic process of deep learning models and the weak interpretability of diagnostic results. PI-Dual-STGCN enhances the interpretability of graph data by introducing physical constraints and constructs a dual-graph architecture based on physical topology graphs and signal similarity graphs. The experimental results show that the dual-graph complementary architecture enhances diagnostic accuracy to 99.22%. Second, a general-purpose SHAP-LLM explanation framework is designed: Explainable AI (XAI) technology is used to analyze the decision logic of the diagnostic model and generate visual explanations, establishing a hierarchical knowledge base that includes performance metrics, explanation reliability, and fault experience. Retrieval-Augmented Generation (RAG) technology is innovatively combined to integrate model performance and Shapley Additive Explanations (SHAP) reliability assessment through the main report prompt, while the sub-report prompt enables detailed fault analysis and repair decision generation. Finally, experiments demonstrate that this approach avoids the uncertainty of directly using large models for fault diagnosis: we delegate all fault diagnosis tasks and core explainability tasks to more mature deep learning algorithms and XAI technology and only leverage the powerful textual reasoning capabilities of large models to process pre-quantified, fact-based textual information (e.g., model performance metrics, SHAP explanation results). This method enhances diagnostic transparency through XAI-generated visual and quantitative explanations of model decision logic while reducing the risk of large model hallucinations by restricting large models to reasoning over grounded, fact-based textual content rather than direct fault diagnosis, providing verifiable intelligent decision support for industrial fault diagnosis. Full article
(This article belongs to the Section Fault Diagnosis & Sensors)
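
The SHAP side of such a framework assigns each input feature an additive contribution to a prediction. For a small model, Shapley values can be computed exactly by subset enumeration; a sketch on a toy linear model with baseline imputation (illustrative only, not the paper's pipeline):

```python
import itertools
import math
import numpy as np

def shapley_values(f, x, baseline):
    """Exact Shapley values by subset enumeration; features outside the
    coalition are imputed with their baseline value."""
    d = len(x)
    phi = np.zeros(d)
    for i in range(d):
        others = [j for j in range(d) if j != i]
        for k in range(d):
            for S in itertools.combinations(others, k):
                # Shapley weight |S|! (d - |S| - 1)! / d!
                w = math.factorial(len(S)) * math.factorial(d - len(S) - 1) \
                    / math.factorial(d)
                z_with, z_without = baseline.copy(), baseline.copy()
                for j in S:
                    z_with[j] = x[j]
                    z_without[j] = x[j]
                z_with[i] = x[i]
                phi[i] += w * (f(z_with) - f(z_without))
    return phi

# For a linear model, Shapley values reduce to w_j * (x_j - baseline_j)
w = np.array([1.0, -2.0, 0.5])
f = lambda z: float(w @ z)
x = np.array([3.0, 1.0, 4.0])
base = np.zeros(3)
phi = shapley_values(f, x, base)
print(phi)   # feature contributions 3.0, -2.0, 2.0; they sum to f(x) - f(base)
```

The efficiency property (contributions summing to the prediction minus the baseline output) is what makes SHAP reports like the ones described above internally consistent.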

24 pages, 13052 KB  
Article
FGO-PMB: A Factor Graph Optimized Poisson Multi-Bernoulli Filter for Accurate Online 3D Multi-Object Tracking
by Jingyi Jin, Jindong Zhang, Yiming Wang and Yitong Liu
Sensors 2026, 26(2), 591; https://doi.org/10.3390/s26020591 - 15 Jan 2026
Abstract
Three-dimensional multi-object tracking (3D MOT) plays a vital role in enabling reliable perception for LiDAR-based autonomous systems. However, LiDAR measurements often exhibit sparsity, occlusion, and sensor noise that lead to uncertainty and instability in downstream tracking. To address these challenges, we propose FGO-PMB, a unified probabilistic framework that integrates the Poisson Multi-Bernoulli (PMB) filter from Random Finite Set (RFS) theory with Factor Graph Optimization (FGO) for robust LiDAR-based object tracking. In the proposed framework, object states, existence probabilities, and association weights are jointly formulated as optimizable variables within a factor graph. Four factors, including state transition, observation, existence, and association consistency, are formulated to uniformly encode the spatio-temporal constraints among these variables. By unifying the uncertainty modeling capability of RFS with the global optimization strength of FGO, the proposed framework achieves temporally consistent and uncertainty-aware estimation across continuous LiDAR scans. Experiments on KITTI and nuScenes indicate that the proposed method achieves competitive 3D MOT accuracy while maintaining real-time performance. Full article
(This article belongs to the Special Issue Recent Advances in LiDAR Sensing Technology for Autonomous Vehicles)
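
The core idea of factor graph optimization, treating states as variables tied together by transition and observation factors and solving for the jointly most likely assignment, can be sketched on a toy 1D smoothing problem. All noise levels here are illustrative, and the paper's factors over existence probabilities and associations are far richer than these two:

```python
import numpy as np

# Toy 1D factor graph: T state variables, transition factors x[t+1] ~ x[t],
# observation factors z[t] ~ x[t]. Because both factors are linear-Gaussian,
# the MAP estimate is the solution of a single linear system A x = b.
rng = np.random.default_rng(1)
T = 50
truth = np.cumsum(rng.normal(0, 0.1, T))        # slowly drifting state
z = truth + rng.normal(0, 0.5, T)               # noisy observations

w_obs, w_trans = 1.0 / 0.5**2, 1.0 / 0.1**2     # factor information weights
A = np.zeros((T, T))
b = np.zeros(T)
for t in range(T):                              # observation factors
    A[t, t] += w_obs
    b[t] += w_obs * z[t]
for t in range(T - 1):                          # transition factors
    A[t, t] += w_trans
    A[t + 1, t + 1] += w_trans
    A[t, t + 1] -= w_trans
    A[t + 1, t] -= w_trans
x_map = np.linalg.solve(A, b)

# Joint optimization over all scans should beat the raw per-scan observations
print(np.mean((z - truth) ** 2), np.mean((x_map - truth) ** 2))
```

Stacking all time steps into one system is what gives factor-graph smoothing its "temporally consistent" character compared with purely recursive filtering.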

32 pages, 8491 KB  
Article
Uncertainty Analysis of Seismic Effects on Cultural Relics in Collections: Integrating Deep Learning and Reinforcement Strategies
by Lin He, Zhengyi Xu, Mengting Gong, Weikai Wang, Xiaofei Yang and Jianming Wei
Appl. Sci. 2026, 16(2), 879; https://doi.org/10.3390/app16020879 - 15 Jan 2026
Abstract
Due to the unpredictability of seismic events and the complexity of collection environments, significant uncertainty exists regarding their impact on cultural relics. Moreover, existing research on the causal analysis of seismic damage to cultural relics remains insufficient, thereby limiting advancements in risk assessment and protective measures. To address this issue, this paper proposes a seismic damage risk assessment method for cultural relics in collections, integrating deep learning and reinforcement strategies. The proposed method enhances the dataset on seismic impacts on cultural relics by developing an integrated deep learning-based data correction model. Furthermore, it incorporates a graph attention mechanism to precisely quantify the influence of various attribute factors on cultural relic damage. Additionally, by combining reinforcement learning with the Deep Deterministic Policy Gradient (DDPG) strategy, this method refines seismic risk assessments and formulates more targeted preventive protection measures for cultural relics in collections. This study evaluates the proposed method using three public datasets in comparison with the self-constructed Seismic Damage Dataset of Cultural Relics (CR-SDD). Experiments are conducted to assess and analyze the predictive performance of various models. Experimental results demonstrate that the proposed method achieves an accuracy of 81.21% in assessing seismic damage to cultural relics in collections. This research provides a scientific foundation and practical guidance for the protection of cultural relics, offering strong support for preventive conservation efforts in seismic risk mitigation. Full article
(This article belongs to the Section Computing and Artificial Intelligence)

24 pages, 3664 KB  
Review
Global Distribution and Dispersal Pathways of Riparian Invasives: Perspectives Using Alligator Weed (Alternanthera philoxeroides (Mart.) Griseb.) as a Model
by Jia Tian, Jinxia Huang, Yifei Luo, Maohua Ma and Wanyu Wang
Plants 2026, 15(2), 251; https://doi.org/10.3390/plants15020251 - 13 Jan 2026
Abstract
In struggling against invasive species ravaging riverscape ecosystems, gaps in dispersal pathway knowledge and fragmented approaches across scales have long stalled effective riparian management worldwide. To reduce these limitations and enhance invasion management strategies, selecting appropriate alien species as models for in-depth pathway analysis is essential. Alternanthera philoxeroides (Mart.) Griseb. (alligator weed) emerges as an exemplary model species, boasting an invasion record of around 120 years spanning five continents worldwide, supported by genetic evidence of repeated introductions. In addition, the clonal reproduction of A. philoxeroides supports swift establishment, while its amphibious versatility allows occupation of varied riparian environments, with spread driven by natural water-mediated dispersal (hydrochory) and human-related vectors at multiple scales. Thus, leveraging A. philoxeroides, this review proposes a comprehensive multi-scale framework, which integrates monitoring with remote sensing, environmental DNA, Internet of Things, and crowdsourcing for real-time detection. Also, the framework can further integrate, e.g., MaxEnt (Maximum Entropy Model) for climatic suitability and mechanistic simulations of hydrodynamics and human-mediated dispersal to forecast invasion risks. Furthermore, decision-support systems developed from the framework can optimize controls like herbicides and biocontrol, managing uncertainties adaptively. At the global scale, the dispersal paradigm can employ AI-driven knowledge graphs for genetic attribution, multilayer networks, and causal inference to trace pathways and identify disruptions. Based on the premise that our multi-scale framework can bridge invasion ecology with riverscape management using A. philoxeroides as a model, we contend that the implementation of the proposed framework tackles core challenges (such as sampling biases, shifting environmental dynamics, and eco–evolutionary interactions) using stratified sampling and adaptive online algorithms. This methodology is intended to offer scalable tools for other aquatic invasives, evolving management from reactive measures to proactive, network-based approaches that effectively interrupt dispersal routes. Full article
(This article belongs to the Section Plant Ecology)

21 pages, 6454 KB  
Article
Probabilistic Photovoltaic Power Forecasting with Reliable Uncertainty Quantification via Multi-Scale Temporal–Spatial Attention and Conformalized Quantile Regression
by Guanghu Wang, Yan Zhou, Yan Yan, Zhihan Zhou, Zikang Yang, Litao Dai and Junpeng Huang
Sustainability 2026, 18(2), 739; https://doi.org/10.3390/su18020739 - 11 Jan 2026
Abstract
Accurate probabilistic forecasting of photovoltaic (PV) power generation is crucial for grid scheduling and renewable energy integration. However, existing approaches often produce prediction intervals with limited calibration accuracy, and the interdependence among meteorological variables is frequently overlooked. This study proposes a probabilistic forecasting framework based on a Multi-scale Temporal–Spatial Attention Quantile Regression Network (MTSA-QRN) and an adaptive calibration mechanism to enhance uncertainty quantification and ensure statistically reliable prediction intervals. The framework employs a dual-pathway architecture: a temporal pathway combining Temporal Convolutional Networks (TCN) and multi-head self-attention to capture hierarchical temporal dependencies, and a spatial pathway based on Graph Attention Networks (GAT) to model nonlinear meteorological correlations. A learnable gated fusion mechanism adaptively integrates temporal–spatial representations, and weather-adaptive modules enhance robustness under diverse atmospheric conditions. Multi-quantile prediction intervals are calibrated using conformalized quantile regression to ensure reliable uncertainty coverage. Experiments on a real-world PV dataset (15 min resolution) demonstrate that the proposed method offers more accurate and sharper uncertainty estimates than competitive benchmarks, supporting risk-aware operational decision-making in power systems. Quantitative evaluation on a real-world 40 MW photovoltaic plant demonstrates that the proposed MTSA-QRN achieves a CRPS of 0.0400 before calibration, representing an improvement of over 55% compared with representative deep learning baselines such as Quantile-GRU, Quantile-LSTM, and Quantile-Transformer. After adaptive calibration, the proposed method attains a reliable empirical coverage close to the nominal level (PICP90 = 0.9053), indicating effective uncertainty calibration. Although the calibrated prediction intervals become wider, the model maintains a competitive CRPS value (0.0453), striking a favorable balance between reliability and probabilistic accuracy. These results demonstrate the effectiveness of the proposed framework for reliable probabilistic photovoltaic power forecasting. Full article
(This article belongs to the Topic Sustainable Energy Systems)
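
Quantile regression networks like the one described are trained with the pinball (quantile) loss, and interval quality is reported via PICP, the empirical fraction of targets falling inside the predicted interval. A minimal sketch of both quantities on synthetic Gaussian data (not the paper's model):

```python
import numpy as np

def pinball_loss(y, q_pred, tau):
    """Quantile (pinball) loss averaged over samples; minimized when
    q_pred is the tau-quantile of y."""
    diff = y - q_pred
    return np.mean(np.maximum(tau * diff, (tau - 1) * diff))

def picp(y, lower, upper):
    """Prediction Interval Coverage Probability."""
    return np.mean((y >= lower) & (y <= upper))

rng = np.random.default_rng(0)
y = rng.normal(0, 1, 10_000)

# For a standard normal, the 5% and 95% quantiles are about -1.645 and 1.645,
# so the interval [-1.645, 1.645] is the ideal 90% interval (PICP90)
lo, hi = -1.645, 1.645
print(round(picp(y, lo, hi), 3))   # close to the nominal 0.90

# Pinball loss at tau = 0.95 prefers the true quantile over a biased guess
print(pinball_loss(y, np.full_like(y, hi), 0.95)
      < pinball_loss(y, np.full_like(y, 0.0), 0.95))   # True
```

Reporting PICP alongside a proper score like CRPS or pinball loss, as the abstract does, guards against intervals that achieve coverage only by being uselessly wide.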

31 pages, 3343 KB  
Article
GridFM: A Physics-Informed Foundation Model for Multi-Task Energy Forecasting Using Real-Time NYISO Data
by Ali Sayghe, Mohammed Ahmed Mousa, Salem Batiyah, Abdulrahman Husawi and Mansour Almuwallad
Energies 2026, 19(2), 357; https://doi.org/10.3390/en19020357 - 11 Jan 2026
Abstract
The rapid integration of renewable energy sources and increasing complexity of modern power grids demand advanced forecasting tools capable of simultaneously predicting multiple interconnected variables. While time series foundation models (TSFMs) have demonstrated remarkable zero-shot forecasting capabilities across diverse domains, their application in power grid operations remains limited due to complex coupling relationships between load, price, emissions, and renewable generation. This paper proposes GridFM, a novel physics-informed foundation model specifically designed for multi-task energy forecasting in power systems. GridFM introduces four key innovations: (1) a FreqMixer adaptation layer that transforms pre-trained foundation model representations to power-grid-specific patterns through frequency domain mixing without modifying base weights; (2) a physics-informed constraint module embedding power balance equations and zonal grid topology using graph neural networks; (3) a multi-task learning framework enabling joint forecasting of load demand, locational-based marginal prices (LBMP), carbon emissions, and renewable generation with uncertainty-weighted loss functions; and (4) an explainability module utilizing SHAP values and attention visualization for interpretable predictions. We validate GridFM using over 10 years of real-time data from the New York Independent System Operator (NYISO) at 5 min resolution, comprising more than 10 million data points across 11 load zones. Comprehensive experiments demonstrate that GridFM achieves state-of-the-art performance with an 18.5% improvement in load forecasting MAPE (achieving 2.14%), a 23.2% improvement in price forecasting (achieving 7.8% MAPE), and a 21.7% improvement in emission prediction compared to existing TSFMs including Chronos, TimesFM, and Moirai-MoE. Ablation studies confirm the contribution of each proposed component. The physics-informed constraints reduce physically inconsistent predictions by 67%, while the multi-task framework improves individual task performance by exploiting inter-variable correlations. The proposed model provides interpretable predictions supporting the Climate Leadership and Community Protection Act (CLCPA) 2030/2040 compliance objectives, enabling grid operators to make informed decisions for sustainable energy transition and carbon reduction strategies. Full article
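
The abstract mentions uncertainty-weighted loss functions for multi-task training. A common form, which the paper may or may not use exactly, is Kendall-style homoscedastic weighting: each task loss is scaled by a learned log-variance term, with a penalty that stops the model from simply inflating all variances:

```python
import numpy as np

def uncertainty_weighted_loss(task_losses, log_vars):
    """Multi-task loss with learned uncertainty weights (Kendall et al.
    style): sum_i exp(-s_i) * L_i + s_i, where s_i is a learned log-variance."""
    total = 0.0
    for L, s in zip(task_losses, log_vars):
        total += np.exp(-s) * L + s
    return total

# Three task losses on very different scales (hypothetical load / price /
# emissions values, echoing the multi-task setup in the abstract)
losses = [0.5, 4.0, 0.05]

print(uncertainty_weighted_loss(losses, [0.0, 0.0, 0.0]))    # 4.55, unweighted sum
# Raising the noisy task's log-variance down-weights it; total drops below 4.55
print(uncertainty_weighted_loss(losses, [0.0, 1.5, -1.0]))
```

The `+ s_i` term is the key design choice: without it, the optimal `s_i` would be infinite for every task.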

39 pages, 2885 KB  
Article
Usability Assessment Framework for Crowdsensing Data and the Implicit Spatiotemporal Information
by Ying Chen, He Zhang, Jixian Zhang, Jing Shen and Yahang Li
ISPRS Int. J. Geo-Inf. 2026, 15(1), 29; https://doi.org/10.3390/ijgi15010029 - 7 Jan 2026
Abstract
Crowdsensing data serves as a crucial resource for supporting spatiotemporal applications and services. However, its inherent heterogeneity and quality uncertainty present significant challenges for data usability assessment: the evaluation methods are difficult to standardize due to the diverse types of data; assessment dimensions are predominantly confined to internal quality attributes; and a comprehensive framework for data usability evaluation remains lacking. To address these challenges, this study proposes an innovative, multi-layered usability assessment framework applicable to six major categories of crowdsensing data: specialized spatial data, Internet of Things (IoT) sensing data, trajectory data, geographic semantic web, scientific literature, and web texts. Building upon a systematic review of existing research on data quality and usability, our framework conducts a comprehensive evaluation of data efficiency, effectiveness, and satisfaction from dual perspectives—data sources and content. We present a complete system comprising primary and secondary indicators and elaborate on their computation and aggregation methods. Indicator weights are determined through the Analytic Hierarchy Process (AHP) and expert consultations, with sensitivity analysis performed to validate the robustness of the framework. The practical applicability of the framework is demonstrated through a case study of constructing a spatiotemporal knowledge graph, where we assess all six types of data. The results indicate that the framework generates distinguishable usability scores and provides actionable insights for improvement. This framework offers a universal standard for selecting high-quality data in complex decision-making scenarios and facilitates the development of reliable spatiotemporal knowledge services. Full article
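
Determining indicator weights through the Analytic Hierarchy Process, as this framework does, amounts to taking the principal eigenvector of a reciprocal pairwise comparison matrix and checking its consistency ratio. A sketch with a hypothetical 3x3 matrix on the Saaty scale:

```python
import numpy as np

# Hypothetical pairwise comparison matrix (Saaty 1-9 scale, reciprocal):
# criterion 1 is 3x as important as 2 and 5x as important as 3, etc.
P = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

# Priority weights: normalized principal eigenvector
vals, vecs = np.linalg.eig(P)
k = np.argmax(vals.real)
w = vecs[:, k].real
w = w / w.sum()

# Consistency ratio CR = CI / RI, with Saaty's random index RI = 0.58 for n = 3;
# CR < 0.1 is the usual acceptance threshold
lam_max = vals.real[k]
CI = (lam_max - 3) / (3 - 1)
CR = CI / 0.58
print(np.round(w, 3), round(CR, 3))
```

The sensitivity analysis mentioned in the abstract would then perturb entries of `P` and confirm that the resulting weights, and any rankings built from them, remain stable.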

28 pages, 1123 KB  
Article
Trust as a Stochastic Phase on Hierarchical Networks: Social Learning, Degenerate Diffusion, and Noise-Induced Bistability
by Dimitri Volchenkov, Nuwanthika Karunathilaka, Vichithra Amunugama Walawwe and Fahad Mostafa
Dynamics 2026, 6(1), 4; https://doi.org/10.3390/dynamics6010004 - 7 Jan 2026
Abstract
Empirical debates about a “crisis of trust” highlight long-lived pockets of high trust and deep distrust in institutions, as well as abrupt, shock-induced shifts between the two. We propose a probabilistic model in which such phenomena emerge endogenously from social learning on hierarchical networks. Starting from a discrete model on a directed acyclic graph, where each agent makes a binary adoption decision about a single assertion, we derive an effective influence kernel that maps individual priors to stationary adoption probabilities. A continuum limit along hierarchical depth yields a degenerate, non-conservative logistic–diffusion equation for the adoption probability u(x,t), in which diffusion is modulated by (1 − u) and increases the integral of u rather than preserving it. To account for micro-level uncertainty, we perturb these dynamics by multiplicative Stratonovich noise with amplitude proportional to u(1 − u), strongest in internally polarised layers and vanishing at consensus. At the level of a single depth layer, Stratonovich–Itô conversion and Fokker–Planck analysis show that the noise induces an effective double-well potential with two robust stochastic phases, u ≈ 0 and u ≈ 1, corresponding to persistent distrust and trust. Coupled along depth, this local bistability and degenerate diffusion generate extended domains of trust and distrust separated by fronts, as well as rare, Kramers-type transitions between them. We also formulate the associated stochastic partial differential equation in Martin–Siggia–Rose–Janssen–De Dominicis form, providing a field-theoretic basis for future large-deviation and data-informed analyses of trust landscapes in hierarchical societies. Full article
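
The layer-level dynamics described, logistic drift with multiplicative noise proportional to u(1 − u) that vanishes at consensus, can be simulated directly. A toy single-layer sketch using the stochastic Heun scheme, which converges to the Stratonovich interpretation; all parameters are illustrative and the depth coupling of the paper is omitted:

```python
import numpy as np

rng = np.random.default_rng(2)

def heun_stratonovich(u0, sigma, dt=1e-3, steps=5000):
    """Integrate du = u(1-u) dt + sigma * u(1-u) o dW with the stochastic
    Heun (predictor-corrector) scheme, which is Stratonovich-consistent."""
    f = lambda u: u * (1.0 - u)          # drift; also the noise amplitude
    u = u0
    for _ in range(steps):
        dW = rng.normal(0.0, np.sqrt(dt))
        up = u + f(u) * dt + sigma * f(u) * dW          # predictor step
        u = u + 0.5 * (f(u) + f(up)) * dt \
              + 0.5 * sigma * (f(u) + f(up)) * dW       # corrector step
        u = min(max(u, 0.0), 1.0)        # guard against round-off leaving [0,1]
    return u

# Both drift and noise vanish at u = 0 and u = 1, so consensus states
# are absorbing; a trajectory started at u = 0.5 ends near one of them
u_final = heun_stratonovich(0.5, sigma=1.0)
print(u_final)
```

Repeating this over many seeds and noise strengths is the numerical counterpart of the Fokker–Planck analysis: the histogram of `u_final` concentrates near the two stochastic phases.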

19 pages, 778 KB  
Article
GALR: Graph-Based Root Cause Localization and LLM-Assisted Recovery for Microservice Systems
by Wenya Zhang, Zhi Yang, Fang Peng, Le Zhang, Yiting Chen and Ruibo Chen
Electronics 2026, 15(1), 243; https://doi.org/10.3390/electronics15010243 - 5 Jan 2026
Abstract
With the rapid evolution of cloud-native platforms, microservice-based systems have become increasingly large-scale and complex, making fast and accurate root cause localization and recovery a critical challenge. Runtime signals in such systems are inherently multimodal—combining metrics, logs, and traces—and are intertwined through deep, dynamic service dependencies, which often leads to noisy alerts, ambiguous fault propagation paths, and brittle, manually curated recovery playbooks. To address these issues, we propose GALR, a graph- and LLM-based framework for root cause localization and recovery in microservice-based business middle platforms. GALR first constructs a multimodal service call graph by fusing time-series metrics, structured logs, and trace-derived topology, and employs a GAT-based root cause analysis module with temporal-aware edge attention to model failure propagation. On top of this, an LLM-based node enhancement mechanism infers anomaly, normal, and uncertainty scores from log contexts and injects them into node representations and attention bias terms, improving robustness under noisy or incomplete signals. Finally, GALR integrates a retrieval-augmented LLM agent that retrieves similar historical cases and generates executable recovery strategies, with consistency checking against expert-standard playbooks to ensure safety and reproducibility. Extensive experiments on three representative microservice datasets demonstrate that GALR consistently achieves superior Top-k accuracy and mean reciprocal rank for root cause localization, while the retrieval-augmented agent yields substantially more accurate and actionable recovery plans compared with graph-only and LLM-only baselines, providing a practical closed-loop solution from anomaly perception to recovery execution. Full article
(This article belongs to the Special Issue Advanced Techniques for Multi-Agent Systems)

15 pages, 2605 KB  
Article
A Two-Stage Voltage Sag Source Localization Method in Microgrids
by Ruotian Yao, Hao Bai, Shiqi Jiang, Tong Liu, Yiyong Lei and Yawen Zheng
Energies 2026, 19(1), 258; https://doi.org/10.3390/en19010258 - 3 Jan 2026
Abstract
Accurate localization of voltage sag sources is crucial for maintaining reliable and stable operation in microgrids with high penetration of distributed generation (DG). However, the complex topology, bidirectional and time-varying power flows, and measurement uncertainty make it difficult for conventional model-based approaches to achieve high accuracy. To address these challenges, this paper proposes a two-stage voltage sag source localization method that integrates a data-driven spatio-temporal learning model with a model-based binary search refinement. In the first stage, an improved spatial-temporal graph convolutional network (STGCN) is developed to extract temporal and spatial correlations among voltage and current measurements, enabling section-level localization of sag sources. In the second stage, a binary search–based refinement strategy is applied within the candidate section to iteratively converge on the exact fault location with high precision and robustness. Simulations are conducted on a modified IEEE 33-node system with diverse PV output scenarios, covering combinations of fault types and locations. The results demonstrate that the proposed method maintains stable localization performance under high DG penetration and achieves high accuracy despite multiple fault types and noise interference. Full article
(This article belongs to the Special Issue Modeling, Stability Analysis and Control of Microgrids)
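The second-stage refinement described in the abstract can be pictured as an interval-halving search along the candidate section. The sketch below is a hypothetical illustration, not the paper's implementation: `sag_residual` stands in for a model-based comparison between measured and simulated sag quantities, and its sign is assumed to indicate which half of the section contains the source.

```python
# Hypothetical sketch of a binary search-based refinement within one
# candidate section of the feeder. Positions are, e.g., distances in km
# along the section; `sag_residual(pos)` is a placeholder whose sign is
# assumed to tell us whether the source lies downstream of `pos`.
def locate_sag_source(sag_residual, lo, hi, tol=1e-3):
    """Return an estimated fault position in [lo, hi]."""
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if sag_residual(mid) > 0:   # source assumed downstream of mid
            lo = mid
        else:                       # source assumed upstream of mid
            hi = mid
    return (lo + hi) / 2.0

# Toy usage: a residual that crosses zero at the true location 3.7
estimate = locate_sag_source(lambda x: 3.7 - x, 0.0, 10.0)
```

The appeal of this stage is cost: each iteration halves the search interval, so convergence to tolerance `tol` takes only O(log((hi - lo)/tol)) residual evaluations.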

41 pages, 2277 KB  
Article
Navigating Technological Frontiers: Explainable Patent Recommendation with Temporal Dynamics and Uncertainty Modeling
by Kuan-Wei Huang
Symmetry 2026, 18(1), 78; https://doi.org/10.3390/sym18010078 - 2 Jan 2026
Abstract
Rapid technological innovation has made navigating millions of new patent filings a critical challenge for corporations and research institutions. Existing patent recommendation systems, largely constrained by their static designs, struggle to capture the dynamic pulse of an ever-evolving technological ecosystem. At the same time, their “black-box” decision-making processes severely limit their trustworthiness and practical value in high-stakes, real-world scenarios. To address this impasse, we introduce TEAHG-EPR, a novel, end-to-end framework for explainable patent recommendation. The core of our approach is to reframe the recommendation task as a dynamic learning and reasoning process on a temporal-aware attributed heterogeneous graph. Specifically, we first construct a sequence of patent knowledge graphs that evolve on a yearly basis. A dual-encoder architecture, comprising a Relational Graph Convolutional Network (R-GCN) and a Bidirectional Long Short-Term Memory network (Bi-LSTM), is then employed to simultaneously capture the spatial structural information within each time snapshot and the evolutionary patterns across time. Building on this foundation, we introduce uncertainty modeling, learning a dual “deterministic core + probabilistic potential” representation for each entity and balancing recommendation precision with exploration through a hybrid similarity metric. Finally, to achieve true explainability, we design a feature-guided controllable text generation module that can attach a well-reasoned, faithful textual explanation to every single recommendation. We conducted comprehensive experiments on two large-scale datasets: a real-world industrial patent dataset (USPTO) and a classic academic dataset (AMiner).
The results are compelling: TEAHG-EPR not only significantly outperforms all state-of-the-art baselines in recommendation accuracy but also demonstrates a decisive advantage across multiple “beyond-accuracy” dimensions, including explanation quality, diversity, and novelty. Full article
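One way to read the “deterministic core + probabilistic potential” idea is that each entity carries a mean embedding (the core) and a variance vector (the potential), and the hybrid metric blends core similarity with a variance-driven exploration bonus. The sketch below is an assumption-laden illustration of that pattern; `alpha` and the bonus form are inventions for this sketch, not the paper's exact formulation.

```python
import numpy as np

# Hypothetical hybrid similarity: cosine similarity of the deterministic
# cores, plus an exploration bonus that grows with the entities' combined
# uncertainty. Both the weighting `alpha` and the bonus form are assumed.
def hybrid_similarity(mu_u, var_u, mu_i, var_i, alpha=0.8):
    cos = float(mu_u @ mu_i / (np.linalg.norm(mu_u) * np.linalg.norm(mu_i)))
    exploration = float(np.sqrt(var_u + var_i).mean())  # higher variance -> bigger bonus
    return alpha * cos + (1.0 - alpha) * exploration

mu = np.array([1.0, 0.0])
var_lo = np.array([0.01, 0.01])   # well-observed entity
var_hi = np.array([1.0, 1.0])     # uncertain entity
# Identical cores, but the higher-variance pair earns a larger score,
# nudging the recommender toward under-explored items:
s_lo = hybrid_similarity(mu, var_lo, mu, var_lo)
s_hi = hybrid_similarity(mu, var_hi, mu, var_hi)
```

The design point such a metric captures is the precision/exploration trade-off the abstract mentions: when uncertainty is low the score reduces to plain core similarity, and the bonus only matters for entities the model has seen little evidence for.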

23 pages, 14883 KB  
Article
A Structure-Invariant Transformer for Cross-Regional Enterprise Delisting Risk Identification
by Kang Li and Xinyang Li
Sustainability 2026, 18(1), 397; https://doi.org/10.3390/su18010397 - 31 Dec 2025
Abstract
Cross-regional enterprise financial distress can undermine long-term corporate viability, weaken regional industrial resilience, and amplify systemic risk, making robust early-warning tools essential for sustainable financial governance. This study investigates the problem of cross-regional enterprise delisting-related distress identification under heterogeneous economic structures and highly imbalanced risk samples. We propose a cross-domain learning framework that aims to deliver stable, interpretable, and transferable risk signals across regions without requiring access to labeled data from the target domain. Using a multi-source empirical dataset covering Beijing, Shanghai, Jiangsu, and Zhejiang, we conduct leave-one-domain-out evaluations that simulate real-world regulatory deployment. The results demonstrate consistent improvements over representative sequential and graph-based baselines, indicating stronger cross-regional generalization and more reliable identification of borderline and noisy cases. By linking cross-domain stability with uncertainty-aware risk screening, this work contributes a practical and economically meaningful solution for sustainable corporate oversight, offering actionable value for policy-oriented financial supervision and regional economic sustainability. Full article
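The leave-one-domain-out protocol mentioned above can be sketched as a simple loop: each region is held out in turn as the unlabeled target while the model trains only on the remaining regions. The `train` and `evaluate` callables below are placeholders, not the paper's framework.

```python
# Minimal sketch of leave-one-domain-out evaluation across regions.
# `train(sources)` fits a model on the labeled source domains only;
# `evaluate(model, target)` scores it on the held-out target domain.
def leave_one_domain_out(domains, train, evaluate):
    scores = {}
    for target in domains:
        sources = [d for d in domains if d != target]
        model = train(sources)                     # no target labels used
        scores[target] = evaluate(model, target)   # out-of-domain score
    return scores

regions = ["Beijing", "Shanghai", "Jiangsu", "Zhejiang"]
# Toy usage with stand-in callables: the "model" is just the source set,
# and "evaluation" checks the target was genuinely excluded from training.
scores = leave_one_domain_out(
    regions,
    train=lambda srcs: set(srcs),
    evaluate=lambda model, target: target not in model,
)
```

This setup mirrors the deployment scenario in the abstract: a regulator applying the model to a region for which no labeled distress data exists.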
