Search Results (393)

Search Parameters:
Keywords = Bayesian Reasoning

23 pages, 2002 KB  
Article
Risk Assessment of Coal Mine Ventilation System Based on Fuzzy Polymorphic Bayes: A Case Study of H Coal Mine
by Jin Zhao, Juan Shi and Jinhui Yang
Systems 2026, 14(1), 99; https://doi.org/10.3390/systems14010099 - 16 Jan 2026
Viewed by 216
Abstract
Coal mine ventilation systems face coupled and systemic risks characterized by structural interconnection and disaster chain propagation. To accurately quantify and evaluate this overall system risk, this study presents a new method for risk assessment of coal mine ventilation systems based on fuzzy polymorphic Bayesian networks. The method addresses the shortcomings of traditional assessment approaches in the probabilistic quantification of risk. A Bayesian network with 44 nodes was established from five dimensions: ventilation power, ventilation network, ventilation facilities, human and management factors, and work environment. Risk was divided into multiple states based on the As Low As Reasonably Practicable (ALARP) criterion. The probabilities of evaluation-type root nodes were calculated using fuzzy evaluation, and subjective bias was corrected by introducing a reliability coefficient. The concept of distance compensation is proposed to flexibly calculate the probabilities of quantitative-type root nodes. Verification on the ventilation system of H Coal Mine in Shanxi, China, shows that the probability of the system being in the high-risk state is 18%, and that external air leakage of the mine contributes the largest share of this high-risk probability. The evaluation results are consistent with real-world conditions and can serve as a reference for improving the safety of coal mine ventilation systems.
(This article belongs to the Special Issue Advances in Reliability Engineering for Complex Systems)
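
The fuzzy-evaluation step lends itself to a compact illustration. Below is a minimal sketch of how expert linguistic ratings could be turned into root-node probabilities and damped toward a uniform prior by a reliability coefficient; the 4-level scale, the blending formula, and all numbers are assumptions for illustration, not the paper's exact procedure.

```python
import numpy as np

# Each expert rates a root node on a 4-level linguistic scale (0..3).
# Ratings become a fuzzy membership vector, which is damped toward a
# uniform prior by a reliability coefficient r (r = 1 trusts experts fully).
def fuzzy_root_probability(expert_ratings, r=0.8, n_states=4):
    counts = np.bincount(expert_ratings, minlength=n_states)
    membership = counts / counts.sum()
    uniform = np.full(n_states, 1.0 / n_states)
    return r * membership + (1 - r) * uniform   # bias-corrected probabilities

ratings = np.array([2, 3, 2, 1, 2])     # five experts, states 0..3
print(fuzzy_root_probability(ratings))  # [0.05 0.21 0.53 0.21]
```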

30 pages, 4543 KB  
Article
Dynamic Risk Assessment of the Coal Slurry Preparation System Based on LSTM-RNN Model
by Ziheng Zhang, Rijia Ding, Wenxin Zhang, Liping Wu and Ming Liu
Sustainability 2026, 18(2), 684; https://doi.org/10.3390/su18020684 - 9 Jan 2026
Viewed by 152
Abstract
As the core technology for clean and efficient utilization of coal, coal gasification plays an important role in reducing environmental pollution, improving coal utilization, and achieving sustainable energy development. To ensure the safe, stable, and long-term operation of coal gasification plants, and to address the strong subjectivity of dynamic Bayesian network (DBN) prior data in dynamic risk assessment, this study takes the coal slurry preparation system—the main piece of equipment in the initial stage of the coal gasification process—as the research object and uses a long short-term memory (LSTM) model combined with a back-propagation (BP) neural network model to optimize the DBN prior data. To further validate the superiority of the model, a gated recurrent unit (GRU) model was introduced for comparative verification. The mean absolute error (MAE), root mean square error (RMSE), and coefficient of determination are used to evaluate the generalization ability of the LSTM model; the results show that the LSTM model's predictions are more accurate and stable. Bidirectional inference is performed on the optimized DBN of the coal slurry preparation system to achieve dynamic reliability analysis. Forward reasoning quantifies the system's reliability and clearly demonstrates its trend over time, providing data support for stable operation and subsequent upgrades. Reverse reasoning identifies key events and weak links before and after system optimization, from which targeted improvement measures can be proposed.
(This article belongs to the Special Issue Process Safety and Control Strategies for Urban Clean Energy Systems)
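
To make the LSTM-to-prior idea concrete, here is a hedged sketch in PyTorch: an LSTM is fit to a toy failure-rate series and its one-step forecast stands in for a data-driven DBN prior. The model size, training setup, and series are invented; the paper's actual pipeline (including the BP and GRU comparisons) is more involved.

```python
import torch
import torch.nn as nn

# An LSTM is fit to a toy failure-rate series; its one-step forecast
# stands in for a data-driven DBN prior.
class RatePredictor(nn.Module):
    def __init__(self, hidden=16):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):              # x: (batch, seq_len, 1)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])   # predict the next value

series = torch.linspace(0.01, 0.05, 40).reshape(1, 40, 1)  # toy failure rates
model = RatePredictor()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for _ in range(200):                   # fit "first 39 steps -> 40th step"
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(series[:, :-1]), series[:, -1])
    loss.backward()
    opt.step()
prior = model(series[:, 1:]).item()    # forecast feeds the DBN as its prior
print(prior)
```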

43 pages, 4289 KB  
Article
A Stochastic Model Approach for Modeling SAG Mill Production and Power Through Bayesian Networks: A Case Study of the Chilean Copper Mining Industry
by Manuel Saldana, Edelmira Gálvez, Mauricio Sales-Cruz, Eleazar Salinas-Rodríguez, Jonathan Castillo, Alessandro Navarra, Norman Toro, Dayana Arias and Luis A. Cisternas
Minerals 2026, 16(1), 60; https://doi.org/10.3390/min16010060 - 6 Jan 2026
Viewed by 228
Abstract
Semi-autogenous (SAG) milling represents one of the most energy-intensive and variable stages of copper mineral processing. Traditional deterministic models often fail to capture the nonlinear dependencies and uncertainty inherent in industrial operating variables such as granulometry, solids percentage in the feed, or ore hardness. This work develops and validates a stochastic model based on discrete Bayesian networks (BNs) to represent the causal relationships governing SAG Production and SAG Power under uncertainty or partial knowledge of explanatory variables. Discretization is adopted for methodological reasons as well as for operational relevance, since SAG plant decisions are typically made using threshold-based categories. Using operational data from a Chilean mining operation, the fitted model integrates expert-guided structure learning (Hill-Climbing with BDeu/BIC scores) and Bayesian parameter estimation with Dirichlet priors. Although validation indicators show high predictive performance (R² ≈ 0.85–0.90, RMSE < 0.5 bin, and micro-AUC ≈ 0.98), the primary purpose of the BN is not exact regression but explainable causal inference and probabilistic scenario evaluation. Sensitivity analysis identified water feed and solids percentage as key drivers of throughput (SAG Production), while rotational speed and pressure governed SAG Power behavior. The BN framework effectively balances accuracy and interpretability, offering an explainable probabilistic representation of SAG dynamics. These results demonstrate the potential of stochastic modeling to enhance process control and support uncertainty-aware decision making.
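
As a rough sketch of the learning pipeline described above, the snippet below bins synthetic operating variables into three threshold-style categories, runs hill-climbing structure search with a BDeu score, and fits CPDs with Dirichlet (BDeu) priors, assuming pgmpy's estimator API. Column names, the binning, and the data generator are placeholders, not the paper's preprocessing.

```python
import numpy as np
import pandas as pd
from pgmpy.estimators import BayesianEstimator, BDeuScore, HillClimbSearch
from pgmpy.models import BayesianNetwork

# Synthetic stand-ins for the operating variables, binned into the kind of
# threshold-based categories the paper motivates.
rng = np.random.default_rng(0)
n = 500
water = rng.normal(0, 1, n)
solids = 0.6 * water + rng.normal(0, 1, n)
production = 0.8 * water + 0.5 * solids + rng.normal(0, 1, n)
df = pd.DataFrame({
    c: pd.cut(v, bins=3, labels=["low", "mid", "high"]).astype(str)
    for c, v in {"water_feed": water, "solids_pct": solids,
                 "sag_production": production}.items()
})

dag = HillClimbSearch(df).estimate(scoring_method=BDeuScore(df))
model = BayesianNetwork(dag.edges())
model.fit(df, estimator=BayesianEstimator, prior_type="BDeu")  # Dirichlet priors
print(sorted(dag.edges()))
```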

23 pages, 1990 KB  
Article
CXCL1, RANTES, IFN-γ, and TMAO as Differential Biomarkers Associated with Cognitive Change After an Anti-Inflammatory Diet in Children with ASD and Neurotypical Peers
by Luisa Fernanda Méndez-Ramírez, Miguel Andrés Meñaca-Puentes, Luisa Matilde Salamanca-Duque, Marysol Valencia-Buitrago, Andrés Felipe Ruiz-Pulecio, Carlos Alberto Ruiz-Villa, Diana María Trejos-Gallego, Juan Carlos Carmona-Hernández, Sandra Bibiana Campuzano-Castro, Marcela Orjuela-Rodríguez, Vanessa Martínez-Díaz, Jessica Triviño-Valencia and Carlos Andrés Naranjo-Galvis
Med. Sci. 2026, 14(1), 11; https://doi.org/10.3390/medsci14010011 - 26 Dec 2025
Viewed by 281
Abstract
Background/Objective: Neuroimmune and metabolic dysregulation have been increasingly implicated in the cognitive heterogeneity of autism spectrum disorder (ASD). However, it remains unclear whether anti-inflammatory diets engage distinct biological and cognitive pathways in autistic and neurotypical children. This study examined whether a 12-week anti-inflammatory dietary protocol produces group-specific neuroimmune–metabolic signatures and cognitive responses in autistic children, neurotypical children receiving the same diet, and untreated neurotypical controls. Methods: Twenty-two children (11 with ASD, six neurotypical children receiving the diet [NT-diet], and five untreated neurotypical controls [NT-control]) completed pre–post assessments of plasma IFN-γ, CXCL1, RANTES (CCL5), trimethylamine-N-oxide (TMAO), and an extensive ENI-2/WISC-IV neuropsychological battery. Linear mixed-effects models were used to test Time × Group effects on biomarkers and cognitive domains, adjusting for age, sex, and baseline TMAO. Bayesian estimation quantified individual changes (posterior means, 95% credible intervals, and posterior probabilities). Immune–cognitive coupling was explored using Δ–Δ correlation matrices, network metrics (node strength, degree centrality), exploratory mediation models, and responder (≥0.5 SD domain improvement) versus non-responder analyses. Results: In ASD, the diet induced robust reductions in IFN-γ, RANTES, CXCL1, and TMAO, with decisive Bayesian evidence for IFN-γ and RANTES suppression (posterior P(δ < 0) > 0.99). These shifts were selectively associated with gains in verbal learning, semantic fluency, verbal reasoning, attention, and visuoconstructive abilities, whereas working memory and executive flexibility changes were heterogeneous, revealing executive vulnerability in individuals with smaller TMAO reductions. NT-diet children showed modest but consistent improvements in visuospatial processing, attention, and processing speed, with minimal biomarker changes; NT controls remained biologically and cognitively stable. Network analyses in ASD revealed a dense chemokine-anchored architecture with CXCL1 and RANTES as central hubs linking biomarker reductions to improvements in fluency, memory, attention, and executive flexibility. ΔTMAO predicted changes in executive flexibility only in ASD (explaining >50% of the variance), functioning as a metabolic node of executive susceptibility. Responders displayed larger coordinated decreases in all biomarkers and broader cognitive gains compared to non-responders. Conclusions: A structured anti-inflammatory diet elicits an ASD-specific, coordinated neuroimmune–metabolic response in which suppression of CXCL1 and RANTES and modulation of TMAO are tightly coupled with selective improvements in verbal, attentional, and executive domains. Neurotypical children exhibit modest metabolism-linked cognitive benefits and minimal immune modulation. These findings support a precision-nutrition framework in ASD, emphasizing baseline immunometabolic profiling and network-level biomarkers (CXCL1, RANTES, TMAO) to stratify responders and design combinatorial interventions targeting neuroimmune–metabolic pathways.
(This article belongs to the Section Translational Medicine)
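
The Time × Group test at the core of the Methods can be sketched with a linear mixed model on synthetic data shaped like the design (pre/post × three groups, random intercept per subject). Variable names and effect sizes are invented, not the study's data; shown with statsmodels:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
groups = ["ASD"] * 11 + ["NT_diet"] * 6 + ["NT_control"] * 5
rows = []
for subj, g in enumerate(groups):
    base = rng.normal(10, 2)                  # subject-specific baseline
    for time in (0, 1):
        drop = -1.5 if (g == "ASD" and time == 1) else 0.0  # diet effect in ASD
        rows.append({"subject": subj, "group": g, "time": time,
                     "ifn_gamma": base + drop + rng.normal(0, 0.5)})
df = pd.DataFrame(rows)

# random intercept per subject; the Time x Group interaction is the test
m = smf.mixedlm("ifn_gamma ~ time * group", df, groups=df["subject"]).fit()
print(m.params.filter(like="time"))
```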

31 pages, 1440 KB  
Article
From Reliability Modelling to Cognitive Orchestration: A Paradigm Shift in Aircraft Predictive Maintenance
by Igor Kabashkin and Timur Tyncherov
Mathematics 2026, 14(1), 76; https://doi.org/10.3390/math14010076 - 25 Dec 2025
Viewed by 230
Abstract
This study formulates predictive maintenance of complex technical systems as a constrained multi-layer probabilistic optimization problem that unifies four interdependent analytical paradigms. The mathematical framework integrates: (i) Weibull reliability modelling with parametric lifetime estimation; (ii) Bayesian posterior updating for dynamic adaptation under uncertainty; (iii) nonlinear machine-learning inference for data-driven pattern recognition; and (iv) ontology-based semantic reasoning governed by logical axioms and domain-specific constraints. The four layers are synthesized through a formal orchestration operator, defined as a sequential composition, where each sub-operator is governed by explicit mathematical constraints: Weibull cumulative distribution functions, Bayesian likelihood-posterior relationships, gradient-based loss minimization, and description logic entailment. The system operates within a cognitive digital twin architecture, with orchestration convergence formalized through iterative parameter refinement until consistency between numerical predictions and semantic validation is achieved. The framework is validated through a case study on aircraft wheel-hub crack prediction. The mathematical formulation establishes a rigorous analytical foundation for cognitive predictive maintenance systems applicable to safety-critical technical systems including aerospace, energy infrastructure, transportation networks, and industrial machinery.
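
The Weibull-plus-Bayesian core of layers (i)–(ii) can be shown in a few lines. This is a worked toy, not the paper's formulation: the shape parameter, the grid prior over the scale, and the observed lifetimes are all assumed.

```python
import numpy as np

# Weibull reliability R(t) = exp(-(t/eta)^beta), with a grid-based
# Bayesian update of the scale eta from observed lifetimes.
beta = 2.3                                 # shape, taken as known
etas = np.linspace(500, 3000, 400)         # candidate scales (hours)
prior = np.full(etas.size, 1.0 / etas.size)

def weibull_loglik(t, eta, b):
    return np.log(b / eta) + (b - 1) * np.log(t / eta) - (t / eta) ** b

lifetimes = [900.0, 1200.0, 1500.0]        # observed failures (hours)
loglik = sum(weibull_loglik(t, etas, beta) for t in lifetimes)
post = prior * np.exp(loglik - loglik.max())
post /= post.sum()                         # Bayesian posterior over eta

eta_hat = (post * etas).sum()              # posterior-mean scale
print(np.exp(-(800.0 / eta_hat) ** beta))  # updated reliability at 800 h
```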

21 pages, 1185 KB  
Article
Evaluating Model Resilience to Data Poisoning Attacks: A Comparative Study
by Ifiok Udoidiok, Fuhao Li and Jielun Zhang
Information 2026, 17(1), 9; https://doi.org/10.3390/info17010009 - 22 Dec 2025
Viewed by 412
Abstract
Machine learning (ML) has become a cornerstone of critical applications, but its vulnerability to data poisoning attacks threatens system reliability and trustworthiness. Prior studies have begun to investigate the impact of data poisoning and proposed various defense or evaluation methods; however, most efforts remain limited to quantifying performance degradation, with little systematic comparison of internal behaviors across model architectures under attack and insufficient attention to interpretability for revealing model vulnerabilities. To tackle this issue, we build a reproducible evaluation pipeline and emphasize the importance of integrating robustness with interpretability in the design of secure and trustworthy ML systems. To be specific, we propose a unified poisoning evaluation framework that systematically compares traditional ML models, deep neural networks, and large language models under three representative attack strategies including label flipping, random corruption, and adversarial insertion, at escalating severity levels of 30%, 50%, and 75%, and integrate LIME-based explanations to trace the evolution of model reasoning. Experimental results demonstrate that traditional models collapse rapidly under label noise, whereas Bayesian LSTM hybrids and large language models maintain stronger resilience. Further interpretability analysis uncovers attribution failure patterns, such as over-reliance on neutral tokens or misinterpretation of adversarial cues, providing insights beyond accuracy metrics.
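
The label-flipping attack at escalating severity is easy to reproduce in miniature. The sketch below uses a stand-in classifier and synthetic data (not the paper's models or datasets) to show how test accuracy degrades as the poisoning rate rises.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

rng = np.random.default_rng(0)
for severity in [0.0, 0.3, 0.5, 0.75]:
    y_poison = y_tr.copy()
    idx = rng.choice(len(y_tr), size=int(severity * len(y_tr)), replace=False)
    y_poison[idx] = 1 - y_poison[idx]          # flip the selected labels
    clf = LogisticRegression(max_iter=1000).fit(X_tr, y_poison)
    acc = accuracy_score(y_te, clf.predict(X_te))
    print(f"{severity:.0%} poisoned -> test accuracy {acc:.3f}")
```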

28 pages, 11604 KB  
Article
How to Prevent Construction Safety Accidents? Exploring Critical Factors with Systems Thinking and Bayesian Networks
by Wei Zhang, Nannan Xue, Yidan Cao and Tingsheng Zhao
Buildings 2026, 16(1), 39; https://doi.org/10.3390/buildings16010039 - 22 Dec 2025
Viewed by 396
Abstract
Construction safety remains a critical concern, with frequent accidents leading to fatalities, severe injuries, and significant economic losses. To address these challenges and enhance accident prevention, this study adopts a systems thinking approach to investigate the causal factors of construction safety accidents. First, drawing on Rasmussen’s risk management framework, this study developed a Construction Accident Causation System (CACS) model that comprises six hierarchical levels and 23 influencing factors. Through the analysis of 331 investigation reports of construction accidents in China, causal factor correlations were refined, and the topological structure and network parameters of the model were determined. This study integrates diagnostic reasoning, sensitivity analysis, and fuzzy mathematics within a Bayesian Network (BN) framework. Through this approach, it identifies the most probable accident pathways and highlights seven critical and three sensitive factors that jointly exacerbate construction safety risks. A real-world case of a formwork collapse in Baotou City is further analyzed to verify the model’s reliability and practical relevance. The results confirm that the integrated CACS and BN framework effectively captures the multi-level interactions among managerial, behavioral, and technical factors, providing a scientific basis for proactive safety management and accident prevention in the construction industry.
(This article belongs to the Section Construction Management, and Computers & Digitization)
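
A one-variable caricature of the sensitivity analysis: perturb a root factor's probability and read off the accident node's response. The two-state network and its numbers are invented for illustration.

```python
import numpy as np

# Two-state toy: one root factor, one accident node. Sweep the factor's
# probability and watch the accident node's marginal respond.
p_acc_given = np.array([0.05, 0.45])   # P(accident | factor absent, present)
for p_factor in np.linspace(0.1, 0.9, 5):
    p_acc = (1 - p_factor) * p_acc_given[0] + p_factor * p_acc_given[1]
    print(f"P(factor)={p_factor:.1f} -> P(accident)={p_acc:.3f}")
```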

52 pages, 782 KB  
Article
Single-Stage Causal Incentive Design via Optimal Interventions
by Sebastián Bejos, Eduardo F. Morales, Luis Enrique Sucar and Enrique Munoz de Cote
Entropy 2026, 28(1), 4; https://doi.org/10.3390/e28010004 - 19 Dec 2025
Viewed by 321
Abstract
We introduce Causal Incentive Design (CID), a framework that applies causal inference to canonical single-stage principal–agent problems (PAPs) characterized by bilateral private information. Within CID, the operating rules of PAPs are formalized using an additive-noise causal graphical model (CGM). Incentives are modeled as interventions on a function space variable, Γ, which correspond to policy interventions in the principal–follower causal relation. The causal inference target estimand V(Γ) is defined as the expected value of the principal’s utility variable under a specified policy intervention in the post-intervention distribution. In the context of additive-Gaussian independent noise, the estimand V(Γ) decomposes into a two-layer expectation: (i) an inner Gaussian smoothing of the principal’s utility regression; and (ii) an outer averaging over the conditional probability of the follower’s action given the incentive policy. A Gauss–Hermite quadrature method is employed to efficiently estimate the first layer, while a policy-local kernel reweighting approach is used for the second. For offline selection of a single incentive policy, a Functional Causal Bayesian Optimization (FCBO) algorithm is introduced. This algorithm models the objective functional γ ↦ V(γ) using a functional Gaussian process surrogate defined on a Reproducing Kernel Hilbert Space (RKHS) domain and utilizes an Upper Confidence Bound (UCB) acquisition functional. Consequently, the policy value V(γ) becomes an interventional query that can be answered using offline observational data under standard identifiability assumptions. High-probability cumulative-regret bounds are established in terms of differential information gain for the proposed FCBO algorithm. Collectively, these elements constitute the central contributions of the CID framework, which integrates causal inference through identification and estimation with policy search in principal–agent problems under private information. This approach establishes a causal decision-making pipeline that enables commitment to a high-performing incentive in a single-shot game, supported by regret guarantees. Provided that the data used for estimation are sufficient, the resulting offline pipeline is appropriate for scenarios where adaptive deployment is impractical or costly. Beyond the methodological contribution, this work introduces a novel application of causal graphical models and causal reasoning to incentive design and principal–agent problems, which are central to economics and multi-agent systems.
(This article belongs to the Special Issue Causal Graphical Models and Their Applications)
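
The inner Gaussian-smoothing layer maps directly onto standard Gauss–Hermite quadrature. A minimal sketch with a toy utility in place of the principal's utility regression; the change of variables y = μ + √2·σ·x gives E[u(Y)] ≈ π^(-1/2) Σᵢ wᵢ u(μ + √2·σ·xᵢ):

```python
import numpy as np

# E[u(Y)] for Y ~ N(mu, sigma^2) via Gauss-Hermite nodes/weights:
# E[u(Y)] ~= pi^(-1/2) * sum_i w_i * u(mu + sqrt(2)*sigma*x_i)
def gh_expectation(u, mu, sigma, n_nodes=32):
    x, w = np.polynomial.hermite.hermgauss(n_nodes)
    return (w * u(mu + np.sqrt(2.0) * sigma * x)).sum() / np.sqrt(np.pi)

u = lambda y: -(y - 1.0) ** 2                # toy concave utility
print(gh_expectation(u, mu=0.5, sigma=0.3))  # exact: -((0.5-1)^2 + 0.3^2) = -0.34
```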

31 pages, 4844 KB  
Article
GAME-YOLO: Global Attention and Multi-Scale Enhancement for Low-Visibility UAV Detection with Sub-Pixel Localization
by Ruohai Di, Hao Fan, Yuanzheng Ma, Jinqiang Wang and Ruoyu Qian
Entropy 2025, 27(12), 1263; https://doi.org/10.3390/e27121263 - 18 Dec 2025
Viewed by 492
Abstract
Detecting low-altitude, slow-speed, small (LSS) UAVs is especially challenging in low-visibility scenes (low light, haze, motion blur), where inherent uncertainties in sensor data and object appearance dominate. We propose GAME-YOLO, a novel detector that integrates a Bayesian-inspired probabilistic reasoning framework with Global Attention and Multi-Scale Enhancement to improve small-object perception and sub-pixel-level localization. Built on YOLOv11, our framework comprises: (i) a visibility restoration front-end that probabilistically infers and enhances latent image clarity; (ii) a global-attention-augmented backbone that performs context-aware feature selection; (iii) an adaptive multi-scale fusion neck that dynamically weights feature contributions; (iv) a sub-pixel-aware small-object detection head (SOH) that leverages high-resolution feature grids to model sub-pixel offsets; and (v) a novel Shape-Aware IoU loss combined with focal loss. Extensive experiments on the LSS2025-DET dataset demonstrate that GAME-YOLO achieves state-of-the-art performance, with an AP@50 of 52.0% and AP@[0.50:0.95] of 32.0%, significantly outperforming strong baselines such as LEAF-YOLO (48.3% AP@50) and YOLOv11 (36.2% AP@50). The model maintains high efficiency, operating at 48 FPS with only 7.6 M parameters and 19.6 GFLOPs. Ablation studies confirm the complementary gains from our probabilistic design choices, including a +10.5 pp improvement in AP@50 over the baseline. Cross-dataset evaluation on VisDrone-DET2021 further validates its generalization capability, achieving 39.2% AP@50. These results indicate that GAME-YOLO offers a practical and reliable solution for vision-based UAV surveillance by effectively marrying the efficiency of deterministic detectors with the robustness principles of Bayesian inference.
(This article belongs to the Special Issue Bayesian Networks and Causal Discovery)
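
For reference, the plain axis-aligned IoU that the proposed Shape-Aware IoU loss builds upon; the shape-aware terms themselves are specific to the paper and not reproduced here.

```python
# Boxes are (x1, y1, x2, y2) in pixels.
def iou(a, b):
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)   # overlap area
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

print(iou((0, 0, 4, 4), (2, 2, 6, 6)))   # 4 / 28 ≈ 0.143
```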

29 pages, 12360 KB  
Article
Vision-Guided Dynamic Risk Assessment for Long-Span PC Continuous Rigid-Frame Bridge Construction Through DEMATEL–ISM–DBN Modelling
by Linlin Zhao, Qingfei Gao, Yidian Dong, Yajun Hou, Liangbo Sun and Wei Wang
Buildings 2025, 15(24), 4543; https://doi.org/10.3390/buildings15244543 - 16 Dec 2025
Viewed by 355
Abstract
In response to the challenges posed by the complex evolution of risks and the static nature of traditional assessment methods during the construction of long-span prestressed concrete (PC) continuous rigid-frame bridges, this study proposes a risk assessment framework that integrates visual perception with dynamic probabilistic reasoning. By combining an improved YOLOv8 model with the Decision-Making Trial and Evaluation Laboratory–Interpretive Structural Modeling (DEMATEL–ISM) method, the framework achieves intelligent identification of risk elements and causal structure modelling. On this basis, a dynamic Bayesian network (DBN) is constructed, incorporating a sliding window and forgetting factor mechanism to enable adaptive updating of conditional probability tables. Using the Tongshun River Bridge as a case study, at the identification layer, we refine onsite targets into 14 risk elements (F1–F14). For visualization, these are aggregated into four categories—“Bridge, Person, Machine, Environment”—to enhance readability. In the methodology layer, leveraging the a priori causal information provided by DEMATEL–ISM, risk elements are mapped to scenario probabilities, enabling scenario-level risk assessment and grading. This establishes a traceable closed-loop process from “elements” to “scenarios.” The results demonstrate that the proposed approach effectively identifies key risk chains within the “human–machine–environment–bridge” system, revealing phase-specific peaks in human-related risks and cumulative increases in structural and environmental risks. The particle filter and Monte Carlo prediction outputs generate short-term risk evolution curves with confidence intervals, facilitating the quantitative classification of risk levels. Overall, this vision-guided dynamic risk assessment method significantly enhances the real-time responsiveness, interpretability, and foresight of bridge construction safety management and provides a promising pathway for proactive risk control in complex engineering environments.
(This article belongs to the Special Issue Big Data and Machine/Deep Learning in Construction)
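
The sliding-window/forgetting-factor update can be sketched as a blend of sufficient statistics. This is an assumed form of the mechanism, not the paper's exact update rule: past counts are decayed by a factor lam before fresh window counts are folded in.

```python
import numpy as np

# Past counts are decayed by lam, then counts from the latest sliding
# window are added; rows renormalize into a conditional probability table.
def update_cpt(old_counts, window_counts, lam=0.9):
    counts = lam * old_counts + window_counts
    return counts, counts / counts.sum(axis=1, keepdims=True)

old = np.array([[8.0, 2.0],      # child-state counts for parent state 0
                [3.0, 7.0]])     # ... for parent state 1
window = np.array([[1.0, 4.0], [0.0, 2.0]])
old, cpt = update_cpt(old, window)
print(cpt)   # recent observations shift row 0 toward the second child state
```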

15 pages, 1377 KB  
Article
Fault Detection and Classification of Power Lines Based on Bayes–LSTM–Attention
by Chen Yang, Hao Li, Wenhui Zeng, Jiayuan Fan and Zhichao Ren
Energies 2025, 18(24), 6483; https://doi.org/10.3390/en18246483 - 11 Dec 2025
Viewed by 410
Abstract
As a critical component of the power system, transmission lines play a significant role in ensuring the safe and stable operation of the power grid. To address the challenge of accurately characterizing complex and diverse fault types, this paper proposes a fault detection and classification method for power lines that integrates Bayesian Reasoning (BR), Long Short-Term Memory (LSTM) networks, and the Attention mechanism. This approach effectively improves the accuracy of fault classification. Bayesian Reasoning is used to adjust the hyperparameters of the LSTM, while the LSTM network processes sequential data efficiently through its gating mechanism. The self-Attention mechanism adaptively assigns weights by focusing on the relationships between information at different positions in the sequence, capturing global dependencies. Test results demonstrate that the proposed Bayes–LSTM–Attention model achieves a fault classification accuracy of 94.5% for transmission lines, a significant improvement compared to the average accuracy of 80% achieved by traditional SVM multi-class classifiers. This indicates that the model has high precision in classifying transmission line faults. Additionally, the evaluation of classification results using the polygon area metric shows that the model exhibits balanced and robust performance in fault classification.
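
A hedged sketch of the hyperparameter step in isolation, using Optuna's TPE sampler as the Bayesian optimizer; train_and_score is a placeholder for training the LSTM–Attention classifier and returning validation accuracy, and the search space is assumed.

```python
import optuna

def train_and_score(hidden_size, lr, dropout):
    # placeholder objective with a known optimum near (96, 1e-3, 0.2);
    # in practice this would train the LSTM-Attention model and return
    # its validation accuracy
    return -(abs(hidden_size - 96) / 96 + abs(lr - 1e-3) * 100
             + abs(dropout - 0.2))

def objective(trial):
    hidden = trial.suggest_int("hidden_size", 16, 256)
    lr = trial.suggest_float("lr", 1e-4, 1e-2, log=True)
    dropout = trial.suggest_float("dropout", 0.0, 0.5)
    return train_and_score(hidden, lr, dropout)

study = optuna.create_study(direction="maximize")
study.optimize(objective, n_trials=50)
print(study.best_params)
```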

24 pages, 4736 KB  
Article
Navigation Risk Assessment of Arctic Shipping Routes Based on Bayesian Networks
by Xiaoming Huang, Qi Wang, Xianling Li, Yanlin Wang, Xiufeng Yue, Qianjin Yue and Dayong Zhang
J. Mar. Sci. Eng. 2025, 13(12), 2306; https://doi.org/10.3390/jmse13122306 - 4 Dec 2025
Viewed by 614
Abstract
In recent years, the Arctic region, with its abundant oil and gas resources, has become a new focus of global resource development. However, the complex natural environment, especially the effect of sea ice, poses a serious threat to navigation safety. Accordingly, this paper focuses on the navigation risks of drilling ships in five sea areas of the Northeast Passage of the Arctic under the influence of environmental factors. A dynamic Bayesian network structure was established using the Interpretative Structural Model–Bayesian network method. Since some risk elements cannot be directly measured, the combined weight method is adopted to fill in the sample data. The navigation risk situations of the five sea areas are analyzed through forward causal reasoning. Through reverse diagnostic reasoning, the main risk factors affecting navigation are obtained, and relevant suggestions are given. This has important implications for improving accident prevention and emergency response in practice. The model was verified both by scenario-based instance verification and by validation against sample data; the average accuracy of the obtained model is 83.4%. The results show that the model has certain validity and practicability in the analysis of navigation risks on Arctic shipping routes.
(This article belongs to the Special Issue Risk Assessment and Prediction of Marine Equipment)
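
The forward-causal versus reverse-diagnostic pattern reduces to Bayes' rule on each factor–risk pair. A toy two-state example with invented probabilities:

```python
import numpy as np

p_ice = np.array([0.7, 0.3])                 # P(sea_ice = light, heavy)
p_risk_given_ice = np.array([[0.9, 0.1],     # P(risk | light): low, high
                             [0.4, 0.6]])    # P(risk | heavy): low, high

p_risk = p_ice @ p_risk_given_ice            # forward causal reasoning
# reverse diagnostic reasoning: which ice state explains a high-risk state?
p_ice_given_high = p_ice * p_risk_given_ice[:, 1] / p_risk[1]
print(p_risk)            # [0.75 0.25]
print(p_ice_given_high)  # [0.28 0.72] -> heavy ice is the likely culprit
```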

25 pages, 360 KB  
Article
Disentangling Boltzmann Brains, the Time-Asymmetry of Memory, and the Second Law
by David Wolpert, Carlo Rovelli and Jordan Scharnhorst
Entropy 2025, 27(12), 1227; https://doi.org/10.3390/e27121227 - 3 Dec 2025
Viewed by 1219
Abstract
Are your perceptions, memories and observations merely a statistical fluctuation arising from the thermal equilibrium of the universe, bearing no correlation to the actual past state of the universe? Arguments are given in the literature for and against this “Boltzmann brain” hypothesis. Complicating these arguments have been the many subtle—and very often implicit—joint dependencies among these arguments and others that have been given for the past hypothesis, the second law, and even for Bayesian inference of the reliability of experimental data. These dependencies can easily lead to circular reasoning. To avoid this problem, since all of these arguments involve the stochastic properties of the dynamics of the universe’s entropy, we begin by formalizing that dynamics as a time-symmetric, time-translation invariant Markov process, which we call the entropy conjecture. Crucially, like all stochastic processes, the entropy conjecture does not specify any time(s) which it should be conditioned on in order to infer the stochastic dynamics of our universe’s entropy. Any such choice of conditioning times and associated entropy values must be introduced as an independent assumption. This observation allows us to disentangle the standard Boltzmann brain hypothesis, its “1000CE” variant, the past hypothesis, the second law, and the reliability of our experimental data, all in a fully formal manner. In particular, we show that these all make an arbitrary assumption that the dynamics of the universe’s entropy should be conditioned on a single event at a single moment in time, differing only in the details of their assumptions. In this aspect, the Boltzmann brain hypothesis and the second law are equally legitimate (or not).
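
The paper's point about conditioning can be illustrated with a toy simulation: in a time-symmetric random walk standing in for (negated) entropy, conditioning on a deep dip at one moment makes the trajectory rise in both time directions on average; the asymmetry comes from the conditioning, not the dynamics. Everything below is an invented toy, not the entropy conjecture itself.

```python
import numpy as np

# Time-symmetric random walk; dips play the role of low-entropy fluctuations.
rng = np.random.default_rng(0)
n_paths, T = 20000, 401
steps = rng.choice([-1.0, 1.0], size=(n_paths, T - 1))
paths = np.concatenate([np.zeros((n_paths, 1)), steps.cumsum(axis=1)], axis=1)

sel = paths[paths.min(axis=1) <= -25]     # paths containing a deep fluctuation
t_min = sel.argmin(axis=1)                # moment of the dip in each path
i = np.arange(sel.shape[0])
before = sel[i, np.maximum(t_min - 50, 0)]
after = sel[i, np.minimum(t_min + 50, T - 1)]
# both means sit well above the conditioned-on minimum
print(before.mean(), sel[i, t_min].mean(), after.mean())
```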

19 pages, 365 KB  
Article
From Exponential to Efficient: A Novel Matrix-Based Framework for Scalable Medical Diagnosis
by Mohammed Addou, El Bekkaye Mermri and Mohammed Gabli
BioMedInformatics 2025, 5(4), 68; https://doi.org/10.3390/biomedinformatics5040068 - 2 Dec 2025
Viewed by 397
Abstract
Modern diagnostic systems face computational challenges when processing exponential disease-symptom combinations, with traditional approaches requiring up to 2^n evaluations for n symptoms. This paper presents MARS (Matrix-Accelerated Reasoning System), a diagnostic framework combining Case-Based Reasoning with matrix representations and intelligent filtering to address these limitations. The approach encodes disease-symptom relationships as matrices enabling parallel processing, implements adaptive rule-based filtering to prioritize relevant cases, and features automatic rule generation with continuous learning through a dynamically updated Pertinence Matrix. MARS was evaluated on four diverse medical datasets (41 to 721 diseases) and compared against Decision Tree, Random Forest, k-Nearest Neighbors, Support Vector Classifier, Bayesian classifiers, and Neural Networks. On the most challenging dataset (721 diseases, 49,365 test cases), MARS achieved the highest accuracy (87.34%) with substantially reduced processing time. When considering differential diagnosis, accuracy reached 98.33% for top-5 suggestions. These results demonstrate that MARS effectively balances diagnostic accuracy, computational efficiency, and interpretability, three requirements critical for clinical deployment. The framework’s ability to provide ranked differential diagnoses and update incrementally positions it as a practical solution for diverse clinical settings.
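
The matrix trick at the heart of the framework: scoring every disease against a patient's symptom vector is a single matrix–vector product rather than an enumeration of symptom subsets. An illustrative sketch follows; the matrix, scoring rule, and sizes are assumed, not MARS's actual Pertinence Matrix logic.

```python
import numpy as np

rng = np.random.default_rng(1)
n_diseases, n_symptoms = 721, 120
# binary disease-symptom incidence matrix (random stand-in)
M = (rng.random((n_diseases, n_symptoms)) < 0.05).astype(float)

patient = np.zeros(n_symptoms)
patient[[3, 17, 42, 88]] = 1.0               # observed symptoms

# match score: overlap normalized by each disease's symptom count;
# scoring all diseases is one matrix-vector product, not 2^n subset checks
scores = (M @ patient) / np.maximum(M.sum(axis=1), 1)
top5 = np.argsort(scores)[::-1][:5]          # ranked differential diagnosis
print(top5, scores[top5])
```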

32 pages, 1030 KB  
Review
A Review of Deep Learning Approaches Based on Segment Anything Model for Medical Image Segmentation
by Dina Koishiyeva, Dinargul Mukhammejanova, Jeong Won Kang and Assel Mukasheva
Bioengineering 2025, 12(12), 1312; https://doi.org/10.3390/bioengineering12121312 - 29 Nov 2025
Viewed by 2305
Abstract
Medical image segmentation has undergone significant changes in recent years, mainly due to the development of foundation models. The introduction of the Segment Anything Model (SAM) represents a major shift from task-specific architectures to universal architectures. This review discusses the adaptation of SAM in medical imaging, focusing on three primary domains. Firstly, multimodal fusion frameworks implement semantic alignment of heterogeneous imaging modalities. Secondly, volumetric extensions transition from slice-based processing to native 3D spatial reasoning with architectures such as SAM3D, ProtoSAM-3D, and VISTA3D. Thirdly, uncertainty-aware architectures integrate probabilistic calibration for clinical interpretability, as illustrated by the SAM-U and E-Bayes SAM models. A comparative analysis reveals that parameter-efficient SAM derivatives achieve Dice coefficients of 81–95%, while concomitantly reducing annotation requirements by 56–73%. Future research directions include incorporating adaptive domain prompts, Bayesian self-correction mechanisms, and unified volumetric frameworks to enable autonomous generalisation across diverse medical imaging contexts.
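
For reference, the Dice coefficient that anchors the review's comparisons, in a quick self-contained implementation on binary masks:

```python
import numpy as np

def dice(pred, target, eps=1e-7):
    # pred/target: binary masks of identical shape
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

a = np.zeros((8, 8)); a[2:6, 2:6] = 1    # 4x4 prediction
b = np.zeros((8, 8)); b[3:7, 3:7] = 1    # 4x4 ground truth, shifted
print(dice(a, b))                        # 2*9 / (16+16) = 0.5625
```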
