Search Results (100)

Search Parameters:
Keywords = logical–probabilistic method

22 pages, 1001 KB  
Review
Antivirus Systems: Detection Methods and Architectures
by Paul A. Gagniuc
Algorithms 2026, 19(5), 345; https://doi.org/10.3390/a19050345 - 1 May 2026
Viewed by 40
Abstract
Antivirus systems have evolved from static pattern matchers into complex algorithmic ecosystems that encapsulate the broader logic of modern cybersecurity. This review deconstructs their internal architecture, tracing the transition from deterministic string-matching automata to probabilistic, behavioral, and cloud-assisted paradigms. Foundational modules such as scanners, heuristic analyzers, behavioral monitors, and sandbox environments operate as interconnected computational strata, forming adaptive feedback loops that mirror principles of distributed intelligence. Signature-based methods, such as Aho-Corasick, Boyer-Moore, and Wu-Manber, remain core to real-time filtering, while probabilistic reasoning through Bayesian inference, Markov modeling, and Hidden Markov Models extends detection to polymorphic and metamorphic threats. Behavioral analysis, empowered by Support Vector Machines, deep neural architectures, and temporal models, enables semantic inference over system-call graphs and runtime telemetry. Moreover, cloud-assisted frameworks integrate federated learning and global reputation graphs, which transform detection into a collective intelligence process. Full article
(This article belongs to the Section Algorithms for Multidisciplinary Applications)
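Among the signature-matching algorithms the review names, Aho-Corasick is the canonical multi-pattern matcher: all signatures are compiled into one trie with failure links, so input is scanned in a single pass regardless of how many signatures are loaded. A minimal sketch (not the review's code; the pattern set is illustrative):

```python
from collections import deque

def build_automaton(patterns):
    """Build an Aho-Corasick trie with BFS-computed failure links."""
    trie = [{}]   # per-node child maps
    fail = [0]    # failure link per node
    out = [[]]    # patterns ending at (or suffix-reachable from) each node
    for pat in patterns:
        node = 0
        for ch in pat:
            if ch not in trie[node]:
                trie.append({}); fail.append(0); out.append([])
                trie[node][ch] = len(trie) - 1
            node = trie[node][ch]
        out[node].append(pat)
    q = deque(trie[0].values())           # root children keep fail = 0
    while q:
        node = q.popleft()
        for ch, child in trie[node].items():
            f = fail[node]
            while f and ch not in trie[f]:
                f = fail[f]               # follow links to the longest proper suffix
            fail[child] = trie[f].get(ch, 0)
            out[child] += out[fail[child]]
            q.append(child)
    return trie, fail, out

def scan(text, patterns):
    """Return (start_index, pattern) for every match in one pass over text."""
    trie, fail, out = build_automaton(patterns)
    node, hits = 0, []
    for i, ch in enumerate(text):
        while node and ch not in trie[node]:
            node = fail[node]
        node = trie[node].get(ch, 0)
        for pat in out[node]:
            hits.append((i - len(pat) + 1, pat))
    return hits
```

Scan time is linear in the text length plus the number of matches, which is why the structure suits real-time filtering with large signature sets.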

25 pages, 1954 KB  
Article
Flexible Load Reserve Capacity Evaluation Method Considering User Response Willingness for Sustainable Reserve Provision
by Zhongxi Ou, Lihong Qian, Sui Peng, Weijie Wu, Liang Zhang, Mingqian Feng, Chuyuan Hong, Haoran Shen and Wei Dai
Energies 2026, 19(9), 2165; https://doi.org/10.3390/en19092165 - 30 Apr 2026
Viewed by 57
Abstract
In future active distribution networks with high penetrations of renewable energy, flexible loads are expected to play an increasingly important role as reserve resources to support the sustainable and reliable operation of power grids. Accurate evaluation of flexible load reserve capacity is therefore essential for reliable reserve scheduling. Existing research mainly focuses on the operational characteristics and physical constraints of flexible loads, while insufficiently accounting for user response willingness and the uncertainty of user decision-making behavior, which may lead to biased reserve capacity assessments and impair the sustainability of reserve supply in actual grid operation. To address this issue, this paper proposes a results-oriented reserve capacity evaluation method for flexible loads that explicitly incorporates user response willingness. Specifically, a fuzzy logic system is developed to quantitatively characterize the response willingness of electric vehicle (EV) and air-conditioning (AC) users under multiple influencing factors. Then, a probabilistic modeling approach for user decision-making behavior is established using the theory of planned behavior, enabling explicit representation of behavioral uncertainty. Furthermore, a comprehensive reserve capacity evaluation framework for flexible loads is constructed by integrating user willingness states, sustainable response duration, and operational power constraints. Finally, the case studies demonstrate that the proposed method can effectively improve the objectivity of flexible load reserve capacity assessments while maintaining high user participation willingness, thus supporting the long-term sustainable application of flexible loads as grid reserve resources. Full article
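The abstract does not give the membership functions; a common building block for such a fuzzy willingness model is the triangular membership function, graded here for a hypothetical normalized incentive signal (the grade breakpoints are made-up values, not the paper's):

```python
def tri_membership(x, a, b, c):
    """Triangular membership: 0 outside (a, c), rising linearly to 1 at peak b (a < b < c)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Hypothetical willingness grades vs. a normalized incentive signal s
low    = lambda s: tri_membership(s, -0.5, 0.0, 0.5)
medium = lambda s: tri_membership(s,  0.0, 0.5, 1.0)
high   = lambda s: tri_membership(s,  0.5, 1.0, 1.5)
```

A full fuzzy logic system would combine several such factors through a rule base and defuzzify the result into a willingness score.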

28 pages, 3003 KB  
Article
Short-Term Wind Power Non-Crossing Quantile Forecasting Based on Two-Stage Multi-Similarity Segment Matching
by Dengxin Ai, Li Zhang, Junbang Lv, Song Liu, Zhigang Huang and Lei Yan
Processes 2026, 14(8), 1310; https://doi.org/10.3390/pr14081310 - 20 Apr 2026
Viewed by 219
Abstract
Accurate wind power forecasting is essential for the stability of modern power systems. However, current probabilistic forecasting frameworks often encounter a fundamental conflict between the computational efficiency required for high-dimensional meteorological pattern matching and the physical consistency of the resulting probability distributions. Existing methods frequently fail to maintain the logical monotonicity of quantiles or overlook the fine-grained temporal correlations in massive historical datasets. To address these critical gaps, this research develops a comprehensive framework that synergizes a hierarchical similarity filtering mechanism with a structurally constrained non-crossing quantile regression model. First, the target sample is partitioned into several weather segments, and a new two-stage high-similarity weather pattern matching method is developed to screen multiple sets of historical samples that are highly similar to the target weather pattern. Second, a deep learning model for probabilistic wind power quantile forecasting is proposed, which incorporates historical data augmentation. The model utilizes an attention mechanism to extract the correlation between the target and historical segments, while an improved non-crossing quantile regression model is adopted to ensure the validity of the output quantiles. Finally, the effectiveness of the proposed method is validated through case studies using real-world data from an actual wind farm. Full article
(This article belongs to the Special Issue Applications of Smart Microgrids in Renewable Energy Development)
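The abstract does not specify how the non-crossing property is enforced; one standard construction (an assumption here, not necessarily the authors') has the network emit a base quantile plus non-negative increments, which guarantees monotone quantiles by design:

```python
import numpy as np

def softplus(z):
    # numerically stable log(1 + exp(z)); always > 0
    return np.log1p(np.exp(-np.abs(z))) + np.maximum(z, 0.0)

def non_crossing_quantiles(raw):
    """Map unconstrained outputs of shape (n_samples, n_quantiles) to
    monotonically increasing quantile predictions: column 0 is the lowest
    quantile; each later column adds a strictly positive increment."""
    base = raw[:, :1]
    steps = np.cumsum(softplus(raw[:, 1:]), axis=1)
    return np.concatenate([base, base + steps], axis=1)
```

Because every increment passes through softplus, adjacent quantile estimates can never cross, no matter what the underlying network outputs.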

29 pages, 6096 KB  
Article
Optimal Hydraulic Design of Flexible-Lined Channels Using the VegyRap QGIS Tool with Cost and Reliability Analysis
by Ahmed M. Tawfik and Mohamed H. Elgamal
Water 2026, 18(8), 957; https://doi.org/10.3390/w18080957 - 17 Apr 2026
Viewed by 226
Abstract
Previous approaches to flexible-lined channel design typically isolate least-cost cross-section optimization from parameter uncertainty, or restrict reliability analysis to specific cases, limited failure modes, and proprietary codes. This paper presents VegyRap, an open-source QGIS-based plugin with an intuitive graphical user interface that unites these traditionally disjointed, sequential tasks into a single computational framework. The tool guides designers sequentially through: (i) terrain-driven longitudinal profile optimization using dynamic programming; (ii) least-cost cross-sectional optimization for riprap and vegetated linings; and (iii) multi-mode probabilistic reliability analysis coupled with dual risk–cost Pareto optimization. To seamlessly handle the stochastic behavior of uncertain variables, the framework features built-in statistical distributions and allows users to flexibly evaluate up to four distinct failure modes: overtopping, erosion, sedimentation, and near-critical flow oscillation. The framework’s capabilities are demonstrated through nine diverse design examples, incorporating benchmark validations against published studies and a comprehensive real-world case study in Wadi Al-Arja, Saudi Arabia. Results highlight that for vegetated channels, a hierarchical two-phase design logic is essential to satisfy both establishment-phase stability (Class E) and long-term conveyance (Class B). While benchmark comparisons show VegyRap achieves consistent cost reductions of 10–15% over traditional methods, the case study demonstrates that deterministic least-cost solutions can carry non-negligible failure probabilities. By utilizing marginal efficiency analysis to identify cost-effective enhancements, the integrated Pareto-based dual optimization produces transparent trade-off surfaces, empowering practitioners to transition from a single least-cost solution to a defensible, risk-calibrated preferred alternative. Full article
(This article belongs to the Section Hydraulics and Hydrodynamics)
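As a flavor of the multi-mode probabilistic analysis, an overtopping check can be Monte Carlo sampled: draw uncertain discharge and roughness, compute normal depth from Manning's equation for a wide rectangular channel, and count exceedances of the lined depth. All distributions and dimensions below are illustrative assumptions, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(7)
N = 100_000

# Illustrative uncertain inputs for one channel reach
Q = rng.lognormal(mean=np.log(20.0), sigma=0.25, size=N)  # discharge, m^3/s
n = rng.normal(0.035, 0.004, size=N).clip(min=0.02)       # Manning roughness
S, width, lined_depth = 0.005, 10.0, 1.2                  # slope, width (m), depth (m)

# Wide-channel normal depth from Manning: y = (n * q / sqrt(S))^(3/5), q = Q / width
y = (n * (Q / width) / np.sqrt(S)) ** 0.6
p_overtop = float(np.mean(y > lined_depth))  # Monte Carlo overtopping probability
```

The tool's other failure modes (erosion, sedimentation, near-critical oscillation) follow the same sample-and-count pattern with different limit-state functions.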

20 pages, 2607 KB  
Article
A Data-Driven Methodology for Developing a Future Design Day Flight Schedule (DDFS)
by Eunji Kim, Seokjae Yun and Hojong Baik
Aerospace 2026, 13(3), 293; https://doi.org/10.3390/aerospace13030293 - 19 Mar 2026
Viewed by 295
Abstract
The design day flight schedule (DDFS) plays a pivotal role in airport simulation and infrastructure planning. Despite its importance, previous studies and global guidelines offer only broad recommendations for DDFS preparation, lacking detailed methodologies and empirical validation. This study proposes a systematic, data-driven approach for generating a future DDFS that accounts for projected demand, airline behavior, and regional traffic characteristics. Leveraging historical flight operation data and probabilistic distributions, the proposed method captures existing patterns and anticipated market changes comprehensively. To realistically define each flight’s operational characteristics, a structured 10-step procedure is employed to generate and assign attributes—such as aircraft type, origin/destination airport, and turnaround time—based on empirical patterns and logical constraints. The proposed approach is applied to Incheon International Airport as a case study, demonstrating its practical utility and scalability. The generated DDFSs are shown to be consistent with target-year forecasts in terms of peak-hour operations and fleet composition, with deviations remaining within a small error range. Additional validation confirms that key operational characteristics, including airline shares, connection patterns, and turnaround times, are reproduced with acceptable accuracy. By bridging the gap between high-level guidance and implementable practice, this study contributes a replicable framework for future DDFS generation and provides actionable insights for airport planners aiming to better anticipate operational demands. Full article
(This article belongs to the Special Issue Next-Generation Airport Operations and Management)
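The 10-step attribute-assignment procedure itself is not reproduced in the abstract; its core ingredient, drawing an attribute from an empirical frequency table, can be sketched as follows (the fleet mix shown is hypothetical):

```python
import random

def empirical_sampler(counts, seed=42):
    """Return a draw() that samples categories in proportion to observed counts."""
    rng = random.Random(seed)
    cats = list(counts)
    weights = list(counts.values())
    return lambda: rng.choices(cats, weights=weights, k=1)[0]

# Hypothetical historical fleet mix for one market segment
draw_aircraft = empirical_sampler({"A321": 310, "B738": 280, "B77W": 95, "A388": 15})
fleet = [draw_aircraft() for _ in range(1000)]
```

Chaining such samplers per attribute (aircraft type, origin/destination, turnaround time), conditioned on the attributes already assigned, yields one synthetic flight per draw.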

23 pages, 1920 KB  
Article
Improving Hardware Security Through Logic-Probability-Guided Gate Replacement Using Emerging Devices
by Massimo Mikio Martini and Nikhil Saxena
Electronics 2026, 15(6), 1267; https://doi.org/10.3390/electronics15061267 - 18 Mar 2026
Viewed by 393
Abstract
Security threats in the integrated circuit (IC) supply chain are intensifying as demand drives fabrication to off-shore, potentially untrusted foundries. To mitigate theft and reverse engineering, recent work has focused on logic locking, encryption, and camouflaging. This paper introduces a probabilistic logic-driven algorithm that selects optimal locations for polymorphic gate replacement to strengthen circuit protection. Our approach leverages emerging polymorphic devices—namely the Giant Spin-Hall Effect (GSHE) switch, the 5-terminal magnetic domain wall motion (DWM) device, and the threshold-voltage-defined (TVD) switch—to diversify functional behavior and obscure true circuit intent. Evaluated on ISCAS-85 and ISCAS-89 benchmarks under state-of-the-art SAT and AppSAT Attacks, the proposed method substantially increases decryption time while achieving a marked improvement in Output Corruption Rate (OCR) relative to prior techniques. In particular, by deploying the GSHE Switch at the highest-probability nodes, we achieve more than 40% OCR along with strong resilience against SAT and AppSAT Attacks, further demonstrating the effectiveness of the proposed approach as a practical and scalable hardware obfuscation strategy. Full article
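Selecting "highest-probability nodes" presupposes computing static signal probabilities. Under the usual independence assumption, 1-probabilities propagate through gates as below (a generic sketch, not the paper's algorithm; the paper's exact selection criterion is in the full text):

```python
def gate_prob(gate, probs):
    """Output 1-probability of a gate from independent input 1-probabilities."""
    if gate == "NOT":
        return 1.0 - probs[0]
    if gate in ("AND", "NAND"):
        p = 1.0
        for x in probs:
            p *= x                      # all inputs must be 1
        return p if gate == "AND" else 1.0 - p
    if gate in ("OR", "NOR"):
        q = 1.0
        for x in probs:
            q *= 1.0 - x                # all inputs must be 0
        return 1.0 - q if gate == "OR" else q
    if gate == "XOR":
        a, b = probs
        return a * (1.0 - b) + b * (1.0 - a)
    raise ValueError(f"unknown gate: {gate}")
```

Evaluating nets in topological order with this rule gives each node a probability, from which candidate replacement sites can be ranked.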

18 pages, 1967 KB  
Article
Fault-Tolerant Hybrid Decoder for Quantum Surface Codes on Probabilistic Inference and Topological Clustering
by Xingyu Qiao, Xiaoxuan Xu, Hongyang Ma and Tianhui Qiu
Appl. Sci. 2026, 16(5), 2586; https://doi.org/10.3390/app16052586 - 8 Mar 2026
Viewed by 560
Abstract
Quantum error correction is a prerequisite for quantum computing; however, its performance critically depends on the accuracy of the decoding algorithm. To address this challenge, we propose a hybrid decoding architecture, BP + UF + BP. The protocol initiates with a truncated global BP stage to extract probabilistic gradients without requiring full convergence. This soft information guides a reliability-based Union-Find (UF) algorithm to prioritize high-likelihood error mechanisms. Finally, a local subgraph BP refinement maximizes correction accuracy. Numerical simulations on rotated surface codes under circuit-level depolarizing noise demonstrate a fault-tolerance threshold of approximately 0.72%. This significantly outperforms standard Minimum Weight Perfect Matching (MWPM) and Union-Find (UF) baselines. Notably, our method significantly reduces the logical error rate compared to conventional decoders. With its empirically near-linear scaling under fixed iteration, the proposed architecture presents a scalable solution for real-time fault-tolerant quantum computing. Full article
(This article belongs to the Section Quantum Science and Technology)
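The UF stage relies on the classic union-find (disjoint-set) structure to merge growing syndrome clusters in near-linear time; a minimal version with union by size and path halving:

```python
class UnionFind:
    """Disjoint-set forest with union by size and path halving, the data
    structure used to grow and merge syndrome clusters in UF decoding."""

    def __init__(self, n):
        self.parent = list(range(n))
        self.size = [1] * n

    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return False                  # already in the same cluster
        if self.size[ra] < self.size[rb]:
            ra, rb = rb, ra               # attach smaller root under larger
        self.parent[rb] = ra
        self.size[ra] += self.size[rb]
        return True
```

In the hybrid decoder described above, the BP stage's soft information would set the order in which such clusters are grown and merged.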

48 pages, 3619 KB  
Article
Comparative Assessment of the Reliability of Non-Recoverable Subsystems of Mining Electronic Equipment Using Various Computational Methods
by Nikita V. Martyushev, Boris V. Malozyomov, Anton Y. Demin, Alexander V. Pogrebnoy, Georgy E. Kurdyumov, Viktor V. Kondratiev and Antonina I. Karlina
Mathematics 2026, 14(4), 723; https://doi.org/10.3390/math14040723 - 19 Feb 2026
Cited by 1 | Viewed by 522
Abstract
The assessment of reliability in non-repairable subsystems of mining electronic equipment represents a computationally challenging problem, particularly for complex and highly connected structures. This study presents a systematic comparative analysis of several deterministic approaches for reliability estimation, focusing on their computational efficiency, accuracy, and applicability. The investigated methods include classical boundary techniques (minimal paths and cuts), analytical decomposition based on the Bayes theorem, the logic–probabilistic method (LPM) employing triangle–star transformations, and the algorithmic Structure Convolution Method (SCM), which is based on matrix reduction of the system’s connectivity graph. The reliability problem is formally represented using graph theory, where each element is modeled as a binary variable with independent failures, which is a standard and practically justified assumption for power electronic subsystems operating without common-cause coupling. Numerical experiments were carried out on canonical benchmark topologies—bridge, tree, grid, and random connected graphs—representing different levels of structural complexity. The results demonstrate that the SCM achieves exact reliability values with up to six orders of magnitude acceleration compared to the LPM for systems containing more than 20 elements, while maintaining polynomial computational complexity. Qualitatively, the compared approaches differ in the nature of the output and practical applicability: boundary methods provide fast interval estimates suitable for preliminary screening, whereas decomposition may exhibit a systematic bias for highly connected (non-series–parallel) topologies. In contrast, the SCM consistently preserves exactness while remaining computationally tractable for medium and large sparse-to-moderately dense graphs, making it preferable for repeated recalculations in design and optimization workflows. 
The methods were implemented in Python 3.7 using NumPy and NetworkX, ensuring transparency and reproducibility. The findings confirm that the SCM is an efficient, scalable, and mathematically rigorous tool for reliability assessment and structural optimization of large-scale non-repairable systems. The presented methodology provides practical guidelines for selecting appropriate reliability evaluation techniques based on system complexity and computational resource constraints. Full article
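For intuition on the exact methods being compared, the canonical 5-element bridge topology is small enough to solve by brute-force state enumeration, which reproduces the closed-form reliability polynomial (a didactic sketch, not the paper's SCM implementation):

```python
from itertools import product

# Bridge benchmark: source s, sink t; elements (0-indexed):
# 0: s-a, 1: s-b, 2: a-b (the bridge), 3: a-t, 4: b-t
MIN_PATHS = [{0, 3}, {1, 4}, {0, 2, 4}, {1, 2, 3}]

def bridge_reliability(p):
    """Exact system reliability by enumerating all 2^5 element states
    (elements fail independently; system works if any minimal path is up)."""
    R = 0.0
    for state in product((0, 1), repeat=5):
        if any(all(state[e] for e in path) for path in MIN_PATHS):
            prob = 1.0
            for e, up in enumerate(state):
                prob *= p[e] if up else 1.0 - p[e]
            R += prob
    return R
```

For equal element reliabilities this matches the closed form 2p² + 2p³ − 5p⁴ + 2p⁵; the point of methods like the LPM and SCM is to reach the same exact value without the exponential enumeration, which becomes infeasible beyond a few dozen elements.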

29 pages, 3196 KB  
Review
The Remote Sensing Geostatistical Paradigm: A Review of Key Technologies and Applications
by Junyu He
Remote Sens. 2026, 18(4), 600; https://doi.org/10.3390/rs18040600 - 14 Feb 2026
Viewed by 542
Abstract
Advancements in earth observation technologies are ushering in the big data era, yet this potential is compromised by intrinsic challenges: inherent uncertainty, spatiotemporal heterogeneity, multi-scale character, and pervasive data gaps. Traditional methods often fail to address these issues within a single, coherent system. The main contribution of this review is to systematically establish the Remote Sensing Geostatistical Paradigm (RSGP) as a comprehensive, unified framework. Powered by its core theory, Bayesian Maximum Entropy (BME), RSGP is a broadly designed epistemic framework that transcends a mere conceptual reorganization of established methods. It addresses the above challenges by highlighting two pivotal concepts within a spatiotemporal random field: (1) uncertainty quantification via probabilistic soft data, which redefines observations as probability density functions, representing a fundamental epistemological shift from deterministic scalars to probabilistic entities, and provides a universal interface for rigorous assimilation of heterogeneous remote sensing or in situ observations and synergy with other computational models, such as machine learning; and (2) spatiotemporal structure exploitation, which integrates the underlying structure embedded in remote sensing data of natural attributes, moving beyond mere optical properties to incorporate a broader range of available spatiotemporal information, for robust estimation and mapping purposes. Furthermore, the evolution of key technologies is illustrated using real-world application cases, guiding how to implement RSGP in different scenarios. Finally, the paradigm's features and limitations are discussed. This synthesis provides the remote sensing community with a robust foundation for uncertainty-aware analysis and multi-source integration, bridging geostatistical logic with next-generation AI-driven Earth observation. Full article
(This article belongs to the Section Remote Sensing for Geospatial Science)

42 pages, 2537 KB  
Article
UPSET: A Comprehensive Probabilistic Single Event Transient Analysis Flow for VLSI Circuits Using Static Timing Analysis
by Christos Georgakidis, Dimitris Valiantzas, Nikolaos Chatzivangelis, Marko Andjelkovic, Christos Sotiriou and Milos Krstic
Electronics 2026, 15(4), 818; https://doi.org/10.3390/electronics15040818 - 13 Feb 2026
Viewed by 493
Abstract
The downscaling of VLSI technologies has exacerbated the susceptibility of integrated circuits (ICs) to radiation-induced Single-Event Transients (SETs). This work presents UPSET, a comprehensive and technology-independent EDA framework for probabilistic SET analysis using Static Timing Analysis (STA). Unlike traditional simulation-based methods that suffer from prohibitive runtimes, UPSET leverages graph-based propagation with advanced logical, electrical, and timing-window masking models to evaluate circuit sensitivity efficiently. Key contributions include a novel “Electrical Masking Window” (EMW) criterion that effectively filters non-full-rail pulses early in reconvergent logic and a TimeStamp-based propagation mode that accurately handles complex signal reconvergence with Boolean evaluation. Experimental results over several featured benchmarks demonstrate a speedup of more than 25,000× compared with SPICE while maintaining a tight 4.56% error bound in pulse width estimation. Moreover, experimental validation on 50 benchmarks of varying complexity shows that the EMW enhancement reduces the pessimism in circuit sensitivity estimates by up to 25% on average, providing tighter upper bounds while maintaining scalability to million-gate designs. By integrating seamlessly with standard industrial formats (LEF, DEF, LIB, or SPEF), UPSET enables scalable, accurate soft SET sensitivity assessment for modern digital designs, establishing a robust foundation for automated radiation hardening flows. Full article
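The masking mechanisms UPSET models can be illustrated with a toy latching check: a transient is harmless if electrical masking attenuates it below a minimum width, or if it misses the flip-flop's capture window (all parameters below are illustrative; UPSET's actual EMW criterion is more refined):

```python
def is_latched(arrival, width, clk_period, setup, hold, min_width):
    """Toy SET latching model: the pulse must survive electrical masking
    (width >= min_width) and overlap the capture window around the next
    clock edge (timing-window masking). Purely illustrative parameters."""
    if width < min_width:
        return False                                  # electrically masked
    edge = (arrival // clk_period + 1) * clk_period   # next capture edge
    win_lo, win_hi = edge - setup, edge + hold
    return arrival < win_hi and arrival + width > win_lo
```

Summing the latching probability of each reachable pulse over clock phases is what turns this kind of per-pulse check into a circuit-level sensitivity figure.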

14 pages, 15601 KB  
Article
Hardware-Efficient Stochastic Computing-Based Neural Networks with SNN-Isomorphic LIF Activation
by Jiho Kim, Kaeun Lim and Youngmin Kim
Electronics 2026, 15(4), 768; https://doi.org/10.3390/electronics15040768 - 11 Feb 2026
Viewed by 602
Abstract
Recent advances in artificial intelligence have made power efficiency a primary objective in system design. In this context, stochastic computing (SC), which processes probabilistic bitstreams using simple logic, and spiking neural networks (SNNs), a neuromorphic paradigm, have gained prominence as alternative approaches. This study proposes a Stochastic Computing Neural Network (SC-NN) framework that minimizes the intrinsic errors of stochastic computing and leverages the isomorphism between one-count operations on bitstreams and spike-rate computations in spiking neural networks, yielding improvements in accuracy and hardware efficiency. In contrast to earlier studies that utilized independent random number sequences of 10 bits or higher, our study employed a practically implementable 8-bit linear feedback shift register (LFSR)-based pseudo-random bitstream. The use of 4 taps and 255 seeds improves hardware realism. Despite the inherent accuracy ceiling of pseudo-random sequences, the proposed method achieves higher accuracy. Applied to an 8-bit SC-based neural network accelerator, the proposed design improves accuracy by 35% over a conventional FSM baseline, while reducing power and area by 43.8% and 17.2%, respectively, and decreasing delay by 5.5%. These improvements translate to a 2.3× enhancement in the Figure of Merit (FoM), which was further verified through physical layout and FPGA results. Overall, this work introduces a new paradigm that enables simultaneous gains in accuracy and efficiency for low-power AI by suppressing the error sources and embedding the structural similarity between SNNs and SC into the design. Full article
(This article belongs to the Special Issue Design of Low-Power Circuits and Systems)
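The core SC mechanics can be reproduced in a few lines: an 8-bit maximal-length LFSR enumerates all 255 nonzero states once per period, a comparator turns a value into a unipolar bitstream, and a single AND gate multiplies two streams. This is a generic sketch, not the paper's design; the (8, 6, 5, 4) taps are a standard maximal-length choice, the seed is an arbitrary nonzero value, and bit-reversing one comparator input is a common SC trick to decorrelate two streams sharing one LFSR:

```python
def lfsr_states(taps=(8, 6, 5, 4), seed=0b1010_1010, nbits=8):
    """One full period of a Fibonacci LFSR. With maximal-length taps,
    all 2^n - 1 nonzero states appear exactly once."""
    mask = (1 << nbits) - 1
    state = seed
    for _ in range(mask):
        yield state
        fb = 0
        for t in taps:                    # XOR of tapped bits (1-indexed)
            fb ^= (state >> (t - 1)) & 1
        state = ((state << 1) | fb) & mask

def bitrev8(s):
    """Reverse the 8-bit representation of s."""
    return int(f"{s:08b}"[::-1], 2)

# Unipolar encoding: emit 1 while the (optionally bit-reversed) state < x * 256
states = list(lfsr_states())
a = [int(s < int(0.75 * 256)) for s in states]            # encodes 0.75
b = [int(bitrev8(s) < int(0.50 * 256)) for s in states]   # encodes 0.50
prod = [x & y for x, y in zip(a, b)]                      # AND gate = multiply
estimate = sum(prod) / len(prod)                          # ~ 0.75 * 0.50 = 0.375
```

Because the LFSR exhausts its period, the one-counts of the operand streams are exact (191/255 and 127/255), and only the product carries a small correlation-induced error.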

18 pages, 2686 KB  
Article
MRI-Based Bladder Cancer Staging via YOLOv11 Segmentation and Deep Learning Classification
by Phisit Katongtung, Kanokwatt Shiangjen, Watcharaporn Cholamjiak and Krittin Naravejsakul
Diseases 2026, 14(2), 45; https://doi.org/10.3390/diseases14020045 - 28 Jan 2026
Viewed by 690
Abstract
Background: Accurate staging of bladder cancer is critical for guiding clinical management, particularly the distinction between non–muscle-invasive (T1) and muscle-invasive (T2–T4) disease. Although MRI offers superior soft-tissue contrast, image interpretation remains operator-dependent and subject to inter-observer variability. This study proposes an automated deep learning framework for MRI-based bladder cancer staging to support standardized radiological interpretation. Methods: A sequential AI-based pipeline was developed, integrating hybrid tumor segmentation using YOLOv11 for lesion detection and DeepLabV3 for boundary refinement, followed by three deep learning classifiers (VGG19, ResNet50, and Vision Transformer) for MRI-based stage prediction. A total of 416 T2-weighted MRI images with radiology-derived stage labels (T1–T4) were included, with data augmentation applied during training. Model performance was evaluated using accuracy, precision, recall, F1-score, and multi-class AUC. Performance uncertainty was characterized using patient-level bootstrap confidence intervals under a fixed training and evaluation pipeline. Results: All evaluated models demonstrated high and broadly comparable discriminative performance for MRI-based bladder cancer staging within the present dataset, with high point estimates of accuracy and AUC, particularly for differentiating non–muscle-invasive from muscle-invasive disease. Calibration analysis characterized the probabilistic behavior of predicted stage probabilities under the current experimental setting. Conclusions: The proposed framework demonstrates the feasibility of automated MRI-based bladder cancer staging derived from radiological reference labels and supports the potential of deep learning for standardizing and reproducing MRI-based staging procedures. Rather than serving as an independent clinical decision-support system, the framework is intended as a methodological and workflow-oriented tool for automated staging consistency. Further validation using multi-center datasets, patient-level data splitting prior to augmentation, pathology-confirmed reference standards, and explainable AI techniques is required to establish generalizability and clinical relevance. Full article
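Patient-level bootstrapping, as used here for the confidence intervals, resamples whole patients rather than individual images so that correlated slices from one patient stay together; a generic sketch (the per-patient numbers below are made up, not the study's data):

```python
import random

def patient_bootstrap_ci(per_patient, n_boot=2000, alpha=0.05, seed=0):
    """Percentile bootstrap CI for accuracy; per_patient is a list of
    (n_correct, n_images) tuples, resampled at the patient level."""
    rng = random.Random(seed)
    n = len(per_patient)
    stats = []
    for _ in range(n_boot):
        sample = [per_patient[rng.randrange(n)] for _ in range(n)]
        stats.append(sum(c for c, _ in sample) / sum(t for _, t in sample))
    stats.sort()
    return stats[int(alpha / 2 * n_boot)], stats[int((1 - alpha / 2) * n_boot) - 1]

# Hypothetical per-patient staging results: (correctly staged images, images)
lo, hi = patient_bootstrap_ci([(9, 10), (8, 10), (10, 10), (7, 10), (9, 10)])
```

Resampling images instead of patients would understate the interval width, since slices of one patient are not independent observations.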

22 pages, 1269 KB  
Article
Probabilistic Power Flow Estimation in Power Grids Considering Generator Frequency Regulation Constraints Based on Unscented Transformation
by Jianghong Chen and Yuanyuan Miao
Energies 2026, 19(2), 301; https://doi.org/10.3390/en19020301 - 7 Jan 2026
Viewed by 364
Abstract
To address active power fluctuations in power grids induced by high renewable energy penetration and overcome the limitations of existing probabilistic power flow (PPF) methods that ignore generator frequency regulation constraints, this paper proposes a segmented stochastic power flow modeling method and an efficient analytical framework that incorporates the actions and capacity constraints of regulation units. Firstly, a dual dynamic piecewise linear power injection model is established based on “frequency deviation interval stratification and unit limit-reaching sequence ordering,” clarifying the hierarchical activation sequence of “loads first, followed by conventional units, and finally automatic generation control (AGC) units” along with the coupled adjustment logic upon reaching limits, thereby accurately reflecting the actual frequency regulation process. Subsequently, this model is integrated with the State-Independent Linearized Power Flow (DLPF) model to develop a segmented stochastic power flow framework. For the first time, a deep integration of unscented transformation (UT) and regulation-aware power allocation is achieved, coupled with the Nataf transformation to handle correlations among random variables, forming an analytical framework that balances accuracy and computational efficiency. Case studies on the New England 39-bus system demonstrate that the proposed method yields results highly consistent with those of Monte Carlo simulations while significantly enhancing computational efficiency. The DLPF model is validated to be applicable under scenarios where voltage remains within 0.95–1.05 p.u., and line transmission power does not exceed 85% of rated capacity, exhibiting strong robustness against parameter fluctuations and capacity variations. Furthermore, the method reveals voltage distribution patterns in wind-integrated power systems, providing reliable support for operational risk assessment in grids with high shares of renewable energy. 
Full article
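The unscented transformation (UT) at the heart of the analytical framework can be illustrated in isolation. The sketch below is a generic, minimal UT that propagates an input mean and covariance through an arbitrary mapping via sigma points; the function name, the default scaling parameters, and the use of a plain linear mapping as a stand-in for the segmented power flow model are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def unscented_transform(mean, cov, f, alpha=0.1, beta=2.0, kappa=0.0):
    """Propagate (mean, cov) through f using 2n+1 sigma points."""
    n = len(mean)
    lam = alpha ** 2 * (n + kappa) - n
    # Sigma points: the mean plus/minus scaled Cholesky columns of cov.
    L = np.linalg.cholesky((n + lam) * cov)
    sigma = [mean] + [mean + L[:, i] for i in range(n)] \
                   + [mean - L[:, i] for i in range(n)]
    # Standard UT weights for the mean (wm) and covariance (wc).
    wm = np.full(2 * n + 1, 0.5 / (n + lam))
    wc = wm.copy()
    wm[0] = lam / (n + lam)
    wc[0] = wm[0] + (1.0 - alpha ** 2 + beta)
    ys = np.array([f(s) for s in sigma])
    y_mean = wm @ ys
    d = ys - y_mean
    y_cov = (wc[:, None] * d).T @ d
    return y_mean, y_cov

# Stand-in for a linearized power flow: the outputs are a linear map of
# the uncertain injections (for a linear map the UT is exact).
A = np.array([[1.0, 2.0], [0.5, -1.0]])
mu = np.array([1.0, 2.0])        # mean of the uncertain injections
C = np.array([[0.04, 0.01],      # their covariance (after a Nataf-style
              [0.01, 0.09]])     # decorrelation this would be diagonal)
y_mean, y_cov = unscented_transform(mu, C, lambda x: A @ x)
```

Only 2n + 1 model evaluations are needed, versus thousands for a Monte Carlo run, which is where the efficiency gain of UT-based PPF comes from; for the linear stand-in above, the recovered moments match A·mu and A·C·Aᵀ exactly.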

21 pages, 857 KB  
Article
Safety Assessment of Fuze Based on T-S Fuzzy Fault Tree and Interval Triangular Fuzzy Multi-State Bayesian Network
by Xue Wang, Ya Zhang, Shizhong Li and Bo Li
Machines 2026, 14(1), 14; https://doi.org/10.3390/machines14010014 - 21 Dec 2025
Cited by 1 | Viewed by 508
Abstract
In response to the relevant provisions of safety design criteria for fuze, and considering that Traditional Fault Tree Analysis (TFTA) struggles to describe system failure behavior such as multi-state system faults and probabilistic logic linkages among components, this paper proposed a [...] Read more.
In response to the relevant provisions of safety design criteria for fuze, and considering that Traditional Fault Tree Analysis (TFTA) struggles to describe system failure behavior such as multi-state system faults and probabilistic logic linkages among components, this paper proposed a method for analyzing fuze system failure that integrates the T-S Fuzzy Fault Tree (T-SFFT) with a Bayesian Network (BN) and introduces an interval triangular fuzzy subset method for describing failure rates in fuze system safety assessment. Taking as an example the fault tree of fuze functioning prior to the initiation of the intended arming and safety-interruption sequence, the analysis and calculation results obtained with this approach indicated that the fuzzy subsets of the top-event failure probability under the complete failure state of the fuze system were of the same order of magnitude as those obtained with the TFTA method, validating the feasibility and effectiveness of the method for fuze system safety assessment. Furthermore, by using the BN to obtain the posterior probabilities of nodes, the approach provides a data foundation for fuze system fault diagnosis and is of significant engineering value for fuze system safety assessment. Full article
(This article belongs to the Special Issue Reliability in Mechanical Systems: Innovations and Applications)
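The way a rule-based fuzzy gate combines with Bayesian updating can be sketched with a deliberately tiny example: a two-component system whose gate is a table of conditional failure probabilities (a T-S-style soft gate; a crisp OR gate is the special case with 0/1 entries), from which the top-event probability and a component posterior follow by exact enumeration. All probabilities and rule values below are hypothetical, and the paper's interval triangular fuzzy subsets are replaced by point values for brevity.

```python
from itertools import product

# Hypothetical prior failure probabilities of two basic events.
p = {"X1": 0.02, "X2": 0.05}

# T-S-style gate: P(top fails | state of X1, state of X2),
# with states 0 = normal, 1 = failed. A crisp OR gate would use
# 0/1 entries; soft values express partial failure influence.
rule = {(0, 0): 0.0, (0, 1): 0.9, (1, 0): 0.8, (1, 1): 1.0}

def joint(x1, x2):
    """Joint prior of the basic-event states (independence assumed)."""
    pr1 = p["X1"] if x1 else 1 - p["X1"]
    pr2 = p["X2"] if x2 else 1 - p["X2"]
    return pr1 * pr2

# Marginal (prior) probability of the top event.
p_top = sum(joint(a, b) * rule[(a, b)] for a, b in product((0, 1), repeat=2))

# Diagnostic inference: posterior P(X1 failed | top failed) via Bayes.
p_x1_and_top = sum(joint(1, b) * rule[(1, b)] for b in (0, 1))
posterior = p_x1_and_top / p_top
```

In a full BN the same conditional tables are attached to every gate and the hand enumeration is replaced by standard inference, which is what yields the node posteriors used as a data foundation for fault diagnosis.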

20 pages, 9502 KB  
Article
Meta-Path-Based Probabilistic Soft Logic for Drug–Target Interaction Predictions
by Shengming Zhang and Yizhou Sun
Mathematics 2025, 13(24), 3958; https://doi.org/10.3390/math13243958 - 12 Dec 2025
Viewed by 610
Abstract
Drug–target interaction (DTI) predictions, which aim to predict whether a drug will bind to a target, have received wide attention recently. The goal is to automate and accelerate the costly process of drug design. Most of the recently proposed methods use a single [...] Read more.
Drug–target interaction (DTI) predictions, which aim to predict whether a drug will bind to a target, have received wide attention recently. The goal is to automate and accelerate the costly process of drug design. Most of the recently proposed methods use a single drug–drug similarity and a single target–target similarity for DTI predictions; thus, they are unable to take advantage of the abundant information carried by the various other types of similarity between drugs and between targets. Very recently, some methods have been proposed to leverage multi-similarity information; however, they still lack the ability to take into consideration the rich topological information of the knowledge bases in which the drugs and targets reside. Furthermore, the high computational cost of these approaches limits their scalability to large-scale networks. To address these challenges, we propose a novel approach named summated meta-path-based probabilistic soft logic (SMPSL). Unlike the original PSL framework, which often overlooks quantitative path frequency, SMPSL explicitly captures crucial meta-path count information. By integrating summated meta-path counts into the PSL framework, our method not only significantly reduces the computational overhead but also effectively models the heterogeneity of the network for robust DTI predictions. We evaluated SMPSL against five strong baselines on three public datasets. The experimental results demonstrate that our approach outperformed all of the baselines in terms of the AUPR and AUC scores. Full article
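The role of meta-path counts can be illustrated on a toy heterogeneous network. The sketch below counts drug–drug–target (D-D-T) meta-path instances with an adjacency product and squashes the summated counts into [0, 1] so that they could ground a soft-logic rule; the matrices, the choice of meta-paths, and the exponential squashing are illustrative assumptions, not the SMPSL formulation.

```python
import numpy as np

# Hypothetical toy network: 3 drugs, 2 targets.
DD = np.array([[0, 1, 0],
               [1, 0, 1],
               [0, 1, 0]])   # drug-drug similarity links
DT = np.array([[1, 0],
               [0, 1],
               [0, 0]])      # known drug-target interactions

# The number of D-D-T meta-path instances between each (drug, target)
# pair is the corresponding entry of the adjacency product DD @ DT.
ddt_counts = DD @ DT

# Summing counts over meta-paths (here D-T plus D-D-T) gives one
# evidence feature per pair; an exponential squashing keeps it in
# [0, 1] so it can serve as the truth value of a soft-logic atom.
evidence = DT + ddt_counts
score = 1.0 - np.exp(-evidence.astype(float))
```

Longer meta-paths correspond to longer adjacency products (e.g. `DD @ DD @ DT`), so the counts remain cheap sparse-matrix operations even on large networks.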
