Search Results (8,675)

Search Parameters:
Keywords = probability distribution

18 pages, 516 KB  
Article
On Return Probabilities of Adverse Events Under Dependence and Lessons to Learn for Decision-Making
by Marius Hofert
Risks 2026, 14(3), 58; https://doi.org/10.3390/risks14030058 - 5 Mar 2026
Abstract
Considering achieving a goal in each of several time intervals when, in every time interval, an adverse event may lead to a failure raises the question of the return probability of adverse events, that is, the probability of at least one failure occurring during the time period of interest. Through basic mathematical arguments in tractable cases, we investigate the behavior of the return probability of adverse events in various setups. In the univariate case, we consider the independent and identically distributed setup, the independent setup, the dependent but not necessarily identically distributed setup, and the dependent and identically distributed setup. In the multivariate case, we consider several goals to be achieved in each time period. Besides different setups for the marginal failure probabilities, we study dependence in terms of comonotone blocks and independent blocks and via nested copulas. Where closed-form expressions are not available, we derive bounds on the return probability of at least one failure. Our results are interpretable in terms of decision-making, provide insight into what affects such return probabilities, and may thus help to develop strategies to lower them.
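The two univariate extremes the abstract contrasts can be illustrated in a few lines (a minimal sketch under assumed identical margins, not code from the paper): independent failures compound across intervals, while comonotone failures do not.

```python
def return_probability_iid(p: float, n: int) -> float:
    """P(at least one failure) over n independent, identically
    distributed intervals, each failing with probability p."""
    return 1.0 - (1.0 - p) ** n

def return_probability_comonotone(p: float, n: int) -> float:
    """Under comonotone (perfectly dependent) failures with identical
    margins, all intervals fail together, so the probability of at
    least one failure stays at p regardless of n."""
    return p

# With p = 1% per interval, 100 intervals push the independent return
# probability above 63%, while the comonotone one remains at 1%.
```

This is the sense in which dependence can lower return probabilities, one of the decision-making lessons the abstract points to.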

29 pages, 3905 KB  
Article
CS-MLAkNN: A Cost-Sensitive Adaptive k-Nearest Neighbors Algorithm for Imbalanced Multi-Label Learning
by Zhengyao Shen, Jicong Duan, Ying Wang and Hualong Yu
Symmetry 2026, 18(3), 448; https://doi.org/10.3390/sym18030448 - 5 Mar 2026
Abstract
Multi-label data usually carries a complex structural class imbalance, which significantly affects the overall predictive performance of multi-label learning models. Although many studies have investigated this problem, most existing methods rely on resampling, static cost weighting, or ensemble learning. Few studies simultaneously consider cost information and neighborhood size within the local statistical model of ML-kNN. To address this issue, this paper proposes a cost-sensitive adaptive k-nearest neighbors algorithm, named CS-MLAkNN, for imbalanced multi-label learning. The algorithm implements a dual cost-sensitive strategy at both the feature and label levels within the ML-kNN framework. Specifically, feature-level cost sensitivity is achieved through distance weighting during the training phase. In the prediction phase, label distribution information is incorporated into the posterior probability calculation to achieve label-level cost sensitivity. Moreover, the optimal number of neighbors (k) is determined adaptively through cross-validation. CS-MLAkNN maintains the simplicity and interpretability of the original ML-kNN, and meanwhile it explicitly introduces cost sensitivity and adaptiveness into three key steps: distance metric, posterior decision, and neighbor determination. Experimental results on 14 benchmark datasets demonstrate that the proposed method achieves optimal or near-optimal performance across various evaluation metrics. It also shows significant advantages over other state-of-the-art imbalanced multi-label learning algorithms. Full article
(This article belongs to the Special Issue Advances in Machine Learning and Symmetry/Asymmetry)
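Label-level cost sensitivity of the kind described can be sketched as an inverse-frequency weighting of neighbor votes (a hypothetical simplification; `cost_sensitive_vote` and its inputs are illustrative stand-ins, not the CS-MLAkNN posterior calculation).

```python
from collections import Counter

def cost_sensitive_vote(neighbor_labels, label_freq):
    """neighbor_labels: label sets of the k nearest neighbors;
    label_freq: global frequency of each label in the training data.
    Votes for rare labels are up-weighted by their inverse frequency."""
    votes = Counter()
    for labels in neighbor_labels:
        for lbl in labels:
            votes[lbl] += 1.0 / label_freq[lbl]
    return dict(votes)
```

With two neighbors carrying a rare label (frequency 0.1) and two carrying a common one (frequency 0.5), the rare label accumulates the larger weighted vote, which is the imbalance-countering effect such methods aim for.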

26 pages, 4251 KB  
Article
Reliability-Aware Robust Optimization for Multi-Type Sensor Placement Under Sensor Failures
by Shenghuan Zeng, Ding Luo, Pujingru Yan, Naiwei Lu, Ke Huang and Lei Wang
Buildings 2026, 16(5), 1024; https://doi.org/10.3390/buildings16051024 - 5 Mar 2026
Abstract
In the field of structural health monitoring systems, sensors serve as the fundamental components for assessing infrastructure integrity. The rationality of their spatial configuration significantly influences the precision of structural performance assessment, the efficacy of damage detection algorithms, and the operational reliability of the system throughout its designated lifecycle. A robust optimization methodology for the placement of multi-type sensors is proposed in this study, explicitly formulated to mitigate the negative impact of sensor malfunctions during long-term operation. First, a rigorous evaluation framework for sensor placement schemes is established based on Bayesian inference and the minimization of information entropy, thereby quantifying the uncertainty inherent in parameter identification. Then, a probabilistic model of sensor failure is developed utilizing the Weibull distribution to capture time-dependent reliability characteristics, combined with a modified information entropy calculation method that mathematically assimilates these failure probabilities into the optimization objective. Finally, a heuristic search strategy is employed to achieve the robust optimal placement of multi-type sensors, efficiently navigating the complex combinatorial search space. In contrast to deterministic information entropy (DIE) methodologies, which assume ideal sensor functionality, the robust information entropy (RIE) approach comprehensively accounts for the stochastic nature of sensor failures and their impact on the information content of the monitoring network, thereby significantly augmenting the robustness and redundancy of the sensor configuration. 
Validations utilizing a numerical frame structure and a finite element bridge model demonstrate that the RIE method effectively integrates the sensor failure probability model to yield robust optimal placement schemes, minimizing the risk of information loss and ensuring reliable structural health monitoring throughout the engineering lifecycle.
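The time-dependent failure probability feeding the robust entropy objective can be sketched with the common two-parameter Weibull form (an assumed parameterization; the paper's model may differ in detail).

```python
import math

def weibull_failure_prob(t: float, eta: float, beta: float) -> float:
    """F(t) = 1 - exp(-(t/eta)^beta): probability a sensor has failed
    by time t, with scale eta and shape beta."""
    return 1.0 - math.exp(-((t / eta) ** beta))
```

At t = eta the failure probability is 1 - e^(-1), about 0.632, for any shape; shapes beta > 1 model the wear-out behavior relevant to long-term monitoring.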

20 pages, 4709 KB  
Article
Low-Contrast Coating Surface Microcrack Detection Using an Improved U-Net Network Based on Probability Map Fusion
by Junwen Xue, Wuzhi Chen, Shida Zhang, Xukun Yang, Keji Pang, Jiaojiao Ren, Lijuan Li and Haiyan Li
Sensors 2026, 26(5), 1629; https://doi.org/10.3390/s26051629 - 5 Mar 2026
Abstract
To address challenges such as low contrast, complex backgrounds, and discontinuous crack distribution in coating surface microcrack detection, a detection method combining circular neighborhood features with an improved U-net is proposed. In the preprocessing stage, a background template is constructed via median filtering, and crack contrast is enhanced through a combination of difference operations and Gaussian smoothing. Based on the spatial aggregation and directionality of crack pixels, multi-scale and multi-directional circular scanning filters were constructed to generate neighborhood difference maps for quantifying the crack distribution probability. The ImF-Att-DO-U-net was designed by utilizing a dual-channel input consisting of the original image and the crack probability map. The encoder embeds lightweight CBAMs to strengthen crack features, while the decoder introduces DO-Conv and Leaky ReLU to enhance detail capture capabilities. A hybrid loss function combining Binary Cross-Entropy and Dice loss was employed to optimize class imbalance. Algorithm testing results demonstrate that the proposed method achieved a Dice coefficient of 0.884, an SSIM of 0.893, and an accuracy of 0.911, outperforming comparative models such as DO-U-net. The extraction rate for cracks ≥10 μm reached 98%, with a minimum detectable crack size at the 7 μm level. The method exhibited excellent robustness under noise and blur testing, demonstrating superior environmental adaptability. Full article
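The reported Dice coefficient of 0.884 follows the standard overlap definition, sketched here for flat binary masks (illustration only, not the paper's evaluation code).

```python
def dice(pred, truth):
    """Dice = 2|P ∩ T| / (|P| + |T|) for binary masks given as 0/1 sequences."""
    inter = sum(p and t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    return 2.0 * inter / total if total else 1.0
```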

18 pages, 7000 KB  
Article
Long-Term Hydrodynamic Evolution and Extreme Parameter Estimation in the Mekong River Estuary
by Xuanjun Huang, Bin Wang, Yongqing Lai, Jiawei Yu and Yujia Tang
Water 2026, 18(5), 620; https://doi.org/10.3390/w18050620 - 5 Mar 2026
Abstract
Tropical estuarine hydrodynamic processes are governed by complex interactions between tides, monsoons, and fluvial runoff. To obtain long-term (≥30 years) hydrodynamic conditions of the Mekong River Estuary, this study established a Finite Volume Coastal Ocean Model (FVCOM) coupled with validated Weather Research and Forecast (WRF) wind forcing for a 32-year (1988–2019) high-resolution simulation. Validation against in situ observations confirms the model’s robustness. Temporal–spatial patterns of water level and current were analyzed, and extreme parameters for 1–100 year return periods were derived via the Pearson-III probability distribution. Results indicate the study area is a mesotidal environment (tidal range = 3.58 m) dominated by SSE-NNW reciprocating tidal currents. Relative to Vietnam’s national elevation datum, 100-year return period extreme high/low water levels are 2.15 m and −2.03 m, with a maximum storm surge setup of 2.09 m. The 100-year return period maximum current velocity reaches 4.58 m/s (A21 station), and Mekong River runoff exerts a negligible influence (<5% velocity change). This study provides high-precision baseline data for offshore wind farm engineering and disaster risk assessment, offering a methodological reference for tropical estuarine hydrodynamic simulations. Full article
(This article belongs to the Special Issue Hydrology and Hydrodynamics Characteristics in Coastal Area)
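The 1–100 year return periods quoted above rest on the textbook relation T = 1/P between a return period and its annual exceedance probability (a standard hydrology convention, not the study's Pearson-III fitting code).

```python
def exceedance_probability(return_period_years: float) -> float:
    """Annual probability that the T-year event is equaled or exceeded."""
    return 1.0 / return_period_years

def prob_at_least_once(return_period_years: float, horizon_years: int) -> float:
    """Probability the T-year event occurs at least once over the horizon,
    assuming independent years."""
    p = exceedance_probability(return_period_years)
    return 1.0 - (1.0 - p) ** horizon_years
```

A 100-year water level thus has roughly a 63% chance of being exceeded at least once during a 100-year design life, a distinction that matters for offshore wind farm engineering.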

64 pages, 1642 KB  
Article
Asymptotic Theory for Multivariate Nonparametric Quantile Regression with Stationary Ergodic Functional Covariates and Missing-at-Random Responses
by Hadjer Belhas, Mustapha Mohammedi and Salim Bouzebda
Symmetry 2026, 18(3), 445; https://doi.org/10.3390/sym18030445 - 4 Mar 2026
Abstract
Quantiles are among the most fundamental constructs in probability theory and statistics, intrinsically linked to order structures, stochastic dominance, and the principles of robust statistical inference. Although the univariate theory of quantiles is by now classical and well developed, their generalization to multivariate settings remains mathematically subtle and methodologically demanding. In particular, extending the notion of “location within a distribution” beyond one dimension raises delicate questions of geometry, ordering, and equivariance. Within this landscape, the spatial—or geometric—formulation of multivariate quantiles has emerged as a rigorous and conceptually unifying framework capable of reconciling these issues. In this work we advance this paradigm by introducing a kernel-based estimation procedure for nonparametric conditional geometric quantiles of a multivariate response Y ∈ ℝ^q (q ≥ 2) given a functional covariate X that takes values in an infinite-dimensional space. The data are assumed to form a strictly stationary and ergodic process, while the responses may be subject to a missing-at-random mechanism, a feature of substantial practical relevance. Our analysis establishes strong consistency of the proposed estimator, characterizes its optimal convergence rate, and derives its asymptotic distribution. These limit theorems, in turn, provide the theoretical foundation for constructing asymptotically valid confidence regions and for performing inference in multivariate quantile regression with functional covariates. The theoretical developments rest on natural complexity conditions for the involved functional classes together with mild smoothness and regularity assumptions. This balance between generality and mathematical precision ensures that the resulting methodology is not only robust in a rigorous probabilistic sense but also widely applicable to contemporary problems in high-dimensional and functional data analysis.
The proposed methodology is numerically investigated through simulations and is implemented in a real data application.
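The unconditional geometric quantile at the center of a distribution is the spatial median, which Weiszfeld's classical algorithm computes by iteratively re-weighted averaging (a standard method shown for orientation only; the paper's kernel-based conditional estimator is substantially more general).

```python
def spatial_median(points, iters=200):
    """Weiszfeld iteration: minimizes the sum of Euclidean distances
    to the given points (tuples of equal dimension)."""
    dim = len(points[0])
    m = [sum(p[k] for p in points) / len(points) for k in range(dim)]
    for _ in range(iters):
        wsum, acc = 0.0, [0.0] * dim
        for p in points:
            d = sum((p[k] - m[k]) ** 2 for k in range(dim)) ** 0.5
            if d == 0.0:  # iterate landed exactly on a data point; keep it
                return list(p)
            w = 1.0 / d
            wsum += w
            for k in range(dim):
                acc[k] += w * p[k]
        m = [a / wsum for a in acc]
    return m
```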
23 pages, 5979 KB  
Article
Physics-Informed Graph Attention Network with Topology Masking for Probabilistic Load Forecasting in Active Distribution Networks
by Wenting Lei, Weifeng Peng, Chenxi Dai and Shufeng Dong
Energies 2026, 19(5), 1294; https://doi.org/10.3390/en19051294 - 4 Mar 2026
Abstract
The integration of distributed photovoltaics (PV) introduces time-varying electrical coupling in active distribution networks, limiting the efficacy of conventional forecasting methods that rely on incomplete topological information and static physical models. This paper proposes a physics-informed spatio-temporal graph attention network (PI-STGAT) for probabilistic load forecasting under highly fluctuating conditions. A condition-adaptive correlation blending mechanism, derived from voltage–power sensitivity principles, fuses physical priors with statistical correlations using a PV-weighted strategy to capture time-varying electrical connectivity. An impedance-weighted continuous physical gating architecture maps voltage correlation coefficients into continuous attention biases, reflecting the spatial continuity of electrical distances while suppressing long-range noise. An uncertainty-aware adaptive physical constraint strategy dynamically modulates physical loss weights based on prediction variance and PV penetration, balancing fitting accuracy against physical consistency. Validation on real-world distribution network data demonstrates that, over a 24 h day-ahead horizon, PI-STGAT achieves a MAPE of 5.50%, a 3.7% relative reduction compared with LSTM. The model further attains a prediction interval coverage probability of 97.9%, confirming reliable uncertainty estimates under complex conditions. Full article
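The 97.9% figure is a prediction interval coverage probability (PICP): the fraction of actual values that fall inside their predicted intervals (standard definition; the code is illustrative).

```python
def picp(actuals, lowers, uppers):
    """Fraction of observations covered by their [lower, upper] interval."""
    hits = sum(1 for a, lo, hi in zip(actuals, lowers, uppers) if lo <= a <= hi)
    return hits / len(actuals)
```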

32 pages, 4390 KB  
Article
Predicting the Remaining Useful Life of Ship Shafting Using Bayesian Networks with Asymmetric Probability Distributions
by Peng Dong, Ge Han and Luwen Yuan
Symmetry 2026, 18(3), 443; https://doi.org/10.3390/sym18030443 - 4 Mar 2026
Abstract
Accurately predicting the remaining useful life (RUL) of ship shafting is crucial for ensuring navigation safety and optimizing operation and maintenance. Traditional Bayesian Network (BN) methods are usually based on the assumption of symmetric distributions. They struggle to effectively characterize common statistical properties such as asymmetry and heavy tails during the shafting degradation process, leading to biases in prediction results. To address this issue, this study proposes an Asymmetric Distribution Bayesian Network (ADBN) method. The method consists of three key components. Firstly, each node selects the optimal asymmetric distribution form based on the Bayesian Information Criterion (BIC) to better fit data characteristics. Secondly, a Generalized Linear Model (GLM) is used to associate distribution parameters (e.g., location, scale, shape) with parent node states, enabling the conditional distribution to adaptively evolve with the system degradation process. Finally, to tackle the complex inference problem under asymmetric distributions, an approximate algorithm based on stochastic gradient variational inference is designed to ensure prediction timeliness. Experimental results show that the ADBN method outperforms traditional Gaussian networks in terms of Mean Absolute Error in the early, middle, and late stages of RUL prediction, and can provide more accurate prediction intervals. This research offers a probabilistic approach that better aligns with actual statistical properties for modeling ship shafting degradation. Full article
(This article belongs to the Special Issue Symmetry in Fault Detection, Diagnosis, and Prognostics)
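The per-node distribution choice via BIC can be sketched as follows (a generic illustration; the candidate names and fitted values below are hypothetical, not the paper's).

```python
import math

def bic(log_likelihood: float, n_params: int, n_obs: int) -> float:
    """Bayesian Information Criterion: lower is better."""
    return n_params * math.log(n_obs) - 2.0 * log_likelihood

def select_by_bic(candidates, n_obs):
    """candidates: name -> (max log-likelihood, number of parameters).
    Returns the name of the candidate with the smallest BIC."""
    return min(candidates, key=lambda name: bic(*candidates[name], n_obs))
```

A skewed candidate with one extra parameter wins only when its likelihood gain outweighs the ln(n) complexity penalty, which is how BIC-based selection avoids defaulting to symmetric Gaussians.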

11 pages, 740 KB  
Article
Impact of a Second E-Reminder on Fecal Immunochemical Test Uptake in the Flemish Colorectal Cancer Screening Program: A Quasi-Experimental Study
by Sarah Hoeck and Thuy Ngan Tran
Gastrointest. Disord. 2026, 8(1), 14; https://doi.org/10.3390/gidisord8010014 - 4 Mar 2026
Abstract
Background: Flanders (Belgium) offers a fecal immunochemical test (FIT) biennially to citizens aged 50–74 years, but uptake is suboptimal (~50%). This study evaluated the impact of a second e-reminder on FIT uptake. Methods: We conducted a quasi-experimental study comparing FIT uptake in individuals who received a first e-reminder during June 2023–May 2024 and a second e-reminder five weeks later (intervention cohort) with those who received a first e-reminder in June 2021–May 2022 without a second reminder (historical control). The study outcome was FIT uptake within one year after the first e-reminder. Analyses were stratified by screening history (regular vs. irregular participants). Results: The study population consisted of 54,734 regular participants (27,522 control and 27,212 intervention) and 18,492 irregular participants (8565 control and 9927 intervention). Median age was slightly lower in the intervention group (regular: 57 vs. 59 years; irregular: 62 vs. 64 years). Gender distribution was balanced (≈50% men). Regular participants receiving a second e-reminder had an 80% higher probability of participation than controls (OR 1.80; 95% CI 1.73–1.86; p < 0.0001), with uptake increasing from 29.5% to 43.7%. Irregular participants with a second e-reminder had a 91% higher probability of participation compared with no second e-reminder (OR 1.91; 95% CI 1.74–2.09; p < 0.0001), with uptake increasing from 9.4% to 18.4%. Conclusions: A second e-reminder significantly increased FIT uptake among both regular and irregular participants in the Flemish colorectal cancer screening program. These findings support its use as a low-cost strategy to improve population-level screening participation.
(This article belongs to the Special Issue Feature Papers in Gastrointestinal Disorders in 2025–2026)
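The reported odds ratios are model-based, but the underlying arithmetic can be checked crudely from the uptake percentages (an illustration; the crude OR will not match the adjusted estimates exactly).

```python
def odds_ratio(p_treated: float, p_control: float) -> float:
    """OR = odds(treated) / odds(control), with odds = p / (1 - p)."""
    return (p_treated / (1.0 - p_treated)) / (p_control / (1.0 - p_control))

# Regular participants: uptake rose from 29.5% to 43.7%, a crude OR of
# about 1.86 (the adjusted estimate reported in the abstract is 1.80).
```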

28 pages, 2621 KB  
Article
A Bilevel Multi-Market Coupling Optimization Framework for Nuclear Power Integration: Joint Modeling of Energy, Reserve, and Capacity Markets
by Peng Ji, Yiman Liu, Nan Li and Zhongfu Tan
Energies 2026, 19(5), 1276; https://doi.org/10.3390/en19051276 - 4 Mar 2026
Abstract
This paper develops a bilevel multi-market coupling optimization framework to analyze the strategic participation of nuclear power plants in modern electricity systems where energy, reserve, and capacity markets are simultaneously cleared. The upper-level problem represents the Independent System Operator’s objective of maximizing system-wide social welfare under network, reserve, and carbon-cap constraints, while the lower-level problem captures the nuclear operator’s profit maximization subject to ramping limits, minimum uptime requirements, fuel-cycle depletion, and deliverability restrictions. By embedding these technical constraints into a bilevel structure reformulated through tractable complementarity conditions, the model captures the interdependence of nuclear scheduling, reserve deployment, capacity commitments, and carbon compliance in a single optimization environment. The proposed framework is applied to a stylized but realistic case study with 96-h time resolution, 12-bus network topology, and detailed representations of wind variability, demand elasticity, and emission caps. The model quantifies how nuclear participation displaces 40,000 tCO2 over the horizon, raises producer surplus by 12 percent, and increases total social welfare by nearly 18 percent when all three markets are coupled, relative to an energy-only benchmark. Nuclear profitability is shown to be highly sensitive to renewable volatility, with ±20 percent swings in wind uncertainty altering profits by 0.24 million USD. Reserve deliverability emerges as the second most influential driver, while policy variables such as carbon price shifts play a smaller role. Reliability analysis based on the complementary cumulative distribution of unserved energy demonstrates that joint market clearing reduces the probability of load shedding at the 0.5 percent threshold from 8 percent in energy-only markets to 2 percent under full coupling. 
Overall, the study provides the first integrated modeling treatment of nuclear bidding across energy, reserve, and capacity markets within a bilevel optimization framework. By jointly considering operational constraints and policy targets, the framework reveals how nuclear power can simultaneously improve economic efficiency, enhance system reliability, and support carbon mitigation. The results highlight that nuclear’s value extends well beyond baseload energy provision, with multi-market strategies offering measurable gains for both individual operators and social welfare under conditions of uncertainty and constraint.
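The reliability comparison rests on the complementary cumulative distribution of unserved energy, i.e., the fraction of scenarios in which unserved energy exceeds a threshold (standard definition; the sample values below are hypothetical).

```python
def ccdf_exceedance(samples, threshold):
    """P(X > threshold), estimated empirically from scenario samples."""
    return sum(1 for s in samples if s > threshold) / len(samples)
```

Evaluated at the 0.5 percent threshold, this is the quantity the abstract reports dropping from 8 percent to 2 percent under full market coupling.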

13 pages, 368 KB  
Article
Tree-Based Machine Learning Intermittent Demand Forecasting for Spare Parts in Electric Vehicle Manufacturing
by Wenhan Fu, Haolin Bian, Junfei Chen and Sheng Jing
World Electr. Veh. J. 2026, 17(3), 127; https://doi.org/10.3390/wevj17030127 - 3 Mar 2026
Abstract
As a crucial pillar industry in the country, the automotive industry continues to evolve with the increasing number of vehicles in operation, leading to a continual rise in the need for aftermarket parts and repair services. Fluctuations in automotive spare part requirements are influenced by various complex factors, which significantly impact production costs. The intermittent distribution of such requirements and strict limitations highlights the importance of automotive spare part management to enhance production efficiency and reduce costs. To improve demand forecasting accuracy, this study summarizes and synthesizes trends in automotive spare parts; proposes a tree-based machine learning forecasting model, based on a two-stage random forest (RF) structure that separately models demand occurrence probability and conditional demand size; and compares the outcomes with benchmarks to validate model effectiveness. The empirical study is conducted using an industrial dataset consisting of monthly demand records for approximately 2500 spare parts over a four-year period. This forecasting approach enables companies to rationalize inventory storage, ensure the quality of automotive repairs, and elevate service standards. Simultaneously, by improving the efficiency of inventory planning and allocation decisions, companies can enhance the quality of after-sales services, reduce inventory costs, and maximize the value of the automotive industry chain. Through reducing spare parts wastage and further lowering enterprise costs and industrial emissions, companies can achieve the goals of automotive supply chain resilience. Notably, this study focuses on automotive spare parts management and provides a feasible, reliable, and interpretable forecasting solution for automotive manufacturers to address intermittent demand challenges in spare parts management. Full article
(This article belongs to the Section Manufacturing)
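The two-stage structure (occurrence probability times conditional demand size) can be sketched with a naive empirical estimator standing in for the paper's two random forests (an assumed simplification).

```python
def fit_two_stage(history):
    """history: per-period demands, zeros allowed. Returns the empirical
    occurrence probability and the mean nonzero demand size."""
    nonzero = [d for d in history if d > 0]
    p = len(nonzero) / len(history)
    size = sum(nonzero) / len(nonzero) if nonzero else 0.0
    return p, size

def forecast(history):
    """Expected demand = P(demand > 0) * E[demand | demand > 0]."""
    p, size = fit_two_stage(history)
    return p * size
```

In the paper, each factor is predicted by its own random forest from covariates; here both factors are plain empirical averages.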

21 pages, 2858 KB  
Article
Generation of Distances Between Feature Vectors Derived from a Siamese Neural Network for Continuous Authentication
by Sergey Davydenko, Pavel Laptev and Evgeny Kostyuchenko
J. Cybersecur. Priv. 2026, 6(2), 45; https://doi.org/10.3390/jcp6020045 - 3 Mar 2026
Abstract
Continuous authentication is a promising method for protecting computer systems in the event of compromise of primary authentication factors, such as passwords or tokens. Systems employing continuous authentication that rely on biometrics may not be restricted to a single biometric characteristic; rather, they can simultaneously utilize multiple characteristics and subsequently arrive at a conclusive decision based on their collective analysis outcomes. One of the significant challenges researchers encounter when investigating effective fusion in decision-making is the lack of data. At present, data generation primarily involves the creation of feature vectors or attack simulation. This paper introduces a method for directly generating distances derived from a Siamese neural network, utilizing the probability density function of an existing distribution. Through statistical analysis, we successfully generated 5000 samples that correspond to the initial distribution, which were then employed to discover the threshold values at which FAR and FRR were less than 1%. The methods developed can be further applied to identify the most efficient strategies for integrating the results of continuous authentication in systems that incorporate multiple biometric characteristics. Full article
(This article belongs to the Special Issue Cyber Security and Digital Forensics—3rd Edition)
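Generating synthetic distances that match an observed distribution can be sketched with inverse-transform sampling from the empirical quantile function (a simplified stand-in for the paper's density-based generator; the observed distances below are hypothetical, and at least two observations are assumed).

```python
import random

def sample_distances(observed, n, seed=0):
    """Draw n synthetic values by interpolating between sorted
    order statistics of the observed distances."""
    rng = random.Random(seed)
    srt = sorted(observed)
    out = []
    for _ in range(n):
        u = rng.random() * (len(srt) - 1)
        i = int(u)
        out.append(srt[i] + (u - i) * (srt[i + 1] - srt[i]))
    return out
```

Samples drawn this way stay within the observed range and approximately reproduce its distribution, which is the property needed for threshold tuning of FAR/FRR.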

27 pages, 9176 KB  
Article
Multi-Objective Topological Optimization of 3D Multi-Material Structures Using the SESO Method with FORM
by Márcio Maciel da Silva, Hélio Luiz Simonetti, Francisco de Assis das Neves and Marcílio Sousa da Rocha Freitas
Buildings 2026, 16(5), 981; https://doi.org/10.3390/buildings16050981 - 2 Mar 2026
Abstract
Topological optimization has established itself as an efficient tool for the design of highly complex structures and the rational use of materials, especially in problems involving multiple constraints and conflicting objectives. This work presents a new multi-material topological optimization approach based on the ESO smoothing method (SESO), formulated as a multi-objective optimization problem in a MATLAB R2021a environment. The multi-objective formulation simultaneously considers the minimization of the maximum von Mises equivalent stress (or minimum principal stress) and the maximum displacement, which are fundamental criteria for structural engineering design. The proposed methodology also incorporates a reliability analysis using the First-Order Reliability Method (FORM), modeling uncertainties associated with the applied force, volume fraction, and modulus of elasticity through normal and lognormal probability distributions, with a target reliability index of β_target = 3.0. The consistency of the reliability analysis was evaluated using Monte Carlo simulations, validating the reliability indices obtained via FORM. The approach was applied to two classical three-dimensional numerical examples: a cantilever beam under base and center loads and an MBB beam, considering two widely used engineering materials, steel and concrete. The results indicate improved multi-material distribution in the design domain and greater structural robustness against unfavorable loading planes, variations in the modulus of elasticity, and volume constraints imposed by FORM. Furthermore, the minimum yield stress of steel (σ_y,min) and the compressive strength of concrete (f_ck,min) were calibrated, representing the minimum material strengths required to resist the maximum von Mises stress in steel and the minimum principal stress (σ_3) in concrete, ensuring the target reliability index is achieved.
This method thus highlights the integration of SESO with multi-material, multi-objective, and reliability-based optimization as a consistent, robust, and practically relevant strategy with potential for future applications in structural engineering projects. Full article
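The Monte Carlo check of a FORM reliability index described in this abstract can be sketched in a few lines. The limit state g = R - S, the lognormal/normal parameter values, and the sample size below are illustrative assumptions for demonstration, not the paper's structural model:

```python
import numpy as np
from statistics import NormalDist

# Crude Monte Carlo validation of a reliability index: sample the random
# variables, estimate the failure probability, and convert it to an
# equivalent beta. All distribution parameters here are assumptions.
rng = np.random.default_rng(42)
n = 1_000_000

# Lognormal resistance R: convert target mean/CoV to the parameters of the
# underlying normal distribution.
mean_R, cov_R = 300.0, 0.10
sigma_ln = np.sqrt(np.log(1.0 + cov_R**2))
mu_ln = np.log(mean_R) - 0.5 * sigma_ln**2
R = rng.lognormal(mu_ln, sigma_ln, n)

# Normal load effect S.
S = rng.normal(200.0, 30.0, n)

pf = float(np.mean(R - S < 0.0))   # Monte Carlo failure probability
beta = -NormalDist().inv_cdf(pf)   # equivalent reliability index
print(f"Pf = {pf:.2e}, beta = {beta:.2f}")
```

A FORM result would be considered consistent when the beta obtained this way agrees with the FORM index within sampling error; for a target of β_target = 3.0 one would calibrate the strength parameters until beta reaches that value.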

22 pages, 3968 KB  
Article
Research on Gas Turbine Data Scaling Technology Based on Temperature-Gradient-Guided Dynamic Genetic Optimization Sampling Algorithm
by Yang Liu, Yongbao Liu and Yuhao Jia
Processes 2026, 14(5), 818; https://doi.org/10.3390/pr14050818 - 2 Mar 2026
Abstract
Gas turbines play a critical role in modern power systems, yet their transient operations (e.g., start-up, load mutation) induce significant thermal inertia in metal components, leading to deviations between simulation results and actual performance. Traditional low-dimensional (1D/0D) simulation models sacrifice detailed flow and temperature field information to reduce computational load, while high-dimensional (3D) computational fluid dynamics (CFD) models are impractical for full-system simulations due to excessive computational costs. This discrepancy creates a critical trade-off between simulation accuracy and efficiency in gas turbine thermal inertia studies. To address this challenge, this study proposes a temperature-gradient-guided dynamic genetic optimization sampling algorithm (TDGA) and integrates it into a multi-dimensional data scaling framework for gas turbines. A fully coupled simulation framework was established, combining 3D CFD models for turbine flow paths (resolving detailed flow and temperature fields) and 1D thermal models for metal components (casing, hub, blades). The TDGA was designed to enable efficient data interoperability between models: it incorporates a dynamic encoding mechanism, temperature gradient weight matrix, density penalty term, quantity penalty term, and regularization term to optimize sampling point distribution. Dynamic weight coefficients for each objective function term and adaptive crossover/mutation probabilities were introduced to balance global exploration (early iterations) and local exploitation (late iterations) during optimization. Comparative analysis showed that the TDGA achieved a mean squared error (MSE) of 15.52 K, far lower than those of traditional Latin Hypercube Sampling (75.07 K) and Bootstrap Sampling (64.38 K). It allocated 70.11% of sampling points to high-temperature-gradient regions while reducing the total number of sampling points to 2765.
During the middle stage of the gas turbine start-up process, compared with the traditional Latin Hypercube Sampling and Bootstrap Sampling, the average error of the proposed sampling algorithm is reduced by 17.4% and 13.3%, respectively. The proposed TDGA-based framework effectively balances simulation accuracy and computational efficiency, providing a reliable approach for the transient thermal analysis of gas turbines. Full article
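The two ingredients named in this abstract, a weighted multi-term sampling objective and adaptive crossover/mutation probabilities, can be illustrated with a generic sketch. The term definitions, weights, and linear schedules below are assumptions for demonstration, not the paper's actual TDGA formulation:

```python
import numpy as np

def fitness(points, temp_grad, w=(1.0, 0.5, 0.1, 0.01)):
    """Weighted multi-term objective in the spirit of the TDGA: reward
    sampling in steep temperature-gradient regions; penalise clustering
    (density term), point count (quantity term); add an L2 regulariser.
    points: (n, d) sample coordinates; temp_grad: callable -> (n,) magnitudes.
    All weights are illustrative assumptions."""
    w_grad, w_dens, w_qty, w_reg = w
    grad_reward = temp_grad(points).mean()
    # Density penalty: inverse nearest-neighbour distance (crowding scores worse).
    d = np.linalg.norm(points[:, None] - points[None, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    density_pen = (1.0 / d.min(axis=1)).mean()
    return (w_grad * grad_reward - w_dens * density_pen
            - w_qty * len(points) - w_reg * np.square(points).mean())

def adaptive_rates(gen, max_gen, pc=(0.6, 0.9), pm=(0.10, 0.01)):
    """Crossover probability rises and mutation probability falls with the
    generation count: global exploration early, local exploitation late."""
    t = gen / max_gen
    return pc[0] + t * (pc[1] - pc[0]), pm[0] + t * (pm[1] - pm[0])
```

A GA loop would evaluate `fitness` for each candidate sampling layout and call `adaptive_rates(gen, max_gen)` once per generation to update its operators.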
(This article belongs to the Section AI-Enabled Process Engineering)

26 pages, 1294 KB  
Article
Anomaly Detection and Fault Diagnosis Based on Action States for Excavators
by Jaehyun Soh, Changmin Lee, Wonkyung Kim, Byungmun Kang and DaeEun Kim
Appl. Sci. 2026, 16(5), 2414; https://doi.org/10.3390/app16052414 - 2 Mar 2026
Abstract
Anomaly detection has been a challenging subject in many industrial fields. In industrial machinery such as hydraulic excavators, sensor data distributions are inherently multimodal because different operating conditions produce distinct sensor signatures, and conventional algorithms struggle to establish clear normal–abnormal boundaries when these conditions are mixed. We propose an action-state decomposition framework that partitions multimodal sensor data into homogeneous subsets based on discretized control inputs, thereby reducing the ambiguity of normal–abnormal boundaries by learning state-conditional distributions. The approach comprises a reactive method that evaluates each sample within its action state, and a history-based method that incorporates temporal context from previous action states. This decomposition is algorithm-agnostic and can improve detection performance across diverse anomaly detection algorithms. The framework is further extended to Bayesian fault diagnosis that identifies the root cause of failures using action-state-conditional detection probabilities. Experiments on simulated excavator data and two real-world benchmark datasets (UCI Hydraulic Systems and SKAB) demonstrate the generalizability of the proposed mode decomposition and provide insights into factors that may influence its effectiveness. The history-based method achieves a mean AUC of 0.89 across sensor fault types, outperforming all baseline algorithms, and the Bayesian fault diagnosis achieves 86.7% accuracy in identifying the root cause among six action fault types. For the proposed GMM-based methods, the decomposition also substantially reduces per-sample inference time by approximately 10× (from 8.68 μs to 0.75 μs), enabling real-time deployment in industrial settings. Full article
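The action-state decomposition and Bayesian diagnosis described in this abstract can be sketched as follows. Single Gaussians stand in for the paper's GMMs to keep the sketch short, and the threshold and likelihood values are illustrative assumptions:

```python
import numpy as np

def fit_states(X, states, eps=1e-6):
    """Fit one Gaussian per action state (a stand-in for per-state GMMs).
    X: (n, d) sensor samples; states: (n,) discrete action-state labels."""
    models = {}
    for s in np.unique(states):
        Xs = X[states == s]
        mu = Xs.mean(axis=0)
        cov = np.cov(Xs.T) + eps * np.eye(X.shape[1])  # regularised covariance
        models[s] = (mu, np.linalg.inv(cov))
    return models

def detect(X, states, models, thresh=9.0):
    """Reactive method: score each sample against its own state's model
    (squared Mahalanobis distance) and threshold. `thresh` is an assumption."""
    flags = []
    for x, s in zip(X, states):
        mu, prec = models[s]
        d = x - mu
        flags.append(float(d @ prec @ d) > thresh)
    return np.array(flags)

def diagnose(detections, like, prior):
    """Bayesian root cause: posterior over fault types given binary per-state
    detection outcomes, where like[f, s] = P(detection in state s | fault f)."""
    post = prior * np.prod(like**detections * (1 - like)**(1 - detections), axis=1)
    return post / post.sum()
```

Partitioning by state turns one multimodal density into several unimodal ones, which is why the normal-abnormal boundary sharpens; the diagnosis step then ranks fault hypotheses by how well they explain which states raised alarms.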
(This article belongs to the Special Issue Mechanical Fault Diagnosis and Signal Processing)
