Search Results (85)

Search Parameters:
Keywords = probabilistic distribution class

23 pages, 6751 KB  
Article
Health Risk Assessment of Groundwater in Cold Regions Based on Kernel Density Estimation–Trapezoidal Fuzzy Number–Monte Carlo Simulation Model: A Case Study of the Black Soil Region in Central Songnen Plain
by Jiani Li, Yu Wang, Jianmin Bian, Xiaoqing Sun and Xingrui Feng
Water 2025, 17(20), 2984; https://doi.org/10.3390/w17202984 - 16 Oct 2025
Abstract
The quality of groundwater, a crucial freshwater resource in cold regions, directly affects human health. This study used groundwater quality monitoring data collected in the central Songnen Plain in 2014 and 2022 as a case study. The improved DRASTICL model was used to assess the vulnerability index, while water quality indicators were selected using a random forest algorithm and combined with the entropy-weighted groundwater quality index (E-GQI) approach to realize water quality assessment. Furthermore, self-organizing maps (SOM) were used for pollutant source analysis. Finally, the study identified the synergistic migration mechanism of NH₄⁺ and Cl⁻, as well as the activation trend of As in reducing environments. The uncertainty inherent to health risk assessment was considered by developing a kernel density estimation–trapezoidal fuzzy number–Monte Carlo simulation (KDE-TFN-MCSS) model that reduced the distribution mis-specification risks and high-risk misjudgment rates associated with conventional assessment methods. The results indicated that: (1) The water chemistry type in the study area was predominantly HCO₃⁻–Ca²⁺ with moderately to weakly alkaline water, and the primary and nitrogen pollution indicators were elevated, with the average NH₄⁺ concentration significantly increasing from 0.06 mg/L in 2014 to 1.26 mg/L in 2022, exceeding the Class III limit of 1.0 mg/L. (2) The groundwater quality in the central Songnen Plain was poor in 2014, comprising predominantly Classes IV and V; by 2022, it comprised mostly Classes I–IV following a banded distribution, but declined in some central and northern areas. (3) The results of the SOM analysis revealed that the principal hardness component shifted from Ca²⁺ in 2014 to Ca²⁺–Mg²⁺ synergy in 2022. Local high values of As and NH₄⁺ were determined to reflect geogenic origin and diffuse agricultural pollution, whereas the Cl⁻ distribution reflected the influence of de-icing agents and urbanization. (4) Through drinking water exposure, a deterministic evaluation conducted using the conventional four-step method indicated that the non-carcinogenic risk (HI) in the central and eastern areas significantly exceeded the threshold (HI > 1) in 2014, with the high-HI area expanding westward to the central and western regions in 2022; local areas in the north also exhibited carcinogenic risk (CR) values exceeding the threshold (CR > 0.0001). The results of a probabilistic evaluation conducted using the proposed simulation model indicated that, except for children's CR in 2022, both HI and CR exceeded acceptable thresholds with 95% probability. Therefore, the proposed assessment method can provide a basis for improved groundwater pollution zoning and control decisions in cold regions. Full article
(This article belongs to the Special Issue Soil and Groundwater Quality and Resources Assessment, 2nd Edition)
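A minimal sketch of the kernel-density/Monte Carlo portion of such a risk model is given below; the trapezoidal-fuzzy-number layer is omitted, and the concentration data, intake rates, and reference dose are illustrative placeholders rather than values from the paper.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)

# Hypothetical NH4+ concentrations from monitoring wells (mg/L); stand-in data.
conc_obs = np.array([0.12, 0.35, 0.80, 1.10, 1.26, 0.55, 0.90, 2.10, 0.40, 1.60])

# Step 1: kernel density estimate of the concentration distribution,
# instead of forcing a parametric (e.g., lognormal) fit.
kde = gaussian_kde(conc_obs)

# Step 2: Monte Carlo sampling of exposure factors (illustrative values only).
n = 100_000
C = kde.resample(n, seed=1).clip(min=0).ravel()   # concentration, mg/L
IR = rng.normal(1.5, 0.3, n).clip(0.5, 3.0)       # intake rate, L/day
EF, ED = 365, 30                                  # exposure frequency (d/yr) and duration (yr)
BW = rng.normal(60, 10, n).clip(30, 120)          # body weight, kg
AT = ED * 365                                     # averaging time, days
RfD = 0.97                                        # reference dose, mg/(kg*day), assumed

# Step 3: non-carcinogenic hazard index via the standard four-step formula.
HI = C * IR * EF * ED / (BW * AT * RfD)

print(f"P(HI > 1) = {np.mean(HI > 1):.3f}")
print(f"95th percentile HI = {np.percentile(HI, 95):.2f}")
```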

32 pages, 59314 KB  
Article
Tail-Calibrated Transformer Autoencoding with Prototype-Guided Mining for Open-World Object Detection
by Muhammad Ali Iqbal, Yeo-Chan Yoon and Soo Kyun Kim
Appl. Sci. 2025, 15(20), 10918; https://doi.org/10.3390/app152010918 - 11 Oct 2025
Abstract
Open-world object detection (OWOD) aims to build detectors that can recognize known categories while simultaneously identifying unknown objects and incrementally learning novel classes. Despite recent advances, existing OWOD approaches still struggle with two critical challenges: the severe bias toward head classes in long-tailed data distributions and the misclassification of unknown objects as background. To address these issues, we introduce TAPM (Tail-Calibrated Transformer Autoencoding with Prototype-Guided Mining), a novel framework that explicitly enhances tail-class representation and robustly reveals unknown objects. TAPM integrates three core innovations: (1) a transformer-based autoencoder that reconstructs region features to calibrate embeddings for rare categories, mitigating the dominance of frequent classes; (2) a prototype-guided mining strategy that leverages class prototypes to localize both overlooked tail instances and candidate unknowns; and (3) an uncertainty-aware soft-labeling mechanism that assigns probabilistic supervision to pseudo-unknowns, reducing noise in incremental learning. Extensive experiments on the MS-COCO and LVIS benchmarks demonstrate that TAPM significantly improves unknown-object recall while maintaining strong known-class accuracy, achieving state-of-the-art performance across both the superclass-separated (S-OWODB) and superclass-mixed (M-OWODB) benchmarks. In particular, TAPM achieves a +20.4-point gain in U-Recall over the strong PROB baseline, underscoring its effectiveness in detecting novel objects without sacrificing mean Average Precision (mAP). Furthermore, TAPM achieves better generalization on cross-dataset evaluations, highlighting its robustness in diverse open-world scenarios. Full article

21 pages, 1618 KB  
Article
Towards Realistic Virtual Power Plant Operation: Behavioral Uncertainty Modeling and Robust Dispatch Through Prospect Theory and Social Network-Driven Scenario Design
by Yi Lu, Ziteng Liu, Shanna Luo, Jianli Zhao, Changbin Hu and Kun Shi
Sustainability 2025, 17(19), 8736; https://doi.org/10.3390/su17198736 - 29 Sep 2025
Abstract
The growing complexity of distribution-level virtual power plants (VPPs) demands a rethinking of how flexible demand is modeled, aggregated, and dispatched under uncertainty. Traditional optimization frameworks often rely on deterministic or homogeneous assumptions about end-user behavior, thereby overestimating controllability and underestimating risk. In this paper, we propose a behavior-aware, two-stage stochastic dispatch framework for VPPs that explicitly models heterogeneous user participation via integrated behavioral economics and social interaction structures. At the behavioral layer, user responses to demand response (DR) incentives are captured using a Prospect Theory-based utility function, parameterized by loss aversion, nonlinear gain perception, and subjective probability weighting. In parallel, social influence dynamics are modeled using a peer interaction network that modulates individual participation probabilities through local contagion effects. These two mechanisms are combined to produce a high-dimensional, time-varying participation map across user classes, including residential, commercial, and industrial actors. This probabilistic behavioral landscape is embedded within a scenario-based two-stage stochastic optimization model. The first stage determines pre-committed dispatch quantities across flexible loads, electric vehicles, and distributed storage systems, while the second stage executes real-time recourse based on realized participation trajectories. The dispatch model includes physical constraints (e.g., energy balance, network limits), behavioral fatigue, and the intertemporal coupling of flexible resources. A scenario reduction technique and the Conditional Value-at-Risk (CVaR) metric are used to ensure computational tractability and robustness against extreme behavior deviations. Full article
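The prospect-theoretic behavioral layer described above can be illustrated with the standard Tversky–Kahneman value and probability-weighting functions. The sketch below is not the paper's parameterization: the curvature, loss-aversion, and weighting parameters are the textbook 1992 estimates, and the demand-response payoff numbers are invented.

```python
import numpy as np

def prospect_value(x, alpha=0.88, beta=0.88, lam=2.25):
    # Tversky-Kahneman value function: concave for gains, convex and
    # loss-averse (scaled by lam) for losses relative to a reference point.
    x = np.asarray(x, dtype=float)
    return np.where(x >= 0, np.abs(x) ** alpha, -lam * np.abs(x) ** beta)

def probability_weight(p, gamma=0.61):
    # Inverse-S-shaped weighting: small probabilities are overweighted,
    # large probabilities underweighted.
    return p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1 / gamma)

def dr_participation_utility(gain, effort_cost, p_success):
    # Subjective utility of a demand-response incentive: receive `gain` with
    # perceived probability `p_success`, otherwise incur `effort_cost`.
    return (probability_weight(p_success) * prospect_value(gain)
            + probability_weight(1 - p_success) * prospect_value(-effort_cost))

# A loss-averse household may decline an offer that looks attractive in expectation.
print(float(dr_participation_utility(gain=10.0, effort_cost=4.0, p_success=0.7)))
```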

20 pages, 12343 KB  
Article
Geographical Origin Identification of Dendrobium officinale Using Variational Inference-Enhanced Deep Learning
by Changqing Liu, Fan Cao, Yifeng Diao, Yan He and Shuting Cai
Foods 2025, 14(19), 3361; https://doi.org/10.3390/foods14193361 - 28 Sep 2025
Abstract
Dendrobium officinale is an important medicinal and edible plant in China, widely used in the dietary health industry and pharmaceutical field. Due to the different geographical origins and cultivation methods, the nutritional value, medicinal quality, and price of Dendrobium are significantly different, and accurate identification of the origin is crucial. Current origin identification relies on expert judgment or requires costly instruments, lacking an efficient solution. This study proposes a Variational Inference-enabled Data-Efficient Learning (VIDE) model for high-precision, non-destructive origin identification using a small number of image samples. VIDE integrates dual probabilistic networks: a prior network generating latent feature prototypes and a posterior network employing variational inference to model feature distributions via mean and variance estimators. This synergistic design enhances intra-class feature diversity while maximizing inter-class separability, achieving robust classification with limited samples. Experiments on a self-built dataset of Dendrobium officinale samples from six major Chinese regions show the VIDE model achieves 91.51% precision, 92.63% recall, and 92.07% F1-score, outperforming state-of-the-art models. The study offers a practical solution for geographical origin identification and advances intelligent quality assessment in Dendrobium officinale. Full article
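As a rough illustration of the "mean and variance estimator" idea behind such dual probabilistic networks, the snippet below samples latent features via the reparameterization trick and scores them against a class prototype with a diagonal-Gaussian KL term. The dimensions, statistics, and class name are hypothetical and are not taken from the VIDE model.

```python
import numpy as np

rng = np.random.default_rng(0)

def reparameterize(mu, log_var, n_samples=10):
    # Draw latent features z ~ N(mu, diag(exp(log_var))) via the reparameterization
    # trick, so intra-class diversity comes from sampling around a class prototype.
    eps = rng.standard_normal((n_samples,) + mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

def kl_diag_gaussians(mu_q, log_var_q, mu_p, log_var_p):
    # KL( N(mu_q, var_q) || N(mu_p, var_p) ) for diagonal Gaussians, summed over dims.
    var_q, var_p = np.exp(log_var_q), np.exp(log_var_p)
    return 0.5 * np.sum(log_var_p - log_var_q + (var_q + (mu_q - mu_p) ** 2) / var_p - 1.0)

# Hypothetical 8-D statistics from a posterior network (one image) and a
# prior-network prototype for one origin class; all numbers are made up.
mu_post, log_var_post = rng.normal(0, 1, 8), np.full(8, -1.0)
mu_prior, log_var_prior = rng.normal(0, 1, 8), np.zeros(8)

z = reparameterize(mu_post, log_var_post)
print("sampled latent features:", z.shape)
print(f"KL(posterior || prototype) = "
      f"{kl_diag_gaussians(mu_post, log_var_post, mu_prior, log_var_prior):.3f}")
```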

8 pages, 1340 KB  
Proceeding Paper
Trans-Dimensional Diffusive Nested Sampling for Metabolic Network Inference
by Johann Fredrik Jadebeck, Wolfgang Wiechert and Katharina Nöh
Phys. Sci. Forum 2025, 12(1), 5; https://doi.org/10.3390/psf2025012005 - 24 Sep 2025
Abstract
Bayesian analysis is particularly useful for inferring models and their parameters given data. This is a common task in metabolic modeling, where models of varying complexity are used to interpret data. Nested sampling is a class of probabilistic inference algorithms that are particularly effective for estimating evidence and sampling the parameter posterior probability distributions. However, the practicality of nested sampling for metabolic network inference has yet to be studied. In this technical report, we explore the amalgamation of nested sampling, specifically diffusive nested sampling, with reversible jump Markov chain Monte Carlo. We apply the algorithm to two synthetic problems from the field of metabolic flux analysis. We present run times and share insights into hyperparameter choices, providing a useful point of reference for future applications of nested sampling to metabolic flux problems. Full article
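For readers unfamiliar with the evidence estimate that nested sampling targets, here is a bare-bones, fixed-dimension nested sampling loop on a toy two-parameter problem. It uses naive rejection replacement rather than the diffusive/reversible-jump machinery of the paper, and all settings are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy problem: uniform prior on the box [-5, 5]^2, Gaussian likelihood at the origin.
def log_like(theta):
    return -0.5 * np.sum(theta ** 2) - np.log(2 * np.pi)

def sample_prior():
    return rng.uniform(-5, 5, size=2)

n_live, n_iter = 200, 1000
live = [sample_prior() for _ in range(n_live)]
log_L = np.array([log_like(t) for t in live])

log_Z, log_X_prev = -np.inf, 0.0
for i in range(1, n_iter + 1):
    worst = int(np.argmin(log_L))
    log_X = -i / n_live                                  # expected log prior volume left
    log_w = np.log(np.exp(log_X_prev) - np.exp(log_X))   # shell width
    log_Z = np.logaddexp(log_Z, log_L[worst] + log_w)    # accumulate evidence
    log_X_prev = log_X
    while True:                                          # replace the worst live point by
        t = sample_prior()                               # a prior draw above the threshold
        if log_like(t) > log_L[worst]:
            live[worst], log_L[worst] = t, log_like(t)
            break

# Add the contribution of the remaining live points; the analytic evidence is
# (1/100) * (integral of the unit Gaussian over the plane) which is about 1/100.
log_Z = np.logaddexp(log_Z, np.log(np.mean(np.exp(log_L))) + log_X_prev)
print(f"nested sampling log Z = {log_Z:.3f}   (analytic approx. {np.log(1 / 100):.3f})")
```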

40 pages, 1079 KB  
Article
Hierarchical Vector Mixtures for Electricity Day-Ahead Market Prices Scenario Generation
by Carlo Mari and Carlo Lucheroni
Mathematics 2025, 13(17), 2852; https://doi.org/10.3390/math13172852 - 4 Sep 2025
Abstract
In this paper, a class of fully probabilistic time series models based on Gaussian Vector Mixtures (VMs), i.e., on linear combinations of multivariate Gaussian distributions, is proposed to model electricity Day Ahead Market (DAM) hourly prices and to generate consistent related DAM prices dynamic scenarios. These models, based on latent variables, intrinsically allow for organizing DAM data in hierarchically organized clusters, and for recreating the delicate balance of price spikes and baseline price dynamics present in the DAM data. The latent variables and the parameters of these models have a simple and clear interpretation in terms of market phenomenology, like market conditions, spikes and night/day seasonality. In the machine learning community, different to current deep learning models, VMs and the other members of the class discussed in the paper could be seen as just ‘oldish’ probabilistic models. In this paper it is shown, on the contrary, that they are still worthy models, excellent at extracting relevant features from data, and directly interpretable as a subset of the regime switching autoregressions still currently largely used in the econometric community. In addition, it is shown how they can include mixtures of mixtures, thus allowing for the unsupervised detection of hierarchical structures in the data. It is also pointed out that, as such, VMs cannot fully accommodate the autocorrelation information intrinsic to DAM data time series, hence extensions of VMs are needed. The paper is thus divided into two parts. In the first part, VMs are estimated and used to model daily vector sequences of 24 prices, thus assessing their scenario generation capability. In this part, it is shown that VMs can very well preserve and encode infra-day dynamic structure like autocorrelation up to 24 lags, but also that they cannot handle inter-day structure. In the second part, these mixtures are dynamically extended to incorporate dynamic features typical of hidden Markov models, thus becoming Vector Hidden Markov Mixtures (VHMMs) of Gaussian distributions, endowed with daily latent dynamics. VHMMs are thus shown to be very much able to model both infra-day and inter-day phenomenology, hence able to include autocorrelation beyond 24 lags. Building on the VM discussion on latent variables and mixtures of mixtures, these models are also shown to possess enough internal structure to exploit and carry forward hierarchical clustering also in their dynamics, their small number of parameters still preserving a simple and clear interpretation in terms of market phenomenology and in terms of standard econometrics. All these properties are thus also available to their regime switching counterparts from econometrics. In practice, these very simple models, bridging machine learning and econometrics, are able to learn latent price regimes from historical data in an unsupervised fashion, enabling the generation of realistic market scenarios while maintaining straightforward econometrics-like explainability. Full article
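A minimal sketch of the static VM building block (not the VHMM extension) follows, using scikit-learn's GaussianMixture on synthetic daily price vectors; the component count, spike mechanism, and price values are all invented for illustration.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Stand-in for a DAM price history: one 24-dimensional vector per day (hourly prices),
# here synthetic data with a baseline regime and an evening-spike regime.
baseline = 50 + 10 * np.sin(np.linspace(0, 2 * np.pi, 24)) + rng.normal(0, 3, (300, 24))
spikes = baseline[:60].copy()
spikes[:, 17:21] += rng.gamma(5, 20, (60, 4))            # evening price spikes
prices = np.vstack([baseline, spikes])                    # shape (360, 24)

# Fit a vector mixture: each component is a full-covariance 24-D Gaussian, so each
# latent component can be read as a market regime/cluster.
vm = GaussianMixture(n_components=3, covariance_type="full", random_state=0).fit(prices)

# Scenario generation: draw synthetic "days" from the fitted mixture.
scenarios, component = vm.sample(n_samples=1000)
print("component weights:", np.round(vm.weights_, 3))
print("generated scenarios:", scenarios.shape,
      "| mean hour-19 price:", round(scenarios[:, 19].mean(), 1))
```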

33 pages, 5703 KB  
Article
Evaluating Sampling Strategies for Characterizing Energy Demand in Regions of Colombia Without AMI Infrastructure
by Oscar Alberto Bustos, Julián David Osorio, Javier Rosero-García, Cristian Camilo Marín-Cano and Luis Alirio Bolaños
Appl. Sci. 2025, 15(17), 9588; https://doi.org/10.3390/app15179588 - 30 Aug 2025
Abstract
This study presents and evaluates three sampling strategies to characterize electricity demand in regions of Colombia with limited metering infrastructure. These areas lack Advanced Metering Infrastructure (AMI), relying instead on traditional monthly consumption records. The objective of the research is to obtain user samples that are representative of the original population and logistically efficient, in order to support energy planning and decision-making. The analysis draws on five years of historical data from 2020 to 2024. It includes monthly energy consumption, geographic coordinates, customer classification, and population type, covering over 500,000 users across four subregions of operation determined by the region grid operator: North, South, Center, and East. The proposed methodologies are based on Shannon entropy, consumption-based probabilistic sampling, and Kullback–Leibler divergence minimization. Each method is assessed for its ability to capture demand variability, ensure representativeness, and optimize field deployment. Representativeness is evaluated by comparing the differences in class proportions between the sample and the original population, complemented by the Pearson correlation coefficient between their distributions. Results indicate that entropy-based sampling excels in logistical simplicity and preserves categorical diversity, while KL divergence offers the best statistical fit to population characteristics. The findings demonstrate how combining information theory and statistical optimization enables flexible, scalable sampling solutions for demand characterization in under-instrumented electricity grids. Full article
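The representativeness check described above (class-proportion differences plus KL divergence and Pearson correlation) can be sketched as follows; the consumption-class bins, population proportions, and sample sizes are placeholders, not the operator's actual procedure.

```python
import numpy as np
from scipy.stats import entropy, pearsonr

rng = np.random.default_rng(42)

# Hypothetical population of users labelled by consumption class (kWh/month bins).
classes = np.array(["<100", "100-300", "300-600", ">600"])
pop_labels = rng.choice(classes, size=500_000, p=[0.45, 0.35, 0.15, 0.05])

def class_proportions(labels):
    return np.array([(labels == c).mean() for c in classes])

p_pop = class_proportions(pop_labels)

# Draw several candidate samples and keep the one whose class proportions minimize
# the KL divergence to the population (scipy's entropy(p, q) is D_KL(p || q)).
best = None
for _ in range(20):
    sample = rng.choice(pop_labels, size=2_000, replace=False)
    p_s = class_proportions(sample)
    d = entropy(p_s, p_pop)
    if best is None or d < best[0]:
        best = (d, p_s)

print(f"min KL divergence: {best[0]:.5f}")
print(f"Pearson r between sample and population proportions: {pearsonr(best[1], p_pop)[0]:.4f}")
```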

16 pages, 662 KB  
Article
Augmenting Naïve Bayes Classifiers with k-Tree Topology
by Fereshteh R. Dastjerdi and Liming Cai
Mathematics 2025, 13(13), 2185; https://doi.org/10.3390/math13132185 - 4 Jul 2025
Abstract
The Bayesian network is a directed, acyclic graphical model that can offer a structured description for probabilistic dependencies among random variables. As powerful tools for classification tasks, Bayesian classifiers often require computing joint probability distributions, which can be computationally intractable due to potential full dependencies among feature variables. On the other hand, Naïve Bayes, which presumes zero dependencies among features, trades accuracy for efficiency and often comes with underperformance. As a result, non-zero dependency structures, such as trees, are often used as more feasible probabilistic graph approximations; in particular, Tree Augmented Naïve Bayes (TAN) has been demonstrated to outperform Naïve Bayes and has become a popular choice. For applications where a variable is strongly influenced by multiple other features, TAN has been further extended to the k-dependency Bayesian classifier (KDB), where one feature can depend on up to k other features (for a given k ≥ 2). In such cases, however, the selection of the k parent features for each variable is often made through heuristic search methods (such as sorting), which do not guarantee an optimal approximation of network topology. In this paper, the novel notion of k-tree Augmented Naïve Bayes (k-TAN) is introduced to augment Naïve Bayesian classifiers with k-tree topology as an approximation of Bayesian networks. It is proved that, under the Kullback–Leibler divergence measurement, k-tree topology approximation of Bayesian classifiers loses the minimum information with the topology of a maximum spanning k-tree, where the edge weights of the graph are mutual information between random variables conditional upon the class label. In addition, while in general finding a maximum spanning k-tree is NP-hard for fixed k ≥ 2, this work shows that the approximation problem can be solved in time O(n^(k+1)) if the spanning k-tree is also required to retain a given Hamiltonian path in the graph. Therefore, this algorithm can be employed to ensure efficient approximation of Bayesian networks by k-tree augmented Naïve Bayesian classifiers with guaranteed minimum loss of information. Full article
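To make the "mutual information conditional upon the class label" edge weights concrete, here is the classical k = 1 case (TAN via a maximum-weight spanning tree); the general maximum spanning k-tree with the Hamiltonian-path constraint that the paper solves in O(n^(k+1)) time is not implemented here, and the toy data are synthetic.

```python
import numpy as np
from itertools import combinations
from sklearn.metrics import mutual_info_score

rng = np.random.default_rng(1)

# Toy discrete dataset: binary class label y and four categorical features.
# x1 depends on x0 even after conditioning on y, so the tree should link them.
n = 5000
y = rng.integers(0, 2, n)
x0 = (y + rng.integers(0, 2, n)) % 3
x1 = (x0 + rng.integers(0, 2, n)) % 3
x2 = rng.integers(0, 3, n)
x3 = (y + rng.integers(0, 3, n)) % 4
X = np.column_stack([x0, x1, x2, x3])

def cond_mutual_info(a, b, y):
    # I(A; B | Y) = sum over classes c of P(Y = c) * I(A; B | Y = c)
    return sum((y == c).mean() * mutual_info_score(a[y == c], b[y == c])
               for c in np.unique(y))

d = X.shape[1]
W = np.zeros((d, d))
for i, j in combinations(range(d), 2):
    W[i, j] = W[j, i] = cond_mutual_info(X[:, i], X[:, j], y)

def max_spanning_tree(W):
    # Prim's algorithm for a maximum-weight spanning tree (the 1-tree case).
    in_tree, edges = {0}, []
    while len(in_tree) < W.shape[0]:
        i, j = max(((a, b) for a in in_tree for b in range(W.shape[0]) if b not in in_tree),
                   key=lambda e: W[e])
        in_tree.add(j)
        edges.append((i, j))
    return edges

print("conditional MI weights:\n", np.round(W, 4))
print("TAN augmenting edges:", max_spanning_tree(W))
```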

33 pages, 12918 KB  
Article
Time-Dependent Fragility Functions and Post-Earthquake Residual Seismic Performance for Existing Steel Frame Columns in Offshore Atmospheric Environment
by Xiaohui Zhang, Xuran Zhao, Shansuo Zheng and Qian Yang
Buildings 2025, 15(13), 2330; https://doi.org/10.3390/buildings15132330 - 2 Jul 2025
Abstract
This paper evaluates the time-dependent fragility and post-earthquake residual seismic performance of existing steel frame columns in offshore atmospheric environments. Based on experimental research, the seismic failure mechanism and deterioration laws of the seismic behavior of corroded steel frame columns were revealed. A finite element analysis (FEA) method for steel frame columns, which considers corrosion damage and ductile metal damage criteria, is developed and validated. A parametric analysis in terms of service age and design parameters is conducted. Considering the impact of environmental erosion and aging, a classification criterion for damage states for existing steel frame columns is proposed, and the theoretical characterization of each damage state is provided based on the moment-rotation skeleton curves. Based on the test and numerical analysis results, probability distributions of the fragility function parameters (median and logarithmic standard deviation) are constructed. The evolution laws of the fragility parameters with increasing service age under each damage state are determined, and a time-dependent fragility model for existing steel frame columns in offshore atmospheric environments is presented through regression analysis. At a drift ratio of 4%, the probability of complete damage to columns with 40, 50, 60, and 70-year service ages increased by 18.1%, 45.3%, 79.2%, and 124.5%, respectively, compared with columns within a 30-year service age. Based on the developed FEA models and the damage class of existing columns, the influence of characteristic variables (service age, design parameters, and damage level) on the residual seismic capacity of earthquake-damaged columns, namely the seismic resistance that can be maintained even after suffering earthquake damage, is revealed. Using the particle swarm optimization back-propagation neural network (PSO-BPNN) model, nonlinear mapping relationships between the characteristic variables and residual seismic capacity are constructed, thereby proposing a residual seismic performance evaluation model for existing multi-aged steel frame columns in an offshore atmospheric environment. Combined with the damage probability matrix of the time-dependent fragility, the expected values of the residual seismic capacity of existing multi-aged steel frame columns at a given drift ratio are obtained directly in a probabilistic sense. The results of this study lay the foundation for resistance to sequential earthquakes and post-earthquake functional recovery and reconstruction, and provide theoretical support for the full life-cycle seismic resilience assessment of existing steel structures in earthquake-prone areas. Full article
(This article belongs to the Section Building Structures)
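The lognormal fragility form underlying such time-dependent fragility functions, P(damage state exceeded | drift) = Φ(ln(θ/θ_m)/β), is easy to state in code; the medians and dispersions below are placeholders for two service ages, not the paper's fitted parameters.

```python
import numpy as np
from scipy.stats import norm

def fragility(drift, median, beta):
    # Lognormal fragility: probability of exceeding a damage state at a given drift ratio.
    return norm.cdf(np.log(np.asarray(drift) / median) / beta)

# Illustrative "complete damage" parameters for two service ages (placeholder values).
params = {"30-year service age": (0.055, 0.40), "70-year service age": (0.042, 0.45)}
for age, (median, beta) in params.items():
    p = float(fragility(0.04, median, beta))   # 4% drift ratio, as in the abstract
    print(f"{age}: P(complete damage | 4% drift) = {p:.2f}")
```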

33 pages, 5060 KB  
Article
The Extreme Value Support Measure Machine for Group Anomaly Detection
by Lixuan An, Bernard De Baets and Stijn Luca
Mathematics 2025, 13(11), 1813; https://doi.org/10.3390/math13111813 - 29 May 2025
Abstract
Group anomaly detection is a subfield of pattern recognition that aims at detecting anomalous groups rather than individual anomalous points. However, existing approaches mainly target the unusual aggregate of points in high-density regions. In this way, unusual group behavior with a number of points located in low-density regions is not fully detected. In this paper, we propose a systematic approach based on extreme value theory (EVT), a field of statistics adept at modeling the tails of a distribution where data are sparse, and one-class support measure machines (OCSMMs) to quantify anomalous group behavior comprehensively. First, by applying EVT to a point process model, we construct an analytical model describing the likelihood of an aggregate within a group with respect to low-density regions, aimed at capturing anomalous group behavior in such regions. This model is then combined with a calibrated OCSMM, which provides probabilistic outputs to characterize anomalous group behavior in high-density regions, enabling improved assessment of overall anomalous group behavior. Extensive experiments on simulated and real-world data demonstrate that our method outperforms existing group anomaly detectors across diverse scenarios, showing its effectiveness in quantifying and interpreting various types of anomalous group behavior. Full article
(This article belongs to the Section D1: Probability and Statistics)
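As a small illustration of the extreme value theory ingredient only (not the paper's combined EVT–OCSMM model), the snippet below fits a generalized Pareto distribution to threshold exceedances of a synthetic score and converts the fit into a tail probability; the score distribution and threshold quantile are assumptions.

```python
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(7)

# Synthetic "extremeness" scores for individual points (e.g., negative log-density);
# large scores correspond to points in low-density regions.
scores = rng.lognormal(mean=0.0, sigma=1.0, size=10_000)

u = np.quantile(scores, 0.95)                      # peaks-over-threshold cutoff
excess = scores[scores > u] - u
shape, _, scale = genpareto.fit(excess, floc=0)    # GPD fit to the exceedances

def tail_prob(x):
    # P(score > x) for x >= u, via the fitted GPD tail model.
    return (scores > u).mean() * genpareto.sf(x - u, shape, loc=0, scale=scale)

print(f"P(score > {2 * u:.2f}) = {tail_prob(2 * u):.4f}"
      f"  (empirical: {(scores > 2 * u).mean():.4f})")
```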

18 pages, 4639 KB  
Article
Using Hybrid Feature and Classifier Fusion for an Asynchronous Brain–Computer Interface Framework Based on Steady-State Motion Visual Evoked Potentials
by Bo Hu, Jun Xie, Huanqing Zhang, Junjie Liu and Hu Wang
Appl. Sci. 2025, 15(11), 6010; https://doi.org/10.3390/app15116010 - 27 May 2025
Abstract
This study proposes an asynchronous brain–computer interface (BCI) framework based on steady-state motion visual evoked potentials (SSMVEPs), designed to enhance the accuracy and robustness of control state recognition. The method integrates filter bank common spatial patterns (FBCSPs) and filter bank canonical correlation analysis (FBCCA) to extract complementary spatial and frequency domain features from EEG signals. These multimodal features are then fused and input into a dual-classifier structure consisting of a support vector machine (SVM) and extreme gradient boosting (XGBoost). A weighted fusion strategy is applied to combine the probabilistic outputs of both classifiers, allowing the system to leverage their respective strengths. Experimental results demonstrate that the fused FB(CSP + CCA)-(SVM + XGBoost) model achieves superior performance in distinguishing intentional control (IC) and non-control (NC) states compared to models using a single feature type or classifier. Furthermore, the visualization of feature distributions using UMAP shows improved inter-class separability when combining FBCSP and FBCCA features. These findings confirm the effectiveness of both feature-level and classifier-level fusion in asynchronous BCI systems. The proposed approach offers a promising and practical solution for developing more reliable and user-adaptive BCI applications, particularly in real-world environments requiring flexible control without external cues. Full article
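The weighted fusion of two classifiers' probabilistic outputs can be sketched in a few lines; here scikit-learn's GradientBoostingClassifier stands in for XGBoost, the features are synthetic rather than FBCSP/FBCCA features, and the fusion weight is arbitrary rather than tuned as in the paper.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Stand-in for fused FBCSP + FBCCA feature vectors (synthetic two-class data).
X, y = make_classification(n_samples=600, n_features=20, n_informative=8, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

svm = SVC(probability=True, random_state=0).fit(X_tr, y_tr)
gbdt = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)   # stand-in for XGBoost

# Weighted fusion of the two classifiers' probabilistic outputs.
w = 0.6  # fusion weight, e.g. chosen on a validation set
p_fused = w * svm.predict_proba(X_te) + (1 - w) * gbdt.predict_proba(X_te)
y_pred = p_fused.argmax(axis=1)
print(f"fused accuracy on the stand-in IC/NC task: {(y_pred == y_te).mean():.3f}")
```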

16 pages, 297 KB  
Article
On the t-Transformation of Free Convolution
by Shokrya S. Alshqaq, Ohud A. Alqasem and Raouf Fakhfakh
Mathematics 2025, 13(10), 1651; https://doi.org/10.3390/math13101651 - 18 May 2025
Abstract
The study of the stability of measure families under measure transformations, as well as the accompanying limit theorems, is motivated by both fundamental and applied probability theory and dynamical systems. Stability analysis tries to uncover invariant or quasi-invariant measures that describe the long-term behavior of stochastic or deterministic systems. Limit theorems, on the other hand, characterize the asymptotic distributional behavior of successively changed measures, which frequently indicate convergence to fixed points or attractors. Together, these studies advance our knowledge of measure development, aid in the categorization of dynamical behavior, and give tools for modeling complicated systems in mathematics and applied sciences. In this paper, the notion of the t-transformation of a measure and convolution is studied from the perspective of families and their relative variance functions (VFs). Using analytical and algebraic approaches, we aim to develop a deeper understanding of how the t-transformation shapes the behavior of probability measures, with possible implications in current probabilistic models. Based on the VF concept, we show that the free Meixner family (FMF) of probability measures (the free equivalent of the Letac Mora class) remains invariant when t-transformation is applied. We also use the VFs to show some new limiting theorems concerning t-deformed free convolution and the combination of free and Boolean additive convolution. Full article
(This article belongs to the Section D1: Probability and Statistics)
29 pages, 35856 KB  
Article
Symbol Recognition Method for Railway Catenary Layout Drawings Based on Deep Learning
by Qi Sun, Mengxin Zhu, Minzhi Li, Gaoju Li and Weizhi Deng
Symmetry 2025, 17(5), 674; https://doi.org/10.3390/sym17050674 - 28 Apr 2025
Abstract
Railway catenary layout drawings (RCLDs) have the characteristics of upper and lower symmetry, a large drawing size, small symbol sizes, high similarity among target symbols, and an uneven distribution of symbol categories. These factors make the symbol detection task more complex and challenging. To address the aforementioned challenges, this paper proposes three enhancements to YOLOv8n to improve symbol detection performance and integrates an improved denoising diffusion probabilistic model (IDDPM) to mitigate the imbalance in symbol category distribution. First, multi-scale dilated attention (MSDA) is introduced in the Neck part to enhance the model's perception of the global context in complex RCLD scenes, so that it can more effectively capture symbol information distributed across different scales and backgrounds. Second, receptive field attention convolution (RFAConv) is used in the detection head to replace the standard convolution, improving the ability to focus on the target symbols in RCLDs and effectively alleviating occlusion interference between symbols. Finally, the dynamic upsampler (DySample) is used to enhance the clarity and positioning accuracy of the edge area of small target symbols in RCLDs and improve the detection of small targets. The above design provides targeted optimizations for the problems of symbol and background interference, character overlap, and symbol category imbalance in complex RCLD scenes, effectively improving the overall detection performance of the model. Compared with the baseline YOLOv8n model, the improved YOLOv8n achieves increases of 2.9% in F1, 1.9% in mAP@0.5, and 1.7% in mAP@0.5:0.95. With the introduction of synthetic data, the recognition of minority-class symbols is further enhanced, leading to additional gains of 4%, 3.8%, and 14% in F1, mAP@0.5, and mAP@0.5:0.95, respectively. These results demonstrate the effectiveness and superiority of the proposed method in improving detection performance. Full article

17 pages, 508 KB  
Article
Probabilistic Analysis of Distributed Fractional-Order Stochastic Systems Driven by Fractional Brownian Motion: Existence, Uniqueness, and Transportation Inequalities
by Guangyue Xia, Liping Xu and Zhi Li
Symmetry 2025, 17(5), 650; https://doi.org/10.3390/sym17050650 - 25 Apr 2025
Abstract
This paper investigates a class of distributed fractional-order stochastic differential equations driven by fractional Brownian motion with Hurst parameter 1/2 < H < 1. By employing the Picard iteration method, we rigorously prove the existence and uniqueness of solutions under Lipschitz conditions. Furthermore, leveraging the Girsanov transformation argument within the L² metric framework, we derive quadratic transportation inequalities for the law of the strong solution to the considered equations. These results provide a deeper understanding of the regularity and probabilistic properties of the solutions in this framework. Full article
(This article belongs to the Section Mathematics)
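For context on the driving noise, the sketch below draws an exact fractional Brownian motion path with 1/2 < H < 1 by Cholesky factorization of its covariance; this is only the standard fBm simulation recipe, not anything specific to the distributed fractional-order equations studied in the paper, and grid size and H are illustrative.

```python
import numpy as np

def fbm_path(H, n, T=1.0, seed=0):
    # Exact fBm sample via Cholesky factorization of the covariance
    # Cov(B_H(s), B_H(t)) = 0.5 * (s^(2H) + t^(2H) - |t - s|^(2H)).
    rng = np.random.default_rng(seed)
    t = np.linspace(T / n, T, n)
    s, u = np.meshgrid(t, t)
    cov = 0.5 * (s ** (2 * H) + u ** (2 * H) - np.abs(s - u) ** (2 * H))
    path = np.linalg.cholesky(cov) @ rng.standard_normal(n)
    return np.concatenate([[0.0], t]), np.concatenate([[0.0], path])

t, B = fbm_path(H=0.75, n=400)   # long-range dependent driver, H in (1/2, 1)
print(f"B_H(T) = {B[-1]:.3f}; theoretical Var(B_H(1)) = 1")
```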

17 pages, 24593 KB  
Article
Enhanced PolSAR Image Segmentation with Polarization Channel Fusion and Diffusion-Based Probability Modeling
by Hao Chen, Yuzhuo Hou, Xiaoxiao Fang and Chu He
Electronics 2025, 14(4), 791; https://doi.org/10.3390/electronics14040791 - 18 Feb 2025
Abstract
With the advancement of polarimetric synthetic aperture radar (PolSAR) imaging technology and the growing demand for image interpretation, extracting meaningful land cover information from PolSAR images has become a key research focus. To address the segmentation challenge, we propose an innovative method. First, features from co-polarization and cross-polarization channels are separately used as dual inputs, and a cross-attention mechanism effectively fuses these features to capture correlations between different polarization information. Second, a diffusion framework is employed to jointly model target features and class probabilities, aiming to improve segmentation accuracy by learning and fitting the probabilistic distribution of target labels. Finally, experimental results demonstrate that the proposed method achieves superior performance in PolSAR image segmentation, effectively managing complex polarization relationships while offering robustness and broad application potential. Full article