Search Results (565)

Search Parameters:
Keywords = non-Gaussian distributions

18 pages, 622 KiB  
Article
Distributed Diffusion Multi-Distribution Filter with IMM for Heavy-Tailed Noise
by Guannan Chang, Changwu Jiang, Wenxing Fu, Tao Cui and Peng Dong
Signals 2025, 6(3), 37; https://doi.org/10.3390/signals6030037 - 1 Aug 2025
Abstract
With the diversification of space applications, the tracking of maneuvering targets has gradually gained attention. Issues such as their wide range of movement and observation outliers caused by human operation merit in-depth discussion. This paper presents a novel distributed diffusion multi-noise Interacting Multiple Model (IMM) filter for maneuvering target tracking in heavy-tailed noise. The proposed approach leverages parallel Gaussian and Student-t filters to enhance robustness against non-Gaussian process and measurement noise. This hybrid filter is implemented as a node within a distributed network, where the diffusion algorithm drives the global state toward consensus asymptotically as filtering progresses. Furthermore, fusing multiple motion models within the IMM algorithm enables robust tracking of maneuvering targets across the distributed network and, in contrast to previous studies, handles process outliers caused by maneuvers. Simulation results demonstrate the effectiveness of the proposed filter in tracking maneuvering targets. Full article
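
As a minimal sketch of the mechanism the abstract describes (not the authors' code), the snippet below runs one IMM model-probability update in which the two "models" are filters with Gaussian and Student-t measurement-noise assumptions; all numbers are illustrative.

```python
# Illustrative IMM cycle: mixing step plus model-probability update, where the
# heavy-tailed (Student-t) model gains weight when the innovation is an outlier.
import numpy as np
from scipy.stats import norm, t

P_trans = np.array([[0.95, 0.05],     # hypothetical model transition matrix
                    [0.05, 0.95]])
mu = np.array([0.5, 0.5])             # prior model probabilities

c = P_trans.T @ mu                    # mixing: predicted model probabilities

residual, S = 3.0, 1.0                # innovation and its std (illustrative)
lik = np.array([
    norm.pdf(residual, scale=S),          # Gaussian measurement model
    t.pdf(residual / S, df=3) / S,        # Student-t measurement model
])

mu_new = c * lik / np.sum(c * lik)    # updated model probabilities
print(mu_new)                         # Student-t model dominates at 3 sigma
```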

14 pages, 10176 KiB  
Article
Recrystallization During Annealing of Low-Density Polyethylene Non-Woven Fabric by Melt Electrospinning
by Yueming Ren, Changjin Li, Minqiao Ren, Dali Gao, Yujing Tang, Changjiang Wu, Liqiu Chu, Qi Zhang and Shijun Zhang
Polymers 2025, 17(15), 2121; https://doi.org/10.3390/polym17152121 - 31 Jul 2025
Abstract
The effect of annealing on the microstructure and tensile properties of low-density polyethylene (LDPE) non-woven fabric produced by melt electrospinning was systematically investigated using DSC, SAXS, SEM, etc. The results showed that, above an annealing temperature of 80 °C, both the main melting point and crystallinity of LDPE decreased compared to the original sample, as did the tensile strength of the non-woven fabric. Additionally, the lamellar distribution became broader at annealing temperatures above 80 °C. The recrystallization mechanism of molten lamellae (disordered chains) in LDPE was elucidated by fitting the data using a Gaussian function. It was found that secondary crystallization, forming thicker lamellae, and spontaneous crystallization, forming thinner lamellae, occurred simultaneously at rates dependent on the annealing temperature. Secondary crystallization dominated at temperatures ≤80 °C, whereas spontaneous crystallization prevailed at temperatures above 80 °C. These findings explain the observed changes in the microstructure and tensile properties of the LDPE non-woven fabric. Furthermore, a physical model describing the microstructural evolution of the LDPE non-woven fabric during annealing was proposed based on the experimental evidence. Full article
(This article belongs to the Section Polymer Analysis and Characterization)
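
The lamellar-population analysis above rests on Gaussian peak deconvolution; as a hedged illustration (synthetic data and peak positions, not the paper's measurements), a melting endotherm can be split into two Gaussian components with a least-squares fit:

```python
# Deconvolve a synthetic DSC endotherm into two Gaussians (thicker vs. thinner
# lamellae) with scipy's curve_fit; all values are made up for illustration.
import numpy as np
from scipy.optimize import curve_fit

def two_gaussians(T, a1, mu1, s1, a2, mu2, s2):
    return (a1 * np.exp(-(T - mu1)**2 / (2 * s1**2))
            + a2 * np.exp(-(T - mu2)**2 / (2 * s2**2)))

T = np.linspace(60, 130, 300)                                  # temperature, degC
heat_flow = two_gaussians(T, 1.0, 95, 5, 0.6, 108, 4)
heat_flow += np.random.default_rng(0).normal(0, 0.02, T.size)  # measurement noise

p0 = [1, 90, 5, 0.5, 110, 5]                                   # initial guesses
popt, _ = curve_fit(two_gaussians, T, heat_flow, p0=p0)
print(popt)   # fitted amplitudes/positions separate the two lamellar populations
```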

22 pages, 5346 KiB  
Article
Numerical Study of Stud Welding Temperature Fields on Steel–Concrete Composite Bridges
by Sicong Wei, Han Su, Xu Han, Heyuan Zhou and Sen Liu
Materials 2025, 18(15), 3491; https://doi.org/10.3390/ma18153491 - 25 Jul 2025
Abstract
Non-uniform temperature fields develop during the welding of studs in steel–concrete composite bridges. Due to uneven thermal expansion and reversible solid-state phase transformations between ferrite/martensite and austenite structures within the materials, residual stresses are induced, which ultimately degrade the mechanical performance of the structure. To better understand how residual stress influences the structural behavior of steel–concrete composite bridges, accurate simulation of the spatio-temporal temperature distribution during stud welding under practical engineering conditions is critical. This study introduces a precise simulation method for temperature evolution during stud welding, in which a Gaussian heat source model was applied. The simulated results were validated against real welding temperature fields measured by infrared thermography. The maximum error between the measured and simulated peak temperatures was 5%, demonstrating good agreement between the measured and simulated temperature distributions. Sensitivity analyses on input current and plate thickness were conducted. The results showed a positive correlation between peak temperature and input current. With lower input current, flatter temperature gradients were observed in both the transverse and thickness directions of the steel plate. Additionally, plate thickness exhibited minimal influence on radial peak temperature, with a maximum observed difference of 130 °C. However, its effect on peak temperature in the thickness direction was significant, yielding a maximum difference of approximately 1000 °C. The thermal influence of group studs was also investigated in this study. The results demonstrated that welding a new stud adjacent to existing ones introduced only minor disturbances to the established temperature field. The maximum peak temperature difference before and after welding was approximately 100 °C. Full article
(This article belongs to the Section Construction and Building Materials)
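
For context on the heat-source term the abstract mentions, a commonly used Gaussian surface heat source in welding simulations has the form below; the parameter values are placeholders, not those of this study.

```python
# Gaussian surface heat source sketch: q(r) = 3Q/(pi*r0^2) * exp(-3 r^2 / r0^2).
# Integrating q over the plane recovers the arc power Q, and ~95% of the power
# is deposited within the effective radius r0.
import numpy as np

def gaussian_heat_flux(r, Q=25e3, r0=5e-3):
    """Heat flux density [W/m^2] at radial distance r [m] from the arc axis."""
    return 3.0 * Q / (np.pi * r0**2) * np.exp(-3.0 * r**2 / r0**2)

r = np.linspace(0, 10e-3, 5)
print(gaussian_heat_flux(r))
```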

18 pages, 1412 KiB  
Article
Graph-Regularized Orthogonal Non-Negative Matrix Factorization with Itakura–Saito (IS) Divergence for Fault Detection
by Yabing Liu, Juncheng Wu, Jin Zhang and Man-Fai Leung
Mathematics 2025, 13(15), 2343; https://doi.org/10.3390/math13152343 - 23 Jul 2025
Abstract
In modern industrial environments, quickly and accurately identifying faults is crucial for ensuring the smooth operation of production processes. Non-negative Matrix Factorization (NMF)-based fault detection technology has garnered attention due to its wide application in industrial process monitoring and machinery fault diagnosis. As an effective dimensionality reduction tool, NMF can decompose complex datasets into non-negative matrices with practical and physical significance, thereby extracting key features of the process. This paper presents a novel approach to fault detection in industrial processes, called Graph-Regularized Orthogonal Non-negative Matrix Factorization with Itakura–Saito Divergence (GONMF-IS). The proposed method addresses the challenges of fault detection in complex, non-Gaussian industrial environments. By using Itakura–Saito divergence, GONMF-IS effectively handles data with probabilistic distribution characteristics, improving the model’s ability to process non-Gaussian data. Additionally, graph regularization leverages the structural relationships among data points to refine the matrix factorization process, enhancing the robustness and adaptability of the algorithm. The incorporation of orthogonality constraints further enhances the independence and interpretability of the resulting factors. Through extensive experiments, the GONMF-IS method demonstrates superior performance in fault detection tasks, providing an effective and reliable tool for industrial applications. The results suggest that GONMF-IS offers significant improvements over traditional methods, offering a more robust and accurate solution for fault diagnosis in complex industrial settings. Full article
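
As a rough sketch of the building block GONMF-IS extends (not the full method, which adds graph regularization and orthogonality constraints), the standard multiplicative updates for NMF under the Itakura–Saito divergence look like this:

```python
# Plain Itakura-Saito NMF (beta-divergence with beta = 0) via multiplicative
# updates, on a synthetic non-negative matrix; illustrative only.
import numpy as np

rng = np.random.default_rng(0)
V = rng.random((20, 50)) + 1e-3                 # non-negative data matrix
k = 4
W = rng.random((20, k)) + 0.1
H = rng.random((k, 50)) + 0.1

for _ in range(200):
    WH = W @ H
    H *= (W.T @ (V / WH**2)) / (W.T @ (1.0 / WH))
    WH = W @ H
    W *= ((V / WH**2) @ H.T) / ((1.0 / WH) @ H.T)

WH = W @ H
is_div = np.sum(V / WH - np.log(V / WH) - 1)    # Itakura-Saito divergence
print(is_div)
```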

12 pages, 493 KiB  
Article
Exploring Non-Gaussianity Reduction in Quantum Channels
by Micael Andrade Dias and Francisco Marcos de Assis
Entropy 2025, 27(7), 768; https://doi.org/10.3390/e27070768 - 20 Jul 2025
Abstract
The quantum relative entropy between a quantum state and its Gaussian equivalent quantifies the system's non-Gaussianity, a useful resource in several applications such as quantum communication and computation. One of its most fundamental properties is that it is monotonically decreasing under Gaussian evolutions. In this paper, we develop the conditions under which a non-Gaussian quantum channel preserves this monotonically decreasing property. We propose a necessary condition for distinguishing Gaussian from non-Gaussian channels and use it to define a class of quantum channels that decrease the system's non-Gaussianity. We also discuss how this property, combined with a restriction on the states at the channel's input, can be applied to the security analysis of continuous-variable quantum key distribution protocols. Full article
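
For reference, the standard relative-entropy measure of non-Gaussianity behind the abstract's statement can be written as below; the notation (in particular the reference state symbol) is ours, not taken from the paper.

```latex
% \tau_\rho denotes the Gaussian state with the same first and second moments as \rho.
\delta[\rho] \;=\; S(\rho \,\|\, \tau_\rho)
\;=\; \operatorname{Tr}\!\left[\rho\,(\ln\rho - \ln\tau_\rho)\right]
\;=\; S(\tau_\rho) - S(\rho),
\qquad
\delta\!\left[\Phi_G(\rho)\right] \;\le\; \delta[\rho]
\quad\text{for any Gaussian channel } \Phi_G .
```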

20 pages, 3787 KiB  
Article
Enhancing Robustness of Variational Data Assimilation in Chaotic Systems: An α-4DVar Framework with Rényi Entropy and α-Generalized Gaussian Distributions
by Yuchen Luo, Xiaoqun Cao, Kecheng Peng, Mengge Zhou and Yanan Guo
Entropy 2025, 27(7), 763; https://doi.org/10.3390/e27070763 - 18 Jul 2025
Abstract
Traditional 4-dimensional variational data assimilation methods have limitations due to the Gaussian distribution assumption of observation errors, and the gradient of the objective functional is vulnerable to observation noise and outliers. To address these issues, this paper proposes a non-Gaussian nonlinear data assimilation method called α-4DVar, based on Rényi entropy and the α-generalized Gaussian distribution. By incorporating the heavy-tailed property associated with Rényi entropy, an objective function and its gradient suited to non-Gaussian errors are derived, and numerical experiments are conducted using the Lorenz-63 model. Experiments with Gaussian and non-Gaussian errors as well as different initial guesses compare the assimilation performance of traditional 4DVar and α-4DVar. The results show that α-4DVar performs as well as the traditional method in the absence of observational errors. Its analysis field is closer to the truth, with the RMSE rapidly dropping to a low level and remaining stable, particularly under non-Gaussian errors. Under different initial guesses, the RMSE of both the background and analysis fields decreases quickly and stabilizes. In conclusion, the α-4DVar method demonstrates significant advantages in handling non-Gaussian observational errors, robustness against noise, and adaptability to various observational conditions, thus offering a more reliable and effective solution for data assimilation. Full article
(This article belongs to the Section Complexity)
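
To make the contrast concrete, the sketch below juxtaposes the classical Gaussian 4DVar observation term with a generalized-Gaussian (heavier-tailed) misfit; this is an illustrative form only, not the paper's exact α-4DVar functional.

```latex
% Classical quadratic observation term (Gaussian error assumption):
J_{\mathrm{Gauss}}(x_0) \;=\; \tfrac12\sum_{k}\,[y_k - H(x_k)]^{\mathsf T} R^{-1}\,[y_k - H(x_k)] ,
% versus a generalized-Gaussian misfit with shape parameter \alpha < 2,
% which down-weights large residuals (outliers):
J_{\alpha}(x_0) \;\propto\; \sum_{k}\,\bigl\| R^{-1/2}\,[y_k - H(x_k)] \bigr\|^{\alpha},
\qquad 0 < \alpha < 2 .
```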

14 pages, 3176 KiB  
Article
Impact of Data Distribution and Bootstrap Setting on Anomaly Detection Using Isolation Forest in Process Quality Control
by Hyunyul Choi and Kihyo Jung
Entropy 2025, 27(7), 761; https://doi.org/10.3390/e27070761 - 18 Jul 2025
Abstract
This study investigates the impact of data distribution and bootstrap resampling on the anomaly detection performance of the Isolation Forest (iForest) algorithm in statistical process control. Although iForest has received attention for its multivariate and ensemble-based nature, its performance under non-normal data distributions and varying bootstrap settings remains underexplored. To address this gap, a comprehensive simulation was performed across 18 scenarios involving log-normal, gamma, and t-distributions with different mean shift levels and bootstrap configurations. The results show that iForest substantially outperforms the conventional Hotelling's T² control chart, especially in non-Gaussian settings and under small-to-medium process shifts. Enabling bootstrap resampling led to marginal improvements across classification metrics, including accuracy, precision, recall, F1-score, and average run length (ARL1). However, a key limitation of iForest was its reduced sensitivity to subtle process changes, such as a 1σ mean shift, highlighting an area for future enhancement. Full article
(This article belongs to the Section Multidisciplinary Applications)
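
A minimal sketch of the kind of experiment the abstract describes, using scikit-learn's IsolationForest on a skewed in-control process with bootstrap resampling enabled; the data and parameters are illustrative, not those of the study.

```python
# Fit IsolationForest on log-normal in-control data and score a mean-shifted set.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
in_control = rng.lognormal(mean=0.0, sigma=0.5, size=(500, 3))
shifted = in_control + 1.0        # crude stand-in for a mean-shifted process

clf = IsolationForest(n_estimators=200, bootstrap=True, random_state=0)
clf.fit(in_control)

print((clf.predict(shifted) == -1).mean())   # fraction flagged as anomalous
```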

25 pages, 434 KiB  
Article
The Impact of Digitalization on Carbon Emission Efficiency: An Intrinsic Gaussian Process Regression Approach
by Yongtong Hu, Jiaqi Xu and Tao Liu
Sustainability 2025, 17(14), 6551; https://doi.org/10.3390/su17146551 - 17 Jul 2025
Abstract
This study introduces an intrinsic Gaussian Process Regression (iGPR) model for the first time, which incorporates non-Euclidean spatial covariates via a Gaussian process prior in order to analyze the relationship between digitalization and carbon emission efficiency. The iGPR model’s hierarchical design embeds a Gaussian process as a flexible spatial random effect with a heat-kernel-based covariance function to capture the manifold geometry of spatial features. To enable tractable inference, we employ a penalized maximum-likelihood estimation (PMLE) approach to jointly estimate regression coefficients and covariance hyperparameters. Using a panel dataset linking a national digitalization (modernization) index to carbon emission efficiency, the empirical analysis demonstrates that digitalization has a significantly positive impact on carbon emission efficiency while accounting for spatial heterogeneity. The iGPR model also exhibits superior predictive accuracy compared to state-of-the-art machine learning methods (including XGBoost, random forest, support vector regression, ElasticNet, and a standard Gaussian process regression), achieving the lowest mean squared error (MSE = 0.0047) and an average prediction error near zero. Robustness checks include instrumental-variable GMM estimation to address potential endogeneity across the efficiency distribution and confirm the stability of the estimated positive effect of digitalization. Full article
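
As a rough stand-in for the model structure described above (a standard RBF kernel substitutes for the paper's heat-kernel covariance on a manifold, and all data are synthetic), a Gaussian-process spatial random effect plus a linear digitalization term could be prototyped as follows:

```python
# Toy surrogate: linear effect of a digitalization index plus a GP over
# spatial coordinates; not the authors' iGPR/PMLE implementation.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel, DotProduct

rng = np.random.default_rng(1)
coords = rng.uniform(0, 1, size=(120, 2))       # spatial covariates
digital = rng.uniform(0, 1, size=(120, 1))      # digitalization index
X = np.hstack([digital, coords])
y = 0.8 * digital[:, 0] + np.sin(4 * coords[:, 0]) + rng.normal(0, 0.1, 120)

kernel = DotProduct() + RBF(length_scale=0.3) + WhiteKernel(0.01)
gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, y)
print(gpr.score(X, y))    # in-sample R^2 of the fitted surrogate
```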

23 pages, 1755 KiB  
Article
An Efficient Continuous-Variable Quantum Key Distribution with Parameter Optimization Using Elitist Elk Herd Random Immigrants Optimizer and Adaptive Depthwise Separable Convolutional Neural Network
by Vidhya Prakash Rajendran, Deepalakshmi Perumalsamy, Chinnasamy Ponnusamy and Ezhil Kalaimannan
Future Internet 2025, 17(7), 307; https://doi.org/10.3390/fi17070307 - 17 Jul 2025
Abstract
Quantum memory is essential for the prolonged storage and retrieval of quantum information. Nevertheless, no current studies have focused on the creation of effective quantum memory for continuous variables while accounting for the decoherence rate. This work presents an effective continuous-variable quantum key distribution method with parameter optimization utilizing the Elitist Elk Herd Random Immigrants Optimizer (2E-HRIO) technique. At the outset of transmission, the quantum device undergoes initialization and authentication via Compressed Hash-based Message Authentication Code with Encoded Post-Quantum Hash (CHMAC-EPQH). The parameters of the authenticated device are subsequently optimized via 2E-HRIO, which mitigates the effects of decoherence by adaptively tuning system parameters. Subsequently, quantum bits are produced from the verified device, and pilot insertion is executed within the quantum bits. The pilot-inserted signal is thereafter subjected to pulse shaping using a Gaussian filter, and the pulse-shaped signal undergoes modulation. After modulation, link failure is predicted over an authenticated channel using Radial Density-Based Spatial Clustering of Applications with Noise, and transmission then occurs via a non-failure connection. The receiver performs channel equalization on the received signal with Recursive Regularized Least Mean Squares. Subsequently, a dataset for side-channel attack authentication is gathered and preprocessed, followed by feature extraction and classification using Adaptive Depthwise Separable Convolutional Neural Networks (ADS-CNNs), which enhances security against side-channel attacks. The quantum state is evaluated based on the received signal, and raw data are collected. Thereafter, a connection is established between the transmitter and receiver, and both perform the scanning process. The error rate is then calculated and corrected based on the sifting results. Ultimately, privacy amplification and key authentication are performed on the corrected key via B-CHMAC-EPQH. The proposed system demonstrated improved resistance to decoherence and side-channel attacks, while achieving a reconciliation efficiency above 90% and an increased key generation rate. Full article
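
Isolating just the Gaussian pulse-shaping step mentioned in the pipeline above, a hedged illustration (window length and width are arbitrary choices, not the paper's settings) is to convolve a symbol impulse train with a Gaussian window:

```python
# Gaussian pulse shaping of a baseband symbol stream; parameters illustrative.
import numpy as np
from scipy.signal.windows import gaussian

sps = 8                                              # samples per symbol
symbols = np.random.default_rng(0).choice([-1.0, 1.0], size=64)
impulses = np.zeros(symbols.size * sps)
impulses[::sps] = symbols

g = gaussian(4 * sps, std=sps / 2)                   # Gaussian shaping filter
g /= g.sum()
shaped = np.convolve(impulses, g, mode="same")
print(shaped[:10])
```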

23 pages, 6440 KiB  
Article
A Gravity Data Denoising Method Based on Multi-Scale Attention Mechanism and Physical Constraints Using U-Net
by Bing Liu, Houpu Li, Shaofeng Bian, Chaoliang Zhang, Bing Ji and Yujie Zhang
Appl. Sci. 2025, 15(14), 7956; https://doi.org/10.3390/app15147956 - 17 Jul 2025
Abstract
Gravity and gravity gradient data serve as fundamental inputs for geophysical resource exploration and geological structure analysis. However, traditional denoising methods—including wavelet transforms, moving averages, and low-pass filtering—exhibit signal loss and limited adaptability under complex, non-stationary noise conditions. To address these challenges, this study proposes an improved U-Net deep learning framework that integrates multi-scale feature extraction and attention mechanisms. Furthermore, a Laplace consistency constraint is introduced into the loss function to enhance denoising performance and physical interpretability. Notably, the datasets used in this study are generated by the authors, involving simulations of subsurface prism distributions with realistic density perturbations (±20% of typical rock densities) and the addition of controlled Gaussian noise (5%, 10%, 15%, and 30%) to simulate field-like conditions, ensuring the diversity and physical relevance of training samples. Experimental validation on these synthetic datasets and real field datasets demonstrates the superiority of the proposed method over conventional techniques. For noise levels of 5%, 10%, 15%, and 30% in test sets, the improved U-Net achieves Peak Signal-to-Noise Ratios (PSNR) of 59.13 dB, 52.03 dB, 48.62 dB, and 48.81 dB, respectively, outperforming wavelet transforms, moving averages, and low-pass filtering by 10–30 dB. In multi-component gravity gradient denoising, our method excels in detail preservation and noise suppression, improving Structural Similarity Index (SSIM) by 15–25%. Field data tests further confirm enhanced identification of key geological anomalies and overall data quality improvement. In summary, the improved U-Net not only delivers quantitative advancements in gravity data denoising but also provides a novel approach for high-precision geophysical data preprocessing. Full article
(This article belongs to the Special Issue Applications of Machine Learning in Earth Sciences—2nd Edition)
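
One plausible reading of the "Laplace consistency constraint" in the loss function above (the paper's exact formulation is not given here) is a data-fit term plus agreement between the Laplacians of the denoised and reference fields:

```python
# Hedged sketch: MSE plus a Laplacian-consistency penalty on 2-D gravity grids.
import numpy as np
from scipy.ndimage import laplace

def denoise_loss(pred, target, lam=0.1):
    mse = np.mean((pred - target) ** 2)
    lap = np.mean((laplace(pred) - laplace(target)) ** 2)
    return mse + lam * lap

rng = np.random.default_rng(0)
clean = rng.normal(size=(64, 64))
noisy = clean + rng.normal(scale=0.1, size=clean.shape)
print(denoise_loss(noisy, clean))
```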

18 pages, 9981 KiB  
Article
Toward Adaptive Unsupervised and Blind Image Forgery Localization with ViT-VAE and a Gaussian Mixture Model
by Haichang Yin, KinTak U, Jing Wang and Wuyue Ma
Mathematics 2025, 13(14), 2285; https://doi.org/10.3390/math13142285 - 16 Jul 2025
Abstract
Most image forgery localization methods rely on supervised learning, requiring large labeled datasets for training. Recently, several unsupervised approaches based on the variational autoencoder (VAE) framework have been proposed for forged pixel detection. In these approaches, the latent space is built by a simple Gaussian distribution or a Gaussian Mixture Model. Despite their success, there are still some limitations: (1) A simple Gaussian distribution assumption in the latent space constrains performance due to the diverse distribution of forged images. (2) Gaussian Mixture Models (GMMs) introduce non-convex log-sum-exp functions in the Kullback–Leibler (KL) divergence term, leading to gradient instability and convergence issues during training. (3) Estimating GMM mixing coefficients typically involves either the expectation-maximization (EM) algorithm before VAE training or a multilayer perceptron (MLP), both of which increase computational complexity. To address these limitations, we propose the Deep ViT-VAE-GMM (DVVG) framework. First, we employ Jensen’s inequality to simplify the KL divergence computation, reducing gradient instability and improving training stability. Second, we introduce convolutional neural networks (CNNs) to adaptively estimate the mixing coefficients, enabling an end-to-end architecture while significantly lowering computational costs. Experimental results on benchmark datasets demonstrate that DVVG not only enhances VAE performance but also improves efficiency in modeling complex latent distributions. Our method effectively balances performance and computational feasibility, making it a practical solution for real-world image forgery localization. Full article
(This article belongs to the Special Issue Applied Mathematics in Data Science and High-Performance Computing)
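
The Jensen-inequality simplification the abstract refers to can be sketched as follows (notation is ours): for a GMM prior the log-sum-exp term is bounded from below by a mixture of Gaussian log-densities, making the KL term tractable.

```latex
% For p(z) = \sum_k \pi_k\, \mathcal N(z;\mu_k,\Sigma_k), Jensen's inequality gives
\log \sum_k \pi_k\, \mathcal N(z;\mu_k,\Sigma_k)
\;\ge\; \sum_k \pi_k \log \mathcal N(z;\mu_k,\Sigma_k),
% hence the intractable cross-entropy term in the KL divergence is upper-bounded:
\mathbb E_{q(z\mid x)}\!\left[-\log p(z)\right]
\;\le\; -\sum_k \pi_k\, \mathbb E_{q(z\mid x)}\!\left[\log \mathcal N(z;\mu_k,\Sigma_k)\right],
% and each Gaussian expectation on the right is available in closed form.
```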

23 pages, 3404 KiB  
Article
MST-AI: Skin Color Estimation in Skin Cancer Datasets
by Vahid Khalkhali, Hayan Lee, Joseph Nguyen, Sergio Zamora-Erazo, Camille Ragin, Abhishek Aphale, Alfonso Bellacosa, Ellis P. Monk and Saroj K. Biswas
J. Imaging 2025, 11(7), 235; https://doi.org/10.3390/jimaging11070235 - 13 Jul 2025
Abstract
The absence of skin color information in skin cancer datasets poses a significant challenge for accurate diagnosis using artificial intelligence models, particularly for non-white populations. In this paper, based on the Monk Skin Tone (MST) scale, which is less biased than the Fitzpatrick scale, we propose MST-AI, a novel method for detecting skin color in images of large datasets, such as the International Skin Imaging Collaboration (ISIC) archive. The approach includes automatic frame removal, lesion segmentation and removal using convolutional neural networks, and modeling of normal skin tones with a Variational Bayesian Gaussian Mixture Model (VB-GMM). The distribution of skin color predictions was compared with MST scale probability distribution functions (PDFs) using the Kullback–Leibler Divergence (KLD) metric. Validation against manual annotations and comparison with K-means clustering of image and skin mean RGBs demonstrated the superior performance of MST-AI, with Kendall's Tau, Spearman's Rho, and Normalized Discounted Cumulative Gain (NDCG) of 0.68, 0.69, and 1.00, respectively. This research lays the groundwork for developing unbiased AI models for early skin cancer diagnosis by addressing skin color imbalances in large datasets. Full article
(This article belongs to the Section AI in Imaging)
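
The VB-GMM step in isolation (not the full MST-AI pipeline) can be sketched with scikit-learn's variational Bayesian mixture; the synthetic RGB values and component count below are illustrative assumptions.

```python
# Fit a variational Bayesian GMM to presumed-normal skin pixels and read off
# the dominant skin-tone component.
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(0)
skin_rgb = rng.normal(loc=[190, 150, 130], scale=12, size=(2000, 3))

vbgmm = BayesianGaussianMixture(n_components=5, covariance_type="full",
                                random_state=0).fit(skin_rgb)
dominant = np.argmax(vbgmm.weights_)
print(vbgmm.means_[dominant])   # estimated mean skin tone in RGB
```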

19 pages, 3047 KiB  
Article
Identifying the Combined Impacts of Sensor Quantity and Location Distribution on Source Inversion Optimization
by Shushuai Mao, Jianlei Lang, Feng Hu, Xiaoqi Wang, Kai Wang, Guiqin Zhang, Feiyong Chen, Tian Chen and Shuiyuan Cheng
Atmosphere 2025, 16(7), 850; https://doi.org/10.3390/atmos16070850 - 12 Jul 2025
Abstract
Source inversion optimization using sensor observations is a key method for rapidly and accurately identifying unknown source parameters (source strength and location) in abrupt hazardous gas leaks. Sensor number and location distribution both play important roles in source inversion; however, their combined impacts on source inversion optimization remain poorly understood. In this study, the optimization inversion method is established based on the Gaussian plume model and a genetic algorithm. A research strategy combining random sampling and coefficient of variation methods was proposed to simultaneously quantify their combined impacts in the case of a single emission source. The sensor layout impact difference was analyzed under varying atmospheric conditions (unstable, neutral, and stable) and source location information (known or unknown) using the Prairie Grass experiments. The results indicated that adding sensors improved the source strength estimation accuracy more when the source location was known than when it was unknown. The impacts of sensor location distribution were strongly negatively correlated (r ≤ −0.985) with the number of sensors across scenarios. For source strength estimation, the impact of differences in sensor location distribution decreased non-linearly with more sensors for known locations but linearly for unknown ones. The impacts of sensor number and location distribution on source strength estimation were amplified under stable atmospheric conditions compared to unstable and neutral conditions. The minimum number of randomly scattered sensors required for stable source strength inversion accuracy was 11, 12, and 17 for known locations under unstable, neutral, and stable atmospheric conditions, respectively, and 24, 9, and 21 for unknown locations. The multi-layer arc distribution outperformed rectangular, single-layer arc, and downwind-axis distributions in source strength estimation. This study enhances the understanding of factors influencing source inversion optimization and provides valuable insights for optimizing sensor layouts. Full article
(This article belongs to the Section Air Pollution Control)
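
The forward model in this kind of inversion is the textbook Gaussian plume; a hedged sketch is below, with placeholder linear dispersion coefficients rather than the Pasquill–Gifford curves used in the study.

```python
# Ground-reflected Gaussian plume concentration at downwind point (x, y, z).
import numpy as np

def plume_concentration(x, y, z, Q=1.0, u=3.0, H=2.0, a=0.08, b=0.06):
    """Concentration for source strength Q [g/s], wind speed u [m/s], stack height H [m]."""
    sig_y, sig_z = a * x, b * x               # crude linear dispersion growth
    return (Q / (2 * np.pi * u * sig_y * sig_z)
            * np.exp(-y**2 / (2 * sig_y**2))
            * (np.exp(-(z - H)**2 / (2 * sig_z**2))
               + np.exp(-(z + H)**2 / (2 * sig_z**2))))

print(plume_concentration(x=100.0, y=5.0, z=1.5))
```

In an inversion, an optimizer (here, per the abstract, a genetic algorithm) would search over Q and the source coordinates to minimize the misfit between these predictions and the sensor readings.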

26 pages, 543 KiB  
Article
Bounds on the Excess Minimum Risk via Generalized Information Divergence Measures
by Ananya Omanwar, Fady Alajaji and Tamás Linder
Entropy 2025, 27(7), 727; https://doi.org/10.3390/e27070727 - 5 Jul 2025
Abstract
Given finite-dimensional random vectors Y, X, and Z that form a Markov chain in that order (Y → X → Z), we derive upper bounds on the excess minimum risk using generalized information divergence measures. Here, Y is a target vector to be estimated from an observed feature vector X or its stochastically degraded version Z. The excess minimum risk is defined as the difference between the minimum expected loss in estimating Y from X and from Z. We present a family of bounds that generalize a prior bound based on mutual information, using the Rényi and α-Jensen–Shannon divergences, as well as Sibson’s mutual information. Our bounds are similar to recently developed bounds for the generalization error of learning algorithms. However, unlike those works, our bounds do not require the sub-Gaussian parameter to be constant and therefore apply to a broader class of joint distributions over Y, X, and Z. We also provide numerical examples under both constant and non-constant sub-Gaussianity assumptions, illustrating that our generalized divergence-based bounds can be tighter than the ones based on mutual information for certain regimes of the parameter α. Full article
(This article belongs to the Special Issue Information Theoretic Learning with Its Applications)
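
For orientation, the two quantities the bounds relate are sketched below with standard definitions; the notation (loss ℓ, estimators f and g) is ours, not quoted from the paper.

```latex
% Excess minimum risk of estimating Y from the degraded observation Z vs. X:
\Delta \;=\; \inf_{g}\,\mathbb E\big[\ell\big(Y, g(Z)\big)\big]
\;-\; \inf_{f}\,\mathbb E\big[\ell\big(Y, f(X)\big)\big],
\qquad Y \to X \to Z .
% Renyi divergence of order \alpha, used in the generalized bounds:
D_\alpha(P\|Q) \;=\; \frac{1}{\alpha-1}\,\log \int p^{\alpha}\, q^{\,1-\alpha}\, d\mu ,
\qquad \alpha \in (0,1)\cup(1,\infty).
```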

15 pages, 518 KiB  
Article
Non-Centered Chi Distributions as Models for Fair Assessment in Sports Performance
by Diego Puig Castro, Ana Coronado Ferrer, Juan Carlos Castro Palacio, Pedro Fernández de Córdoba, Nuria Ortigosa and Enrique A. Sánchez Pérez
Symmetry 2025, 17(7), 1039; https://doi.org/10.3390/sym17071039 - 2 Jul 2025
Abstract
Some stochastic phenomena that appear in real-world processes and share similar characteristics can be effectively modeled using functions based on variants of the chi distribution. In this paper, we extend the use of the non-centered chi distribution to the assessment of sports performance, focusing on its ability to characterize the physical fitness of athletes. The generating functions, constructed from individual test data assumed to follow a Gaussian distribution, provide a basis for creating a fitness index. In addition, we propose a methodology to rank athletes based on their performance in specific physical tests. Drawing on parallels with thermodynamic systems, such as the behavior of particles in an ideal gas, we explore the suitability of the (non-centered) chi distribution for modeling sports data. Simulations and real examples are presented that demonstrate the robustness of this approach. Full article
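
A quick numerical check of the modeling idea above, under assumptions of our own (four standardized test scores with illustrative means): the Euclidean norm of Gaussian scores with non-zero means follows a non-central chi distribution, sampled here as the square root of scipy's non-central chi-squared.

```python
# Compare an empirical "fitness index" (norm of Gaussian test scores) with the
# corresponding non-central chi law.
import numpy as np
from scipy.stats import ncx2

rng = np.random.default_rng(0)
k, means = 4, np.array([1.0, 0.5, 0.8, 0.3])        # illustrative test means
scores = rng.normal(loc=means, scale=1.0, size=(100000, k))
index = np.linalg.norm(scores, axis=1)               # per-athlete fitness index

lam = np.sum(means**2)                                # non-centrality parameter
theory = np.sqrt(ncx2.rvs(df=k, nc=lam, size=100000, random_state=1))
print(index.mean(), theory.mean())                    # the two should agree
```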
