Search Results (20)

Search Parameters:
Keywords = Hellinger information

21 pages, 4638 KB  
Article
Symbolic Analysis of the Quality of Texts Translated into a Language Preserving Vowel Harmony
by Kazuya Hayata
Entropy 2025, 27(9), 984; https://doi.org/10.3390/e27090984 - 20 Sep 2025
Viewed by 729
Abstract
To date, the ordinal pattern-based method has been applied to problems in the natural and social sciences. We report, to our knowledge for the first time, an attempt to apply this methodology to a topic in the humanities. Specifically, to investigate its applicability to analyzing the quality of texts translated into a language preserving so-called vowel harmony, we present computed results for divergence metrics between back-translated and original texts. We focus on Japanese as the target language and employ the Hellinger distance and the chi-square statistic as metrics; the former is a typical information-theoretic measure quantified in natural units (nats), while the latter supports non-parametric testing of a null hypothesis at a given significance level. The methods are applied to three cases: a Japanese novel with an available translation, the Preamble to the Constitution of Japan, and seventeen translations of the opening paragraph of a famous American detective story, comprising thirteen human and four machine translations produced with DeepL and Google Translate. The numerical results show unexpectedly high scores for the machine translations, although it may still be too early to speculate on their unconditional potential. Both the attempt and the results are novel and are expected to contribute to interdisciplinary work between physics and linguistics. Full article
(This article belongs to the Special Issue Ordinal Patterns-Based Tools and Their Applications)
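As a rough illustration of the two metrics named in the abstract (not the paper's ordinal-pattern pipeline), the sketch below compares hypothetical pattern-frequency counts from an original and a back-translated text; the counts are invented, and Hellinger-distance normalisation conventions vary in the literature.

```python
import numpy as np
from scipy.stats import chi2

def hellinger(p, q):
    """Hellinger distance between two discrete distributions (one common normalisation)."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    return np.sqrt(0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2))

def chi_square_stat(observed, expected_probs):
    """Pearson chi-square statistic and p-value for observed counts vs. a reference distribution."""
    observed = np.asarray(observed, float)
    expected = np.asarray(expected_probs, float) * observed.sum()
    stat = np.sum((observed - expected) ** 2 / expected)
    dof = len(observed) - 1
    return stat, chi2.sf(stat, dof)

# Hypothetical pattern-frequency counts for an original and a back-translated text
orig_counts = np.array([120, 95, 80, 60, 30, 15])
back_counts = np.array([110, 100, 85, 55, 35, 15])

p = orig_counts / orig_counts.sum()
q = back_counts / back_counts.sum()
print("Hellinger distance:", hellinger(p, q))
print("chi-square statistic, p-value:", chi_square_stat(back_counts, p))
```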

16 pages, 6927 KB  
Article
Estimation of Missing DICOM Windowing Parameters in High-Dynamic-Range Radiographs Using Deep Learning
by Mateja Napravnik, Natali Bakotić, Franko Hržić, Damir Miletić and Ivan Štajduhar
Mathematics 2025, 13(10), 1596; https://doi.org/10.3390/math13101596 - 13 May 2025
Viewed by 1000
Abstract
Digital Imaging and Communications in Medicine (DICOM) is a standard format for storing medical images, which are typically stored at higher bit depths (10–16 bits), enabling detailed representation but exceeding the capabilities of standard displays and human visual perception. To address this, DICOM images are often accompanied by windowing parameters, analogous to tone mapping in high-dynamic-range image processing, which compress the intensity range to enhance diagnostically relevant regions. This study evaluates traditional histogram-based methods and explores the potential of deep learning for predicting window parameters in radiographs where such information is missing. A range of architectures, including MobileNetV3Small, VGG16, ResNet50, and ViT-B/16, were trained on high-bit-depth computed radiography images using various combinations of loss functions, including structural similarity (SSIM), perceptual loss (LPIPS), and an edge-preservation loss. Models were evaluated on multiple criteria, including pixel-entropy preservation, the Hellinger distance between pixel-value distributions, and peak signal-to-noise ratio after 8-bit conversion. The tested approaches were further validated on the publicly available GRAZPEDWRI-DX dataset. Although histogram-based methods showed satisfactory performance, especially scaling based on identifying peaks in the pixel-value histogram, deep learning-based methods were better at selectively preserving clinically relevant image areas while removing background noise. Full article
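A minimal sketch of the windowing operation the study predicts parameters for, assuming a simplified linear center/width mapping to 8 bits (the full DICOM VOI LUT definition adds small offsets); the image and window values below are arbitrary.

```python
import numpy as np

def apply_window(pixels, center, width, bits_out=8):
    """Map high-bit-depth pixel values to a display range using a simplified
    linear (center/width) windowing function."""
    y_max = 2 ** bits_out - 1
    lo = center - width / 2.0
    hi = center + width / 2.0
    out = (np.asarray(pixels, float) - lo) / (hi - lo)   # normalise to [0, 1]
    out = np.clip(out, 0.0, 1.0)
    return np.round(out * y_max).astype(np.uint8)

# Hypothetical 12-bit radiograph values and predicted window parameters
img16 = np.random.randint(0, 4096, size=(4, 4))
img8 = apply_window(img16, center=2048, width=1500)
print(img8)
```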

34 pages, 1884 KB  
Article
SIMECK-T: An Ultra-Lightweight Encryption Scheme for Resource-Constrained Devices
by Alin-Adrian Anton, Petra Csereoka, Eugenia-Ana Capota and Răzvan-Dorel Cioargă
Appl. Sci. 2025, 15(3), 1279; https://doi.org/10.3390/app15031279 - 26 Jan 2025
Cited by 3 | Viewed by 2129
Abstract
The Internet of Things produces vast amounts of data that require specialized algorithms to secure them. Lightweight cryptography requires ciphers designed to work on resource-constrained devices such as sensors and smart things. A new encryption scheme is introduced based on a blend of the best-performing algorithms, SIMECK and TEA. A selection of software-oriented Addition–Rotation–XOR (ARX) block ciphers is augmented with a dynamic substitution security layer. The performance is compared against other lightweight approaches. The US National Institute of Standards and Technology (NIST) SP800-22 Statistical Test Suite for Random and Pseudorandom Number Generators for Cryptographic Applications and the German AIS.31 of the Federal Office for Information Security (BSI) are used to validate the output of the proposed encryption scheme. The law of the iterated logarithm (LIL) for randomness is verified in all three forms. The total variation (TV), the Hellinger distance (HD), and the root-mean-square deviation (RMSD) show values smaller than the required limit for 10,000 sequences of ciphertext. Performance is evaluated on a Raspberry Pi Pico (RP2040). Several security metrics, such as χ² and encryption quality (EQ), are compared against other ciphers. The results show that SIMECK-T is a powerful and fast, software-oriented, lightweight cryptography solution. Full article
(This article belongs to the Section Computing and Artificial Intelligence)
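A hedged sketch of the kind of distributional check these statistics perform: total variation, Hellinger distance, and RMSD between a ciphertext's byte histogram and the uniform law. Random bytes stand in for SIMECK-T output, and the pass/fail thresholds from the cited test suites are not reproduced here.

```python
import numpy as np

def byte_histogram(data: bytes) -> np.ndarray:
    counts = np.bincount(np.frombuffer(data, dtype=np.uint8), minlength=256)
    return counts / counts.sum()

def total_variation(p, q):
    return 0.5 * np.abs(p - q).sum()

def hellinger(p, q):
    return np.sqrt(0.5 * ((np.sqrt(p) - np.sqrt(q)) ** 2).sum())

def rmsd(p, q):
    return np.sqrt(((p - q) ** 2).mean())

# Toy check: compare a ciphertext's byte distribution with the uniform distribution
ciphertext = np.random.bytes(1_000_000)     # stand-in for actual cipher output
p = byte_histogram(ciphertext)
u = np.full(256, 1 / 256)
print(total_variation(p, u), hellinger(p, u), rmsd(p, u))
```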

34 pages, 574 KB  
Article
Optimum Achievable Rates in Two Random Number Generation Problems with f-Divergences Using Smooth Rényi Entropy
by Ryo Nomura and Hideki Yagi
Entropy 2024, 26(9), 766; https://doi.org/10.3390/e26090766 - 6 Sep 2024
Cited by 2 | Viewed by 1070
Abstract
Two typical fixed-length random number generation problems in information theory are considered for general sources. One is the source resolvability problem and the other is the intrinsic randomness problem. In each of these problems, the optimum achievable rate with respect to the given approximation measure is one of our main concerns and has been characterized using two different information quantities: the information spectrum and the smooth Rényi entropy. Recently, optimum achievable rates with respect to f-divergences have been characterized using the information spectrum quantity. The f-divergence is a general non-negative measure between two probability distributions defined via a convex function f. The class of f-divergences includes several important measures such as the variational distance, the KL divergence, the Hellinger distance, and so on. Hence, it is meaningful to consider the random number generation problems with respect to f-divergences. However, optimum achievable rates with respect to f-divergences using the smooth Rényi entropy have not yet been clarified for either problem. In this paper, we analyze the optimum achievable rates using the smooth Rényi entropy and extend the class of f-divergences. To do so, we first derive general formulas for the first-order optimum achievable rates with respect to f-divergences in both problems under the same conditions as imposed by previous studies. Next, we relax the conditions on the f-divergence and generalize the obtained general formulas. Then, we particularize our general formulas to several specific functions f. As a result, we reveal that it is easy to derive optimum achievable rates for several important measures from our general formulas. Furthermore, a kind of duality between resolvability and intrinsic randomness is revealed in terms of the smooth Rényi entropy. Second-order optimum achievable rates and optimistic achievable rates are also investigated. Full article
(This article belongs to the Section Information Theory, Probability and Statistics)
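For readers less familiar with the f-divergence family used here, a minimal sketch of the general definition for finite alphabets with a strictly positive reference distribution, together with generators for three of the named members; normalisation constants for these measures vary across the literature.

```python
import numpy as np

def f_divergence(p, q, f):
    """D_f(P || Q) = sum_x q(x) * f(p(x) / q(x)), assuming q(x) > 0 everywhere."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    return float(np.sum(q * f(p / q)))

# Convex generators for some members of the family (up to convention-dependent constants)
f_variational  = lambda t: 0.5 * np.abs(t - 1)     # variational (total variation) distance
f_kl           = lambda t: t * np.log(t)           # KL divergence (p must be positive here)
f_sq_hellinger = lambda t: (np.sqrt(t) - 1) ** 2   # squared Hellinger distance (unnormalised)

p = np.array([0.5, 0.3, 0.2])
q = np.array([0.4, 0.4, 0.2])
for name, f in [("TV", f_variational), ("KL", f_kl), ("H^2", f_sq_hellinger)]:
    print(name, f_divergence(p, q, f))
```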

22 pages, 344 KB  
Article
New Improvements of the Jensen–Mercer Inequality for Strongly Convex Functions with Applications
by Muhammad Adil Khan, Slavica Ivelić Bradanović and Haitham Abbas Mahmoud
Axioms 2024, 13(8), 553; https://doi.org/10.3390/axioms13080553 - 14 Aug 2024
Cited by 2 | Viewed by 1538
Abstract
In this paper, we use strongly convex functions, a strengthened form of convex functions, to derive improvements to the Jensen–Mercer inequality. We achieve these improvements through newly discovered characterizations of strongly convex functions, along with some previously known results about them. We also focus on important applications of the derived results in information theory, deducing estimates for the χ-divergence, Kullback–Leibler divergence, Hellinger distance, Bhattacharyya distance, Jeffreys distance, and Jensen–Shannon divergence. Additionally, we prove some applications to Mercer-type power means at the end. Full article
(This article belongs to the Special Issue Analysis of Mathematical Inequalities)
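For reference, the divergences estimated in the applications have simple expressions for discrete distributions; the small sketch below uses made-up distributions and standard textbook formulas (the paper's contribution is inequalities between such quantities, not this code).

```python
import numpy as np

def kl(p, q):            return float(np.sum(p * np.log(p / q)))
def chi_square(p, q):    return float(np.sum((p - q) ** 2 / q))
def hellinger(p, q):     return float(np.sqrt(0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2)))
def bhattacharyya(p, q): return float(-np.log(np.sum(np.sqrt(p * q))))
def jeffreys(p, q):      return kl(p, q) + kl(q, p)
def jensen_shannon(p, q):
    m = 0.5 * (p + q)
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

p = np.array([0.2, 0.5, 0.3])
q = np.array([0.3, 0.4, 0.3])
for f in (kl, chi_square, hellinger, bhattacharyya, jeffreys, jensen_shannon):
    print(f.__name__, f(p, q))
```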
24 pages, 1216 KB  
Article
Empirical Squared Hellinger Distance Estimator and Generalizations to a Family of α-Divergence Estimators
by Rui Ding and Andrew Mullhaupt
Entropy 2023, 25(4), 612; https://doi.org/10.3390/e25040612 - 4 Apr 2023
Cited by 6 | Viewed by 5108
Abstract
We present an empirical estimator for the squared Hellinger distance between two continuous distributions, which almost surely converges. We show that the divergence estimation problem can be solved directly using the empirical CDF and does not need the intermediate step of estimating the densities. We illustrate the proposed estimator on several one-dimensional probability distributions. Finally, we extend the estimator to a family of estimators for the family of α-divergences, which almost surely converge as well, and discuss the uniqueness of this result. We demonstrate applications of the proposed Hellinger affinity estimators to approximately bounding the Neyman–Pearson regions. Full article
(This article belongs to the Section Information Theory, Probability and Statistics)
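For univariate Gaussians the squared Hellinger distance has a closed form, which is a convenient reference value when checking any empirical estimator; the grid-integration check below is only a naive baseline, not the CDF-based estimator proposed in the paper.

```python
import numpy as np

def squared_hellinger_gaussian(mu1, s1, mu2, s2):
    """Closed-form squared Hellinger distance between N(mu1, s1^2) and N(mu2, s2^2)."""
    coef = np.sqrt(2 * s1 * s2 / (s1 ** 2 + s2 ** 2))
    expo = np.exp(-((mu1 - mu2) ** 2) / (4 * (s1 ** 2 + s2 ** 2)))
    return 1.0 - coef * expo

# Naive numerical check: 1 - integral of sqrt(f * g) on a fine grid
mu1, s1, mu2, s2 = 0.0, 1.0, 1.0, 2.0
x = np.linspace(-15, 15, 200001)
f = np.exp(-(x - mu1) ** 2 / (2 * s1 ** 2)) / (s1 * np.sqrt(2 * np.pi))
g = np.exp(-(x - mu2) ** 2 / (2 * s2 ** 2)) / (s2 * np.sqrt(2 * np.pi))
numeric = 1.0 - np.sum(np.sqrt(f * g)) * (x[1] - x[0])

print(squared_hellinger_gaussian(mu1, s1, mu2, s2), numeric)
```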

21 pages, 370 KB  
Article
Survey of Distances between the Most Popular Distributions
by Mark Kelbert
Analytics 2023, 2(1), 225-245; https://doi.org/10.3390/analytics2010012 - 1 Mar 2023
Cited by 7 | Viewed by 5995
Abstract
We present a number of upper and lower bounds for the total variation distances between the most popular probability distributions. In particular, some estimates of the total variation distances in the cases of multivariate Gaussian distributions, Poisson distributions, binomial distributions, between a binomial and a Poisson distribution, and also in the case of negative binomial distributions are given. Next, estimates of the Lévy–Prohorov distance in terms of Wasserstein metrics are discussed, and the Fréchet, Wasserstein, and Hellinger distances for multivariate Gaussian distributions are evaluated. Some novel context-sensitive distances are introduced, and a number of bounds mimicking the classical results from information theory are proved. Full article
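Two of the evaluated quantities, the Hellinger and Fréchet (2-Wasserstein) distances between multivariate Gaussians, have well-known closed forms that are easy to check numerically; the means and covariances below are arbitrary examples, not values from the survey.

```python
import numpy as np
from scipy.linalg import sqrtm

def hellinger_sq_gauss(m1, S1, m2, S2):
    """Closed-form squared Hellinger distance between two multivariate Gaussians."""
    Sbar = 0.5 * (S1 + S2)
    coef = (np.linalg.det(S1) ** 0.25 * np.linalg.det(S2) ** 0.25
            / np.sqrt(np.linalg.det(Sbar)))
    d = m1 - m2
    return 1.0 - coef * np.exp(-0.125 * d @ np.linalg.solve(Sbar, d))

def frechet_gauss(m1, S1, m2, S2):
    """Frechet (2-Wasserstein) distance between two multivariate Gaussians."""
    cross = sqrtm(sqrtm(S1) @ S2 @ sqrtm(S1)).real
    return np.sqrt(np.sum((m1 - m2) ** 2) + np.trace(S1 + S2 - 2 * cross))

m1, S1 = np.zeros(2), np.eye(2)
m2, S2 = np.array([1.0, 0.0]), np.array([[2.0, 0.3], [0.3, 1.0]])
print(hellinger_sq_gauss(m1, S1, m2, S2), frechet_gauss(m1, S1, m2, S2))
```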

16 pages, 1682 KB  
Article
An Intuitionistic Fuzzy Version of Hellinger Distance Measure and Its Application to Decision-Making Process
by Xiang Li, Zhe Liu, Xue Han, Nan Liu and Weihua Yuan
Symmetry 2023, 15(2), 500; https://doi.org/10.3390/sym15020500 - 14 Feb 2023
Cited by 28 | Viewed by 2270
Abstract
Intuitionistic fuzzy sets (IFSs), as a representative variant of fuzzy sets, have substantial advantages in managing and modeling uncertain information, so they have been widely studied and applied. Nevertheless, how to properly measure the similarities or differences between IFSs is still an open question. The distance metric offers an elegant and desirable solution to such a question. Hence, in this paper, we propose a new distance measure, named DIFS, inspired by the Hellinger distance in probability distribution space. First, we provide the formal definition of the new distance measure of IFSs and analyze the outstanding properties and axioms satisfied by DIFS, which means it can measure the difference between IFSs well. Besides, on the basis of DIFS, we further present a normalized distance measure of IFSs, denoted DIFS˜. Moreover, numerical examples verify that DIFS˜ can obtain more reasonable and superior results. Finally, we further develop a new decision-making method on top of DIFS˜ and evaluate its performance in two applications. Full article
(This article belongs to the Special Issue Recent Developments on Fuzzy Sets Extensions)
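The paper's exact DIFS formula is not reproduced here; the following is only an illustrative Hellinger-style distance on IFSs, treating each element's (membership, non-membership, hesitancy) triple as a probability vector, with invented example sets.

```python
import numpy as np

def ifs_hellinger(A, B):
    """Average Hellinger-style distance between two IFSs given as lists of
    (membership, non-membership) pairs; hesitancy is 1 - mu - nu.
    Illustrative only; the paper's DIFS may be defined and normalised differently."""
    A, B = np.asarray(A, float), np.asarray(B, float)
    def triples(X):
        mu, nu = X[:, 0], X[:, 1]
        return np.stack([mu, nu, 1.0 - mu - nu], axis=1)
    Pa, Pb = triples(A), triples(B)
    d = np.sqrt(0.5 * np.sum((np.sqrt(Pa) - np.sqrt(Pb)) ** 2, axis=1))
    return d.mean()

# Two IFSs over a three-element universe
A = [(0.6, 0.2), (0.5, 0.4), (0.1, 0.7)]
B = [(0.5, 0.3), (0.6, 0.3), (0.2, 0.6)]
print(ifs_hellinger(A, B))
```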

10 pages, 2287 KB  
Article
Hellinger Information Matrix and Hellinger Priors
by Arkady Shemyakin
Entropy 2023, 25(2), 344; https://doi.org/10.3390/e25020344 - 13 Feb 2023
Cited by 2 | Viewed by 2007
Abstract
Hellinger information as a local characteristic of parametric distribution families was first introduced in 2011. It is related to the much older concept of the Hellinger distance between two points in a parametric set. Under certain regularity conditions, the local behavior of the Hellinger distance is closely connected to Fisher information and the geometry of Riemann manifolds. Nonregular distributions (non-differentiable distribution densities, undefined Fisher information, or densities with support depending on the parameter), including the uniform distribution, require analogues or extensions of Fisher information. Hellinger information may serve to construct information inequalities of the Cramér–Rao type, extending the lower bounds of the Bayes risk to the nonregular case. A construction of non-informative priors based on Hellinger information was also suggested by the author in 2011. Hellinger priors extend the Jeffreys rule to nonregular cases. For many examples, they are identical or close to the reference priors or probability matching priors. Most of that work was dedicated to the one-dimensional case, but the matrix definition of Hellinger information was also introduced for higher dimensions. Conditions for the existence and the nonnegative definiteness of the Hellinger information matrix were not discussed. Hellinger information for the vector parameter was applied by Yin et al. to problems of optimal experimental design. A special class of parametric problems was considered, requiring the directional definition of Hellinger information but not a full construction of the Hellinger information matrix. In the present paper, a general definition of the Hellinger information matrix, together with its existence and nonnegative definiteness, is considered for nonregular settings. Full article
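A standard illustration (not taken from the paper) of why nonregular families need such an extension: for the Uniform(0, θ) family, the squared Hellinger distance between nearby parameter values shrinks linearly in the perturbation rather than quadratically, so Fisher information is unavailable while a Hellinger-based local characteristic remains well defined.

```python
import numpy as np

def h2_uniform(theta, eps):
    """Exact squared Hellinger distance between Uniform(0, theta) and Uniform(0, theta + eps)."""
    return 1.0 - np.sqrt(theta / (theta + eps))

theta = 1.0
for eps in [0.1, 0.01, 0.001]:
    h2 = h2_uniform(theta, eps)
    print(f"eps={eps:7.3f}  H^2={h2:.6f}  H^2/eps={h2 / eps:.4f}  H^2/eps^2={h2 / eps**2:.1f}")
# H^2/eps approaches 1/(2*theta) while H^2/eps^2 blows up: the local behaviour is of
# order |eps|, not eps^2, which is the hallmark of the nonregular case.
```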

12 pages, 4690 KB  
Article
Spatio-Temporal Niche of Sympatric Tufted Deer (Elaphodus cephalophus) and Sambar (Rusa unicolor) Based on Camera Traps in the Gongga Mountain National Nature Reserve, China
by Zhiyuan You, Bigeng Lu, Beibei Du, Wei Liu, Yong Jiang, Guangfa Ruan and Nan Yang
Animals 2022, 12(19), 2694; https://doi.org/10.3390/ani12192694 - 7 Oct 2022
Cited by 10 | Viewed by 3875
Abstract
Clarifying the distribution pattern and overlapping relationship of closely related sympatric species in the spatio-temporal niche is of great significance to the basic theory of community ecology and the integrated management of multi-species habitats in the same landscape. In this study, based on a 9-year dataset (2012–2021) from 493 camera-trap sites in the Gongga Mountain National Nature Reserve, we analyzed the habitat distributions and activity patterns of tufted deer (Elaphodus cephalophus) and sambar (Rusa unicolor). (1) Combined with 235 and 153 valid presence sites for tufted deer and sambar, the MaxEnt model was used to analyze the distribution of the two species based on 11 ecological factors. The distribution areas of the two species were 1038.40 km2 and 692.67 km2, respectively, with an overlapping area of 656.67 km2. Additionally, the overlap indices Schoener's D and the Hellinger-based I were 0.703 and 0.930, respectively. (2) Based on 10,437 and 5203 independent captures of tufted deer and sambar, their daily activity rhythms were calculated using kernel density estimation. The results showed that the daily activity peaks of the two species appeared at dawn and dusk; however, the dawn and dusk activity peaks of tufted deer were later and earlier, respectively, than those of sambar. Our findings reveal the spatio-temporal niche relationship between tufted deer and sambar, contributing to a further understanding of the coexistence mechanism and providing scientific information for effective wildlife conservation in the reserve and other areas on the southeastern edge of the Qinghai–Tibetan Plateau. Full article
(This article belongs to the Special Issue Use of Camera Trap for a Better Wildlife Monitoring and Conservation)
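The two overlap indices reported above have simple closed forms over normalised habitat-suitability surfaces (Warren et al. 2008). The sketch below uses invented suitability scores on a small shared grid; the real computation would use the MaxEnt suitability rasters.

```python
import numpy as np

def niche_overlap(p, q):
    """Schoener's D and the Hellinger-based I statistic from two suitability surfaces."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    p, q = p / p.sum(), q / q.sum()                    # normalise each surface to sum to 1
    D = 1.0 - 0.5 * np.abs(p - q).sum()
    I = 1.0 - 0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2)
    return D, I

# Hypothetical suitability scores on a small shared grid
tufted_deer = np.array([0.8, 0.6, 0.4, 0.1, 0.05])
sambar      = np.array([0.7, 0.5, 0.3, 0.3, 0.10])
print(niche_overlap(tufted_deer, sambar))
```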

21 pages, 4188 KB  
Article
A Multi-Sensor Data-Fusion Method Based on Cloud Model and Improved Evidence Theory
by Xinjian Xiang, Kehan Li, Bingqiang Huang and Ying Cao
Sensors 2022, 22(15), 5902; https://doi.org/10.3390/s22155902 - 7 Aug 2022
Cited by 15 | Viewed by 3935
Abstract
The essential components of information-aware systems are heterogeneous multi-sensor devices. Because of the ambiguous and conflicting nature of multi-sensor data, a data-fusion method based on the cloud model and improved evidence theory is proposed. To complete the conversion from quantitative to qualitative data, the cloud model is employed to construct the basic probability assignment (BPA) function of the evidence corresponding to each data source. To address the issue that traditional evidence theory produces results that do not correspond to the facts when fusing conflicting evidence, three measures, the Jousselme distance, cosine similarity, and the Jaccard coefficient, are combined to measure the similarity of the evidence. The Hellinger distance between intervals is used to calculate the credibility of the evidence. The similarity and credibility are combined to improve the evidence, and the fusion is performed according to Dempster's rule to finally obtain the results. The numerical example results show that the proposed improved evidence theory method has better convergence and focus, and the confidence in the correct proposition reaches up to 100%. Applying the proposed multi-sensor data-fusion method to early indoor fire detection, the method improves accuracy by 0.9–6.4% and reduces the false alarm rate by 0.7–10.2% compared with traditional and other improved evidence theories, proving its validity and feasibility and providing a useful reference for multi-sensor information fusion. Full article
(This article belongs to the Special Issue Data, Signal and Image Processing and Applications in Sensors II)
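Dempster's rule, which the abstract applies after re-weighting the evidence, can be sketched as follows; the BPA values for the two fire sensors are invented for illustration, and the paper's similarity/credibility re-weighting step is omitted.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two basic probability assignments (dicts mapping frozenset
    hypotheses to masses) with Dempster's rule of combination."""
    combined, conflict = {}, 0.0
    for (A, a), (B, b) in product(m1.items(), m2.items()):
        inter = A & B
        if inter:
            combined[inter] = combined.get(inter, 0.0) + a * b
        else:
            conflict += a * b                       # mass assigned to the empty set
    if conflict >= 1.0:
        raise ValueError("Totally conflicting evidence")
    return {A: v / (1.0 - conflict) for A, v in combined.items()}

# Two sensors reporting on the frame {fire, no_fire}
m1 = {frozenset({"fire"}): 0.7, frozenset({"fire", "no_fire"}): 0.3}
m2 = {frozenset({"fire"}): 0.6, frozenset({"no_fire"}): 0.3,
      frozenset({"fire", "no_fire"}): 0.1}
print(dempster_combine(m1, m2))
```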

15 pages, 2199 KB  
Article
Cross-Domain Active Learning for Electronic Nose Drift Compensation
by Fangyu Sun, Ruihong Sun and Jia Yan
Micromachines 2022, 13(8), 1260; https://doi.org/10.3390/mi13081260 - 5 Aug 2022
Cited by 14 | Viewed by 2965
Abstract
The problem of drift in the electronic nose (E-nose) is an important factor in the distortion of data. Existing active learning methods do not take into account the misalignment of the data feature distributions between different domains due to drift when selecting samples. To address this, we propose a cross-domain active learning (CDAL) method based on the Hellinger distance (HD) and maximum mean discrepancy (MMD). In this framework, we weight the HD with the MMD as a criterion for sample selection, which can reflect as much drift information as possible with as few labeled samples as possible. Overall, the CDAL framework has the following advantages: (1) CDAL combines active learning and domain adaptation to better assess the interdomain distribution differences and the amount of information contained in the selected samples. (2) The introduction of a Gaussian kernel mapping aligns the data distributions between domains as closely as possible. (3) The combination of active learning and domain adaptation can significantly suppress the effects of time drift caused by sensor ageing, thus improving the detection accuracy of the electronic nose system for data collected at different times. The results showed that the proposed CDAL method has a better drift compensation effect compared with several recent methodological frameworks. Full article
(This article belongs to the Special Issue Electronic Noses: Principles and Applications)
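A hedged sketch of the MMD ingredient: a biased Gaussian-kernel estimate of squared MMD between two feature sets standing in for pre-drift and drifted E-nose data. The kernel width, feature dimension, and the HD weighting from the CDAL framework are not taken from the paper.

```python
import numpy as np

def gaussian_kernel(X, Y, sigma=1.0):
    d2 = np.sum(X ** 2, 1)[:, None] + np.sum(Y ** 2, 1)[None, :] - 2 * X @ Y.T
    return np.exp(-d2 / (2 * sigma ** 2))

def mmd2(X, Y, sigma=1.0):
    """Biased estimate of squared maximum mean discrepancy with a Gaussian kernel."""
    return (gaussian_kernel(X, X, sigma).mean()
            + gaussian_kernel(Y, Y, sigma).mean()
            - 2 * gaussian_kernel(X, Y, sigma).mean())

rng = np.random.default_rng(0)
source = rng.normal(0.0, 1.0, size=(200, 8))   # stand-in for pre-drift E-nose features
target = rng.normal(0.3, 1.1, size=(200, 8))   # stand-in for drifted features
print(mmd2(source, target, sigma=2.0))
```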

26 pages, 1548 KB  
Article
Discriminant Analysis under f-Divergence Measures
by Anmol Dwivedi, Sihui Wang and Ali Tajer
Entropy 2022, 24(2), 188; https://doi.org/10.3390/e24020188 - 27 Jan 2022
Cited by 6 | Viewed by 3841
Abstract
In statistical inference, the information-theoretic performance limits can often be expressed in terms of a statistical divergence between the underlying statistical models (e.g., in binary hypothesis testing, the error probability is related to the total variation distance between the statistical models). As the data dimension grows, computing the statistics involved in decision-making and the attendant performance limits (divergence measures) face complexity and stability challenges. Dimensionality reduction addresses these challenges at the expense of compromising the performance (the divergence reduces by the data-processing inequality). This paper considers linear dimensionality reduction such that the divergence between the models is maximally preserved. Specifically, this paper focuses on Gaussian models where we investigate discriminant analysis under five f-divergence measures (Kullback–Leibler, symmetrized Kullback–Leibler, Hellinger, total variation, and χ2). We characterize the optimal design of the linear transformation of the data onto a lower-dimensional subspace for zero-mean Gaussian models and employ numerical algorithms to find the design for general Gaussian models with non-zero means. There are two key observations for zero-mean Gaussian models. First, projections are not necessarily along the largest modes of the covariance matrix of the data, and, in some situations, they can even be along the smallest modes. Secondly, under specific regimes, the optimal design of subspace projection is identical under all the f-divergence measures considered, rendering a degree of universality to the design, independent of the inference problem of interest. Full article
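The quantity preserved under a linear map can be illustrated for zero-mean Gaussians with the closed-form KL divergence: the covariances and the projection below are random stand-ins, and the data-processing inequality guarantees the reduced-dimension value never exceeds the full-dimension one. This shows only the evaluation step, not the paper's optimal-design procedure.

```python
import numpy as np

def kl_zero_mean_gauss(S0, S1):
    """KL( N(0, S0) || N(0, S1) ) for positive-definite covariance matrices."""
    k = S0.shape[0]
    sol = np.linalg.solve(S1, S0)
    return 0.5 * (np.trace(sol) - k + np.log(np.linalg.det(S1) / np.linalg.det(S0)))

rng = np.random.default_rng(1)
n, r = 6, 2
M0, M1 = rng.normal(size=(n, n)), rng.normal(size=(n, n))
S0, S1 = M0 @ M0.T + np.eye(n), M1 @ M1.T + np.eye(n)

A = rng.normal(size=(r, n))                    # a candidate linear projection
full = kl_zero_mean_gauss(S0, S1)
reduced = kl_zero_mean_gauss(A @ S0 @ A.T, A @ S1 @ A.T)
print(full, reduced)                           # reduced <= full by data processing
```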

12 pages, 288 KB  
Article
Update of Prior Probabilities by Minimal Divergence
by Jan Naudts
Entropy 2021, 23(12), 1668; https://doi.org/10.3390/e23121668 - 11 Dec 2021
Cited by 1 | Viewed by 2138
Abstract
The present paper investigates the update of an empirical probability distribution with the results of a new set of observations. The update reproduces the new observations and interpolates using prior information. The optimal update is obtained by minimizing either the Hellinger distance or the quadratic Bregman divergence. The results obtained by the two methods differ. Updates with information about conditional probabilities are considered as well. Full article
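A toy sketch of the general idea, assuming a discrete joint distribution and a single marginal constraint supplied by the new observations; the prior, the constraint value, and the use of a generic solver are all illustrative and not the paper's formulation (which also treats the quadratic Bregman divergence and conditional-probability constraints).

```python
import numpy as np
from scipy.optimize import minimize

prior = np.array([0.25, 0.25, 0.25, 0.25])     # joint over (X, Y) in {0,1}^2, flattened
x_is_0 = np.array([1, 1, 0, 0])                # indicator of X = 0
observed_px0 = 0.7                             # new observation: P(X = 0) = 0.7

def sq_hellinger(q):
    """Squared Hellinger distance from the candidate update q to the prior."""
    return 0.5 * np.sum((np.sqrt(q) - np.sqrt(prior)) ** 2)

cons = [{"type": "eq", "fun": lambda q: q.sum() - 1.0},
        {"type": "eq", "fun": lambda q: q @ x_is_0 - observed_px0}]
res = minimize(sq_hellinger, prior, bounds=[(1e-12, 1)] * 4, constraints=cons)
print(res.x.reshape(2, 2))                     # updated joint reproducing the new marginal
```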

121 pages, 1378 KB  
Article
Some Dissimilarity Measures of Branching Processes and Optimal Decision Making in the Presence of Potential Pandemics
by Niels B. Kammerer and Wolfgang Stummer
Entropy 2020, 22(8), 874; https://doi.org/10.3390/e22080874 - 8 Aug 2020
Cited by 3 | Viewed by 3855
Abstract
We compute exact values and bounds, respectively, of dissimilarity/distinguishability measures, in the sense of the Kullback–Leibler information distance (relative entropy) and some transforms of more general power divergences and Rényi divergences, between two competing discrete-time Galton–Watson branching processes with immigration (GWI) for which both the offspring and the immigration (importation) are arbitrarily Poisson-distributed; in particular, we allow for an arbitrary type of extinction-related criticality and thus for non-stationarity. We apply this to optimal decision making in the context of the spread of potentially pandemic infectious diseases (such as the current COVID-19 pandemic), for example covering different levels of dangerousness and different kinds of intervention/mitigation strategies. Asymptotic distinguishability behaviour and diffusion limits are investigated, too. Full article
