
Search Results (9)

Search Parameters:
Keywords = chernoff information

18 pages, 495 KiB  
Article
Performance Analysis of Maximum Likelihood Detection in Cooperative DF MIMO Systems with One-Bit ADCs
by Tae-Kyoung Kim
Mathematics 2025, 13(15), 2361; https://doi.org/10.3390/math13152361 - 23 Jul 2025
Viewed by 230
Abstract
This paper investigates the error performance of cooperative decode-and-forward (DF) multiple-input multiple-output (MIMO) systems employing one-bit analog-to-digital converters (ADCs) over Rayleigh fading channels. In cooperative DF MIMO systems, detection errors at the relay may propagate to the destination, degrading overall detection performance. Although joint maximum likelihood detection can efficiently mitigate error propagation by leveraging probabilistic information from the source-to-relay link, its computational complexity is prohibitive in practice. To address this issue, an approximate maximum likelihood (AML) detection scheme is introduced, which significantly reduces complexity while maintaining reliable performance. Its analysis under one-bit ADCs is nevertheless challenging because of the quantizer's nonlinearity. The main contributions of this paper are as follows: (1) a tractable upper bound on the pairwise error probability (PEP) of the AML detector is derived using Jensen's inequality and the Chernoff bound; (2) the asymptotic behavior of the PEP is analyzed to reveal the achievable diversity gain; (3) the analysis shows that full diversity is attained only when symbol pairs in the PEP satisfy a sign-inverted condition and the relay correctly decodes the source symbol; and (4) simulation results verify the accuracy and effectiveness of the theoretical analysis.
(This article belongs to the Special Issue Computational Methods in Wireless Communication)
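As a generic illustration of the bounding technique named in the abstract above, the sketch below applies the Chernoff bound Q(x) ≤ exp(−x²/2) to the pairwise error probability of BPSK over i.i.d. Rayleigh branches with maximum-ratio combining. This is a hypothetical toy model, not the paper's one-bit-ADC AML detector; all parameter values are illustrative.

```python
import math
import random

def q_func(x):
    # Gaussian tail probability Q(x) = P(N(0,1) > x).
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def pep_monte_carlo(snr_avg, n_branches, trials=100_000, seed=1):
    # Average PEP E[Q(sqrt(2*gamma))] for BPSK over i.i.d. Rayleigh
    # branches with maximum-ratio combining: gamma is a sum of
    # n_branches exponential variates, each with mean snr_avg.
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        gamma = sum(-snr_avg * math.log(1.0 - rng.random())
                    for _ in range(n_branches))
        total += q_func(math.sqrt(2.0 * gamma))
    return total / trials

def pep_chernoff(snr_avg, n_branches):
    # Chernoff bound Q(sqrt(2g)) <= exp(-g); averaging over the fading
    # gives E[exp(-gamma)] = (1 + snr_avg)^(-L).  The SNR exponent L is
    # the diversity order, which the bound makes explicit.
    return (1.0 + snr_avg) ** (-n_branches)

snr = 10.0  # average per-branch SNR, linear scale (illustrative value)
for branches in (1, 2):
    assert pep_monte_carlo(snr, branches) <= pep_chernoff(snr, branches)
```

The closed-form bound decays as SNR^(−L), so its log-log slope reads off the diversity gain directly, which is the same role the Chernoff bound plays in the paper's PEP analysis.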

23 pages, 720 KiB  
Article
Calibrating the Attack to Sensitivity in Differentially Private Mechanisms
by Ayşe Ünsal and Melek Önen
J. Cybersecur. Priv. 2022, 2(4), 830-852; https://doi.org/10.3390/jcp2040042 - 18 Oct 2022
Cited by 3 | Viewed by 2809
Abstract
This work studies the power of adversarial attacks against machine learning algorithms that rely on differentially private mechanisms. In our setting, the adversary aims to modify the content of a statistical dataset by inserting additional data without being detected, turning differential privacy to his or her own benefit. The goal of this study is to evaluate, using both statistical and information-theoretic tools, how easy it is to detect such attacks (anomalies) when the adversary employs Gaussian or Laplacian perturbation. To this end, we first characterize, via hypothesis testing, statistical thresholds for the adversary in various settings that balance the privacy budget against the impact of the attack (the modification applied to the original data) so as to avoid detection. In addition, we establish the privacy-distortion trade-off, in the sense of the well-known rate-distortion function, for the Gaussian mechanism by using an information-theoretic approach. Accordingly, we derive an upper bound on the variance of the attacker's additional data as a function of the sensitivity and the original data's second-order statistics. Lastly, we introduce a new privacy metric based on Chernoff information for anomaly detection under differential privacy, as a stronger alternative to (ϵ,δ)-differential privacy for Gaussian mechanisms. Analytical results are supported by numerical evaluations.
(This article belongs to the Section Privacy)
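The Gaussian mechanism discussed above is calibrated to the query's sensitivity. The following minimal sketch uses the classic analytic calibration σ = √(2 ln(1.25/δ))·Δ/ε (valid for ε < 1); it shows only the mechanism itself, not the paper's attack or detection procedures, and the released statistic is a made-up example.

```python
import math
import random

def gaussian_sigma(sensitivity, epsilon, delta):
    # Classic calibration for the Gaussian mechanism (epsilon < 1):
    # sigma >= sqrt(2 ln(1.25/delta)) * sensitivity / epsilon.
    return math.sqrt(2.0 * math.log(1.25 / delta)) * sensitivity / epsilon

def gaussian_mechanism(true_value, sensitivity, epsilon, delta, rng):
    # Release a noisy scalar statistic satisfying (epsilon, delta)-DP.
    sigma = gaussian_sigma(sensitivity, epsilon, delta)
    return true_value + rng.gauss(0.0, sigma)

rng = random.Random(0)
# Hypothetical query: a count with sensitivity 1, privacy budget (0.5, 1e-5).
noisy_count = gaussian_mechanism(10.0, 1.0, 0.5, 1e-5, rng)
```

The attacker's room to maneuver in the paper is tied to exactly this σ: the larger the calibrated noise, the more insertion the perturbation can mask.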

35 pages, 988 KiB  
Article
Revisiting Chernoff Information with Likelihood Ratio Exponential Families
by Frank Nielsen
Entropy 2022, 24(10), 1400; https://doi.org/10.3390/e24101400 - 1 Oct 2022
Cited by 13 | Viewed by 5991
Abstract
The Chernoff information between two probability measures is a statistical divergence measuring their deviation, defined as their maximally skewed Bhattacharyya distance. Although the Chernoff information was originally introduced for bounding the Bayes error in statistical hypothesis testing, the divergence has found many other applications, ranging from information fusion to quantum information, owing to its empirical robustness. From the viewpoint of information theory, the Chernoff information can also be interpreted as a minmax symmetrization of the Kullback–Leibler divergence. In this paper, we first revisit the Chernoff information between two densities of a measurable Lebesgue space by considering the exponential families induced by their geometric mixtures: the so-called likelihood ratio exponential families. Second, we show how to (i) solve exactly the Chernoff information between any two univariate Gaussian distributions, or obtain a closed-form formula using symbolic computing; (ii) report a closed-form formula for the Chernoff information of centered Gaussians with scaled covariance matrices; and (iii) use a fast numerical scheme to approximate the Chernoff information between any two multivariate Gaussian distributions.
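The "maximally skewed Bhattacharyya distance" definition can be checked numerically. The sketch below uses plain grid quadrature and a grid search over the skew parameter α, not the paper's exact or symbolic methods; for equal-variance univariate Gaussians the optimum is known to be α = 1/2 with C = (μ₁ − μ₂)²/(8σ²).

```python
import numpy as np

def skewed_bhattacharyya(p, q, x, alpha):
    # D_alpha(p:q) = -log ∫ p(x)^alpha q(x)^(1-alpha) dx, via a Riemann sum
    # on a fixed grid x (assumed wide enough to capture both densities).
    dx = x[1] - x[0]
    integral = float(np.sum(p(x) ** alpha * q(x) ** (1.0 - alpha)) * dx)
    return -np.log(integral)

def chernoff_information(p, q, x, n_alpha=999):
    # Chernoff information = max over alpha in (0,1) of the skewed
    # Bhattacharyya distance, found here by a plain grid search.
    alphas = np.linspace(1e-3, 1.0 - 1e-3, n_alpha)
    values = [skewed_bhattacharyya(p, q, x, a) for a in alphas]
    i = int(np.argmax(values))
    return values[i], float(alphas[i])

def gaussian_pdf(mu, sigma):
    return lambda x: np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

# N(0,1) vs N(2,1): expected optimum alpha = 1/2, C = 4/8 = 0.5 nats.
x = np.linspace(-15.0, 15.0, 20001)
c, a_star = chernoff_information(gaussian_pdf(0.0, 1.0), gaussian_pdf(2.0, 1.0), x)
```

For unequal variances the maximizing α moves away from 1/2, which is the asymmetry the paper's exact and fast numerical schemes handle.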

11 pages, 356 KiB  
Article
Amplification, Inference, and the Manifestation of Objective Classical Information
by Michael Zwolak
Entropy 2022, 24(6), 781; https://doi.org/10.3390/e24060781 - 1 Jun 2022
Cited by 6 | Viewed by 2216
Abstract
Our everyday reality is characterized by objective information: information that is selected and amplified by the environment that interacts with quantum systems. Many observers can accurately infer that information indirectly by making measurements on fragments of the environment. The correlations between the system, S, and a fragment, F, of the environment, E, are often quantified by the quantum mutual information or the Holevo quantity, which bounds the classical information about S transmittable by a quantum channel F. The latter is a quantum mutual information, but of a classical-quantum state in which measurement has selected outcomes on S. The measurement generically reflects the influence of the remaining environment, E/F, but can also reflect hypothetical questions posed to deduce the structure of S–F correlations. Recently, Touil et al. examined a different Holevo quantity, one from a quantum-classical state (a quantum S to a measured F). As shown here, this quantity upper bounds any accessible classical information about S in F and can yield a tighter bound than the typical Holevo quantity. When good decoherence is present, i.e., when the remaining environment, E/F, has effectively measured the pointer states of S, this accessibility bound is the accessible information. For the specific model of Touil et al., the accessible information is related to the error probability for optimal detection and, thus, has the same behavior as the quantum Chernoff bound. The latter reflects amplification and provides a universal approach, as well as a single-shot framework, to quantify records of the missing, classical information about S.
(This article belongs to the Special Issue Quantum Darwinism and Friends)
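The Holevo quantity χ = S(Σᵢ pᵢρᵢ) − Σᵢ pᵢS(ρᵢ) that the abstract builds on is straightforward to evaluate numerically for small ensembles. The following is a toy sketch for a two-state qubit ensemble, unrelated to the specific model of Touil et al.; entropies are in nats.

```python
import numpy as np

def von_neumann_entropy(rho):
    # S(rho) = -Tr[rho log rho], computed from the eigenvalues.
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]  # 0 log 0 = 0 by convention
    return float(-np.sum(evals * np.log(evals)))

def holevo_chi(probs, states):
    # chi = S(average state) - average component entropy: an upper bound
    # on the classical information about the label extractable by any POVM.
    avg = sum(p * rho for p, rho in zip(probs, states))
    return von_neumann_entropy(avg) - sum(
        p * von_neumann_entropy(r) for p, r in zip(probs, states))

# Two orthogonal pure qubit states, equal priors: one full bit, chi = ln 2.
ket0 = np.array([[1.0], [0.0]])
ket1 = np.array([[0.0], [1.0]])
rho0 = ket0 @ ket0.T
rho1 = ket1 @ ket1.T
chi = holevo_chi([0.5, 0.5], [rho0, rho1])
```

Replacing the orthogonal states with overlapping ones drives χ below ln 2, mirroring how the paper's quantum-classical Holevo quantity tightens the bound on accessible information.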

22 pages, 35935 KiB  
Article
Image Retrieval via Canonical Correlation Analysis and Binary Hypothesis Testing
by Kangdi Shi, Xiaohong Liu, Muhammad Alrabeiah, Xintong Guo, Jie Lin, Huan Liu and Jun Chen
Information 2022, 13(3), 106; https://doi.org/10.3390/info13030106 - 23 Feb 2022
Viewed by 2885
Abstract
Canonical Correlation Analysis (CCA) is a classic multivariate statistical technique that can be used to find a projection pair maximally capturing the correlation between two sets of random variables. The present paper introduces a CCA-based approach for image retrieval. It capitalizes on feature maps induced by two images under comparison through a pre-trained Convolutional Neural Network (CNN) and leverages basis vectors identified through CCA, together with an element-wise selection method based on a Chernoff-information-related criterion, to produce compact transformed image features; a binary hypothesis test regarding the joint distribution of the transformed feature pair is then employed to measure the similarity between two images. The proposed approach is benchmarked against two alternative statistical methods, Linear Discriminant Analysis (LDA) and Principal Component Analysis with whitening (PCAw). Our CCA-based approach is shown to achieve highly competitive retrieval performance on standard datasets, including, among others, Oxford5k and Paris6k.
(This article belongs to the Special Issue Image Enhancement with Deep Learning Techniques)
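Plain CCA, the core ingredient above, can be computed as the SVD of the whitened cross-covariance matrix. The sketch below runs on synthetic data, not CNN feature maps, and omits the paper's Chernoff-information-based selection and hypothesis test; the small ridge term `eps` is an assumption added for numerical stability.

```python
import numpy as np

def cca(X, Y, eps=1e-8):
    # Canonical correlations of samples X (n x dx) and Y (n x dy):
    # singular values of Cxx^(-1/2) Cxy Cyy^(-1/2), in descending order.
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    n = X.shape[0]
    Cxx = X.T @ X / (n - 1) + eps * np.eye(X.shape[1])
    Cyy = Y.T @ Y / (n - 1) + eps * np.eye(Y.shape[1])
    Cxy = X.T @ Y / (n - 1)

    def inv_sqrt(C):
        # Inverse matrix square root via eigendecomposition (C is SPD).
        w, V = np.linalg.eigh(C)
        return V @ np.diag(1.0 / np.sqrt(w)) @ V.T

    M = inv_sqrt(Cxx) @ Cxy @ inv_sqrt(Cyy)
    return np.linalg.svd(M, compute_uv=False)

# Two noisy views of the same 2-D latent signal: correlations near 1.
rng = np.random.default_rng(0)
Z = rng.standard_normal((500, 2))
X = Z + 0.01 * rng.standard_normal((500, 2))
Y = -Z + 0.01 * rng.standard_normal((500, 2))
corrs = cca(X, Y)
```

In the retrieval setting, the left/right singular vectors of M (not returned here) would supply the projection pair applied to the two images' features.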

17 pages, 4477 KiB  
Communication
Performance Enhancement of DWDM-FSO Optical Fiber Communication Systems Based on Hybrid Modulation Techniques under Atmospheric Turbulence Channel
by Mohammed R. Hayal, Bedir B. Yousif and Mohamed A. Azim
Photonics 2021, 8(11), 464; https://doi.org/10.3390/photonics8110464 - 22 Oct 2021
Cited by 36 | Viewed by 4128
Abstract
In this paper, we enhance the performance efficiency of the free-space optical (FSO) communication link using hybrid on-off keying (OOK) modulation, M-ary digital pulse position modulation (M-ary DPPM), and M-pulse amplitude and position modulation (M-PAPM). This work analyzes and enhances the bit error rate (BER) performance using the moment generating function, the modified Chernoff bound, and Gaussian approximation techniques. In the presence of amplified spontaneous emission (ASE) noise, atmospheric turbulence (AT) channels, and interchannel crosstalk (ICC), we propose a system model of the passive optical network (PON) wavelength division multiplexing (WDM) technique for dense WDM (DWDM) based on a hybrid fiber-FSO (HFFSO) link. Eight wavelength channels, each carrying 2.5 Gbps, are transmitted over the turbulent HFFSO-DWDM system and PON-FSO optical fiber, starting from 1550 nm with a channel spacing of 100 GHz in the C-band. The results demonstrate 20 Gbit/s (2.5 Gbps × 8 channels) transmission over 4000 m with favorable performance. In this design, M-ary DPPM-M-PAPM modulation is used to provide extra information bits to increase performance. We also propose incorporating adaptive optics to mitigate the AT effect and improve the modulation efficiency. We investigate the impact of the turbulence effect on the proposed system based on OOK-M-ary PAPM-DPPM modulation as a function of the M-ary DPPM-PAPM parameters and other atmospheric parameters. The proposed hybrid M-ary DPPM-M-PAPM solution increases the receiver sensitivity compared to OOK, improves reliability, and achieves a lower power penalty of 0.2–3.0 dB at a low coding level (M = 2) in the WDM-FSO systems for weak turbulence. The OOK/M-ary hybrid DPPM-M-PAPM provides an optical signal-to-noise ratio of about 4–8 dB for the DWDM-HFFSO link under strong turbulence at a target BER of 10⁻¹². The numerical results indicate that the proposed design can be enhanced with the hybrid OOK/M-DPPM and M-PAPM for DWDM-HFFSO systems, and the calculations show that PAPM-DPPM gains about 10–11 dB at a BER of 10⁻¹² over the OOK-NRZ approach. The simulation results show that the proposed hybrid optical modulation technique can be used in DWDM-FSO hybrid links for optical-wireless and fiber-optic communication systems, significantly increasing their efficiency. Finally, the hybrid OOK/M-ary DPPM-M-PAPM modulation schemes are a new technique to mitigate AT, ICC, and ASE noise in DWDM-FSO optical fiber communication systems.
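The Chernoff-type bounding idea used in such BER analyses can be shown in miniature: for OOK over pure AWGN with amplitude A, noise σ, and a midpoint threshold, the exact BER is Q(√SNR/2) with SNR = (A/σ)², and Q(x) ≤ ½exp(−x²/2) gives a closed-form upper bound. This toy sketch ignores turbulence, crosstalk, ASE, and the paper's modified Chernoff bound; all parameter values are illustrative.

```python
import math

def q_func(x):
    # Gaussian tail probability Q(x) = P(N(0,1) > x).
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def ber_ook(snr):
    # OOK over AWGN, equiprobable symbols, threshold at A/2:
    # BER = Q(A / (2 sigma)) = Q(sqrt(snr) / 2), with snr = (A/sigma)^2.
    return q_func(math.sqrt(snr) / 2.0)

def ber_ook_chernoff(snr):
    # Chernoff-type bound Q(x) <= (1/2) exp(-x^2 / 2) applied to the BER.
    return 0.5 * math.exp(-snr / 8.0)

# The bound dominates the exact BER at every SNR.
for snr_db in (6.0, 10.0, 14.0):
    snr = 10.0 ** (snr_db / 10.0)
    assert ber_ook(snr) <= ber_ook_chernoff(snr)
```

The appeal of such bounds in system studies is that they stay tractable once fading, crosstalk, and ASE statistics are folded into the moment generating function, where the exact Q-function average is not.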

21 pages, 678 KiB  
Article
Approximations of Shannon Mutual Information for Discrete Variables with Applications to Neural Population Coding
by Wentao Huang and Kechen Zhang
Entropy 2019, 21(3), 243; https://doi.org/10.3390/e21030243 - 4 Mar 2019
Cited by 3 | Viewed by 4680
Abstract
Although Shannon mutual information has been widely used, its effective calculation is often difficult for many practical problems, including those in neural population coding. Asymptotic formulas based on Fisher information sometimes provide accurate approximations to the mutual information, but this approach is restricted to continuous variables because the calculation of Fisher information requires derivatives with respect to the encoded variables. In this paper, we consider information-theoretic bounds and approximations of the mutual information based on Kullback-Leibler divergence and Rényi divergence. We propose several information metrics to approximate Shannon mutual information in the context of neural population coding. While our asymptotic formulas all work for discrete variables, one of them has consistent performance and high accuracy regardless of whether the encoded variables are discrete or continuous. We performed numerical simulations and confirmed that our approximation formulas were highly accurate for approximating the mutual information between the stimuli and the responses of a large neural population. These approximation formulas may prove convenient for applying information theory to many practical and theoretical problems.
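The quantity being approximated above is the exact Shannon mutual information, which is only tractable to compute directly for small discrete systems. As context for why approximations are needed, here is a minimal sketch of that exact computation from a joint stimulus-response table (illustrative; not one of the paper's approximation formulas):

```python
import numpy as np

def mutual_information(joint):
    # Exact Shannon MI (in nats) of a discrete joint distribution P(s, r):
    # I = sum_{s,r} P(s,r) log[ P(s,r) / (P(s) P(r)) ].
    joint = np.asarray(joint, dtype=float)
    joint = joint / joint.sum()
    ps = joint.sum(axis=1, keepdims=True)  # stimulus marginal, column
    pr = joint.sum(axis=0, keepdims=True)  # response marginal, row
    mask = joint > 0                       # 0 log 0 = 0 by convention
    return float(np.sum(joint[mask] * np.log(joint[mask] / (ps @ pr)[mask])))

# Independent stimulus and response: MI = 0.  A deterministic one-to-one
# code with K = 4 uniform stimuli: MI = log K.
indep = np.outer([0.5, 0.5], [0.25, 0.75])
diag = np.eye(4) / 4.0
mi_indep = mutual_information(indep)
mi_diag = mutual_information(diag)
```

The double sum grows with the product of stimulus and response alphabet sizes, which is exactly what becomes infeasible for large neural populations and motivates the divergence-based approximations.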

22 pages, 569 KiB  
Article
Computational Information Geometry for Binary Classification of High-Dimensional Random Tensors
by Gia-Thuy Pham, Rémy Boyer and Frank Nielsen
Entropy 2018, 20(3), 203; https://doi.org/10.3390/e20030203 - 17 Mar 2018
Cited by 3 | Viewed by 4453
Abstract
Evaluating the performance of Bayesian classification in a high-dimensional random tensor is a fundamental, usually difficult and under-studied problem. In this work, we consider two Signal-to-Noise Ratio (SNR)-based binary classification problems of interest. Under the alternative hypothesis, i.e., for a non-zero SNR, the observed signals are either a noisy rank-R tensor admitting a Q-order Canonical Polyadic Decomposition (CPD) with large factors of size N_q × R, 1 ≤ q ≤ Q, where R, N_q → ∞ with R^(1/q)/N_q converging towards a finite constant, or a noisy tensor admitting a Tucker Decomposition (TKD) of multilinear rank (M_1, …, M_Q) with large factors of size N_q × M_q, 1 ≤ q ≤ Q, where N_q, M_q → ∞ with M_q/N_q converging towards a finite constant. The classification of the random entries (coefficients) of the core tensor in the CPD/TKD is hard to study since the exact derivation of the minimal Bayes error probability is mathematically intractable. To circumvent this difficulty, the Chernoff Upper Bound (CUB) for larger SNR and the Fisher information at low SNR are derived and studied, based on information geometry theory. The tightest CUB is reached for the value s⋆ minimizing the error exponent. In general, due to the asymmetry of the s-divergence, the Bhattacharyya Upper Bound (BUB), that is, the Chernoff information calculated at s = 1/2, cannot solve this problem effectively, so one must rely on a costly numerical optimization strategy to find s⋆. However, thanks to powerful random matrix theory tools, a simple analytical expression for s⋆ is provided with respect to the SNR in the two schemes considered. This work shows that the BUB is the tightest bound at low SNRs; at higher SNRs, however, this property no longer holds.
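The gap between the tightest Chernoff Upper Bound and the Bhattacharyya Upper Bound can be made concrete with a toy example. The sketch below grid-searches s⋆ for a pair of asymmetric one-dimensional densities (two exponentials, chosen purely for illustration, not the paper's tensor models): the minimizer sits visibly away from s = 1/2, so the CUB at s⋆ is strictly tighter than the BUB.

```python
import numpy as np

def chernoff_coefficient(p, q, x, s):
    # c(s) = ∫ p(x)^s q(x)^(1-s) dx, via a Riemann sum on the grid x.
    dx = x[1] - x[0]
    return float(np.sum(p(x) ** s * q(x) ** (1.0 - s)) * dx)

def tightest_cub(p, q, x, n_s=999):
    # Chernoff Upper Bound on the Bayes error (equal priors):
    # (1/2) min_s c(s).  The minimizer s* need not be 1/2 when the
    # s-divergence between p and q is asymmetric.
    ss = np.linspace(1e-3, 1.0 - 1e-3, n_s)
    cs = [chernoff_coefficient(p, q, x, s) for s in ss]
    i = int(np.argmin(cs))
    return 0.5 * cs[i], float(ss[i])

def expo(lam):
    return lambda x: lam * np.exp(-lam * x)

x = np.linspace(0.0, 40.0, 40001)
cub, s_star = tightest_cub(expo(1.0), expo(4.0), x)
bub = 0.5 * chernoff_coefficient(expo(1.0), expo(4.0), x, 0.5)
# Analytically, s* = (4 - 3/ln 4) / 3 ≈ 0.612 here, not 1/2.
```

In the paper the analogous one-dimensional minimization is avoided altogether: random matrix theory yields an analytical expression for s⋆ as a function of the SNR.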

17 pages, 325 KiB  
Article
Estimating Mixture Entropy with Pairwise Distances
by Artemy Kolchinsky and Brendan D. Tracey
Entropy 2017, 19(7), 361; https://doi.org/10.3390/e19070361 - 14 Jul 2017
Cited by 96 | Viewed by 10021 | Correction
Abstract
Mixture distributions arise in many parametric and non-parametric settings, for example, in Gaussian mixture models and in non-parametric estimation. It is often necessary to compute the entropy of a mixture, but, in most cases, this quantity has no closed-form expression, making some form of approximation necessary. We propose a family of estimators based on a pairwise distance function between mixture components and show that this estimator class has many attractive properties. For many distributions of interest, the proposed estimators are efficient to compute, differentiable in the mixture parameters, and become exact when the mixture components are clustered. We prove that this family includes lower and upper bounds on the mixture entropy. The Chernoff α-divergence gives a lower bound when chosen as the distance function, with the Bhattacharyya distance providing the tightest lower bound for components that are symmetric and members of a location family. The Kullback–Leibler divergence gives an upper bound when used as the distance function. We provide closed-form expressions of these bounds for mixtures of Gaussians and discuss their applications to the estimation of mutual information. We then use numerical simulations to demonstrate that our bounds are significantly tighter than well-known existing bounds. This estimator class is very useful in optimization problems involving maximization/minimization of entropy and mutual information, such as MaxEnt and rate-distortion problems.
(This article belongs to the Special Issue Information Theory in Machine Learning and Data Science)
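One common form of such a pairwise-distance estimator is Ĥ_D = Σᵢ wᵢ H(pᵢ) − Σᵢ wᵢ ln Σⱼ wⱼ exp(−D(pᵢ, pⱼ)); my reading of the family described above, sketched here for a 1-D Gaussian mixture with closed-form KL (upper bound) and Bhattacharyya (lower bound) distances. Details are simplified and the specific estimator form is an assumption, not a quote from the paper.

```python
import math

def gauss_entropy(s):
    # Differential entropy of N(mu, s^2), in nats.
    return 0.5 * math.log(2.0 * math.pi * math.e * s * s)

def kl_gauss(m1, s1, m2, s2):
    # KL( N(m1,s1^2) || N(m2,s2^2) ), closed form.
    return math.log(s2 / s1) + (s1 * s1 + (m1 - m2) ** 2) / (2.0 * s2 * s2) - 0.5

def bhattacharyya_gauss(m1, s1, m2, s2):
    # Bhattacharyya distance between two univariate Gaussians.
    v = s1 * s1 + s2 * s2
    return 0.25 * (m1 - m2) ** 2 / v + 0.5 * math.log(v / (2.0 * s1 * s2))

def pairwise_entropy_estimate(weights, mus, sigmas, dist):
    # H_hat = sum_i w_i H(p_i) - sum_i w_i ln sum_j w_j exp(-D(p_i, p_j)).
    # Exact for identical components and for well-separated ones.
    h = sum(w * gauss_entropy(s) for w, s in zip(weights, sigmas))
    for wi, mi, si in zip(weights, mus, sigmas):
        inner = sum(wj * math.exp(-dist(mi, si, mj, sj))
                    for wj, mj, sj in zip(weights, mus, sigmas))
        h -= wi * math.log(inner)
    return h

# Two well-separated unit-variance components: true entropy is close to
# the component entropy plus ln 2, and the bounds should bracket it.
w, mu, sg = [0.5, 0.5], [0.0, 10.0], [1.0, 1.0]
lower = pairwise_entropy_estimate(w, mu, sg, bhattacharyya_gauss)
upper = pairwise_entropy_estimate(w, mu, sg, kl_gauss)
```

Because both distances are differentiable in the component parameters, so are the resulting bounds, which is what makes them usable inside MaxEnt-style optimization loops.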
