Some Properties of Weighted Tsallis and Kaniadakis Divergences

We are concerned with the weighted Tsallis and Kaniadakis divergences between two measures. More precisely, we establish inequalities between these divergences and the Tsallis and Kaniadakis logarithms, prove that they satisfy bounds similar to those satisfied by the Kullback–Leibler divergence, and show that they are pseudo-additive.


Introduction
Shannon entropy, in the form we know it today, was introduced by Boltzmann and was later used by Shannon in the context of Information Theory. This entropy has applications in Statistical Thermodynamics, Combinatorics and Machine Learning; in Machine Learning, Shannon entropy is the basis for building decision trees and fitting classification models.
In recent years, many generalizations of Shannon entropy have appeared: Tsallis entropy, Kaniadakis entropy, Rényi entropy, Varma entropy, weighted entropy, relative entropy, cumulative entropy, etc. These entropies have applications in areas such as Physics, Information Theory, Probability Theory, Communication Theory, and Statistics.
S. Kullback and R.A. Leibler were concerned with "measuring" the "distance", or "divergence", between statistical populations, and they generalized Shannon entropy by defining, in [22], a nonsymmetric measure called the Kullback–Leibler divergence. For two probability measures $\mu_1$ and $\mu_2$ and a measurable non-negligible set $A$, this divergence is additive, non-negative and, by the log-sum inequality, greater than $\mu_1(A)\log\frac{\mu_1(A)}{\mu_2(A)}$, where $\log$ is the classical logarithm function. Divergences are a key tool in Information Geometry (see [23]). The goodness-of-fit test of [24] is based on the Corrected Weighted Kullback–Leibler divergence and, as a consequence, it inherits all the special characteristics of this divergence measure. Nawrocki and Harding proposed the use of weighted entropy as a measure of investment risk (see [25]). Afterwards, Guiaşu used the weighted entropy to group data with respect to the importance of specific regions of the domain (see [26]), Di Crescenzo and Longobardi proposed the weighted residual and past entropies (see [27]), and Suhov and Zohren proposed the quantum version of weighted entropy and studied its properties in Quantum Statistical Mechanics (see [28]).
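The log-sum lower bound for the Kullback–Leibler divergence restricted to a set $A$, namely $\mu_1(A)\log\frac{\mu_1(A)}{\mu_2(A)}$, is easy to check numerically. The following minimal sketch uses hypothetical discrete mass functions (none of the numbers come from the paper):

```python
import math

# Hypothetical mass functions of two probability measures on {0, 1, 2, 3}
# and a non-negligible subset A (all values are illustrative).
p = [0.1, 0.4, 0.3, 0.2]
q = [0.25, 0.25, 0.25, 0.25]
A = [1, 2]

# Kullback-Leibler divergence restricted to A: sum over A of p_i log(p_i / q_i)
kl_on_A = sum(p[i] * math.log(p[i] / q[i]) for i in A)

# Log-sum inequality lower bound: mu1(A) log(mu1(A) / mu2(A))
mu1A = sum(p[i] for i in A)
mu2A = sum(q[i] for i in A)
bound = mu1A * math.log(mu1A / mu2A)

print(kl_on_A >= bound)  # → True
```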
Working with the Kullback–Leibler divergence formula and using the same technique as in the cases of the Tsallis and Kaniadakis entropies (i.e., the classical logarithm is replaced by the Tsallis logarithm, respectively by the Kaniadakis logarithm), Tsallis and Kaniadakis divergences were introduced in several papers (see [29][30][31][32]).
Motivated by the aforementioned facts and by the papers [33][34][35], we deal with the weighted Tsallis and Kaniadakis divergences in this article.
In the following, we briefly describe the structure of the paper. Section 2 is dedicated to preliminaries. In Section 3, using some inequalities concerning the Tsallis logarithm, we obtain inequalities between the weighted Tsallis and Kaniadakis divergences on an arbitrary non-negligible measurable set and the Tsallis logarithm, respectively the Kaniadakis logarithm (see Theorem 1); we also prove that the weighted Tsallis and Kaniadakis divergences are limited by bounds similar to those that limit the Kullback–Leibler divergence (see Theorem 2). In Section 4, we define the weighted Tsallis and Kaniadakis divergences for product measure spaces and prove some pseudo-additivity properties for them (see Theorem 3).

Preliminary Facts
Definition 1. Let $k \in \mathbb{R}^* = \mathbb{R} \setminus \{0\}$. We consider the Tsallis logarithm given by
$$\log_k^T x = \frac{x^k - 1}{k}, \quad x > 0,$$
and the Kaniadakis logarithm given via
$$\log_k^K x = \frac{x^k - x^{-k}}{2k}, \quad x > 0.$$

Remark 1.
It is easy to see that $\log_k^K x = \frac{1}{2}\left(\log_k^T x + \log_{-k}^T x\right)$ for any $x > 0$. We have $\lim_{k \to 0} \log_k^T x = \lim_{k \to 0} \log_k^K x = \log x$ for any $x > 0$ ("$\log$" is the classical logarithm function).
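The identity and the limit in Remark 1 can be verified numerically. The sketch below assumes the standard expressions $\log_k^T x = (x^k - 1)/k$ and $\log_k^K x = (x^k - x^{-k})/(2k)$; the sample values of $x$ and $k$ are arbitrary:

```python
import math

def log_T(k, x):
    """Tsallis logarithm (assumed form): (x**k - 1) / k, for k != 0 and x > 0."""
    return (x**k - 1.0) / k

def log_K(k, x):
    """Kaniadakis logarithm (assumed form): (x**k - x**(-k)) / (2k)."""
    return (x**k - x**(-k)) / (2.0 * k)

x, k = 2.5, 0.3

# Identity of Remark 1: log_K is the average of log_T with parameters k and -k.
identity_gap = abs(log_K(k, x) - 0.5 * (log_T(k, x) + log_T(-k, x)))

# Both deformed logarithms tend to the classical logarithm as k -> 0.
limit_gap = abs(log_T(1e-8, x) - math.log(x))

print(identity_gap < 1e-12, limit_gap < 1e-6)  # → True True
```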

Definition 2. Let $(\Omega, \mathcal{T})$ be a measurable space and $\mu, \nu : \mathcal{T} \to \overline{\mathbb{R}}_+ = [0, \infty) \cup \{\infty\}$ two measures. We say that $\mu$ is absolutely continuous with respect to $\nu$ if, for any $A \in \mathcal{T}$ such that $\nu(A) = 0$, one has $\mu(A) = 0$.

Notation 1. If $\mu$ and $\nu$ are absolutely continuous with respect to each other, we denote this fact by $\mu \sim \nu$.
In the absence of other mentions, we work in the following scenario: let $(\Omega, \mathcal{T}, \mu_i)$, $i = 1, 2$, be two measure spaces and $\lambda$ a measure on $(\Omega, \mathcal{T})$ such that $\mu_i \sim \lambda$ for any $i = 1, 2$. With the help of the Radon–Nikodým Theorem, we find two non-negative measurable functions $f_1$ and $f_2$ defined on $\Omega$ such that $\mu_i(A) = \int_A f_i \, d\lambda$ for any $A \in \mathcal{T}$ and any $i = 1, 2$. Consider a weight function $w : \Omega \to (0, \infty)$ (i.e., $w$ is a positive measurable function).

Definition 3.
Let $A \in \mathcal{T}$. The weighted Tsallis divergence on $A$ between $\mu_1$ and $\mu_2$ is defined via
$$D_k^{w,T}(\mu_1|\mu_2, A) = \int_A w f_1 \log_k^T \frac{f_1}{f_2} \, d\lambda,$$
and the weighted Kaniadakis divergence on $A$ between $\mu_1$ and $\mu_2$ is given by
$$D_k^{w,K}(\mu_1|\mu_2, A) = \int_A w f_1 \log_k^K \frac{f_1}{f_2} \, d\lambda.$$
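For intuition, the weighted divergences can be computed directly when $\lambda$ is the counting measure on a finite set, assuming the integral forms $D_k^{w,T} = \int_A w f_1 \log_k^T(f_1/f_2)\,d\lambda$ and $D_k^{w,K} = \int_A w f_1 \log_k^K(f_1/f_2)\,d\lambda$. The densities, weight and parameter below are illustrative:

```python
def log_T(k, x):
    return (x**k - 1.0) / k

def log_K(k, x):
    return (x**k - x**(-k)) / (2.0 * k)

# Hypothetical densities f1, f2 w.r.t. the counting measure on {0, 1, 2, 3},
# a positive weight w, a set A of indices and a parameter k (all illustrative).
f1 = [0.1, 0.4, 0.3, 0.2]
f2 = [0.25, 0.25, 0.25, 0.25]
w  = [1.0, 2.0, 0.5, 1.0]
A  = [0, 1, 2, 3]
k  = 0.4

# Weighted Tsallis and Kaniadakis divergences on A
D_T = sum(w[i] * f1[i] * log_T(k, f1[i] / f2[i]) for i in A)
D_K = sum(w[i] * f1[i] * log_K(k, f1[i] / f2[i]) for i in A)

# By Remark 1, D_K equals the average of the Tsallis divergences with
# parameters k and -k.
D_avg = 0.5 * sum(w[i] * f1[i] * (log_T(k, f1[i] / f2[i])
                                  + log_T(-k, f1[i] / f2[i])) for i in A)
print(abs(D_K - D_avg) < 1e-12)  # → True
```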

Remark 2.
We assume that all divergences and integrals which appear in this paper are finite.

Remark 3.
We can see that the values of D w,T k (µ 1 |µ 2 , A) and D w,K k (µ 1 |µ 2 , A) do not depend on the choice of reference measure λ (because

Bounds of the Weighted Tsallis and Kaniadakis Divergences
The proof of the following lemma is elementary and is omitted.
Lemma 1. Let $x \in (0, \infty)$, $x \neq 1$. The function $\varphi_x : \mathbb{R}^* \to \mathbb{R}$ given by $\varphi_x(k) = \log_k^T x$ is strictly increasing.
The next two corollaries are very useful in this article.
Corollary 1. For any $k > 0$ and any $x > 0$ we have
$$\log_{-k}^T x \leq \log x \leq \log_k^T x,$$
and the equality is valid if and only if $x = 1$.

Corollary 2. For any $0 < k < k'$ and any $x > 0$ we have
$$\log_{-k'}^T x \leq \log_{-k}^T x \leq \log x \leq \log_k^T x \leq \log_{k'}^T x.$$
We have equality in these inequalities if and only if $x = 1$.
Theorem 1. Let $A \in \mathcal{T}$ with $\mu_i(A) \neq 0$ for $i = 1, 2$, and write $\mu_i^w(A) = \int_A w \, d\mu_i$, $i = 1, 2$.
(a) If $k \in (-1, \infty) \setminus \{0\}$, then
$$D_k^{w,T}(\mu_1|\mu_2, A) \geq \mu_1^w(A) \log_k^T \frac{\mu_1^w(A)}{\mu_2^w(A)}.$$
(b) If $k \in (-1, 1) \setminus \{0\}$, then
$$D_k^{w,K}(\mu_1|\mu_2, A) \geq \mu_1^w(A) \log_k^K \frac{\mu_1^w(A)}{\mu_2^w(A)}.$$
In both cases, the equality holds if and only if $\frac{f_1}{f_2}$ is $\lambda$-almost everywhere constant on $A$.

Proof. (a) We will make the proof in two steps.
Step 1. Assume that $\mu_1^w(A) = \mu_2^w(A) = 1$. The function $\varphi(t) = t \log_k^T t = \frac{t^{k+1} - t}{k}$ satisfies $\varphi''(t) = (k+1)t^{k-1} > 0$ for $t > 0$, hence it is strictly convex for $k > -1$. Applying Jensen's inequality with respect to the measure $w f_2 \, d\lambda$ on $A$ (which has total mass $\mu_2^w(A) = 1$), we obtain
$$D_k^{w,T}(\mu_1|\mu_2, A) = \int_A w f_2 \, \varphi\!\left(\frac{f_1}{f_2}\right) d\lambda \geq \varphi\!\left(\int_A w f_2 \, \frac{f_1}{f_2} \, d\lambda\right) = \varphi(1) = 0.$$
The equality holds if and only if $f_1 = f_2$ $\lambda$-almost everywhere on $A$.
Step 2. Let $A \in \mathcal{T}$ with $\mu_i(A) \neq 0$ for $i = 1, 2$. We define the measures $\widetilde{\mu}_1$ and $\widetilde{\mu}_2$ via
$$\widetilde{\mu}_i(B) = \frac{1}{\mu_i^w(A)} \int_{A \cap B} w \, d\mu_i$$
for any $B \in \mathcal{T}$ and any $i = 1, 2$. We remark that $\widetilde{\mu}_i(A) = 1$, that $\widetilde{\mu}_i \sim \lambda$ on $A$ and that the corresponding densities on $A$ are $\widetilde{f}_i = \frac{w f_i}{\mu_i^w(A)}$. Hence the Tsallis divergence between $\widetilde{\mu}_1$ and $\widetilde{\mu}_2$ on $A$ is non-negative by Step 1. Expanding $\log_k^T \frac{\widetilde{f}_1}{\widetilde{f}_2} = \log_k^T\!\left(\frac{f_1}{f_2} \cdot \frac{\mu_2^w(A)}{\mu_1^w(A)}\right)$ by the product formula for the Tsallis logarithm and integrating, we deduce the stated inequality. The equality holds if and only if $\widetilde{f}_1(\omega) = \widetilde{f}_2(\omega)$ for $\lambda$-almost every $\omega \in A$, and this is equivalent to $\frac{f_1}{f_2}$ being $\lambda$-almost everywhere constant on $A$.
(b) This follows from (a) and Remark 1, since $D_k^{w,K} = \frac{1}{2}\left(D_k^{w,T} + D_{-k}^{w,T}\right)$ and both parameters $\pm k$ lie in $(-1, \infty)$ when $k \in (-1, 1) \setminus \{0\}$.

Theorem 2. Let $A \in \mathcal{T}$ and let $D^w(\mu_1|\mu_2, A) = \int_A w f_1 \log \frac{f_1}{f_2} \, d\lambda$ be the weighted Kullback–Leibler divergence on $A$.
(a) Assume that $k \in \left(-\frac{1}{2}, 1\right) \setminus \{0\}$. Then
$$D_{-|k|}^{w,T}(\mu_1|\mu_2, A) \leq D^w(\mu_1|\mu_2, A) \leq D_{|k|}^{w,T}(\mu_1|\mu_2, A).$$
(b) Under the same assumption,
$$D_{-|k|}^{w,T}(\mu_1|\mu_2, A) \leq D_k^{w,K}(\mu_1|\mu_2, A) \leq D_{|k|}^{w,T}(\mu_1|\mu_2, A).$$

Proof. (a) We have (see Corollary 2) $\log_{-|k|}^T \frac{f_1}{f_2} \leq \log \frac{f_1}{f_2}$ pointwise on $A$; multiplying by $w f_1 \geq 0$ and integrating gives the first inequality. On the other hand (see again Corollary 2), $\log \frac{f_1}{f_2} \leq \log_{|k|}^T \frac{f_1}{f_2}$, which gives the second one.
(b) Using (a) and Remark 1 we obtain $D_k^{w,K} = \frac{1}{2}\left(D_k^{w,T} + D_{-k}^{w,T}\right)$, and both $D_k^{w,T}$ and $D_{-k}^{w,T}$ lie between $D_{-|k|}^{w,T}$ and $D_{|k|}^{w,T}$. Hence the stated bounds hold.
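One way to read the lower bound on the weighted Tsallis divergence is as a weighted log-sum inequality, $D_k^{w,T}(\mu_1|\mu_2, A) \geq \mu_1^w(A) \log_k^T \frac{\mu_1^w(A)}{\mu_2^w(A)}$ with $\mu_i^w(A) = \int_A w \, d\mu_i$, assuming the integral form of Definition 3. The sketch below checks this on hypothetical discrete data:

```python
def log_T(k, x):
    return (x**k - 1.0) / k

# Hypothetical densities, weight, set and parameter (illustrative only).
f1 = [0.1, 0.4, 0.3, 0.2]
f2 = [0.3, 0.2, 0.3, 0.2]
w  = [1.0, 2.0, 0.5, 1.0]
A  = [0, 1, 3]
k  = 0.4

# Weighted Tsallis divergence on A
D = sum(w[i] * f1[i] * log_T(k, f1[i] / f2[i]) for i in A)

# Lower bound in terms of the weighted masses mu1^w(A) and mu2^w(A)
m1 = sum(w[i] * f1[i] for i in A)
m2 = sum(w[i] * f2[i] for i in A)
bound = m1 * log_T(k, m1 / m2)

print(D >= bound)  # → True
```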

Pseudo-Additivity of the Weighted Tsallis and Kaniadakis Divergences
Let $(\Omega, \mathcal{T}, \mu_i)$, $i = 1, 2$, be two measure spaces and $\lambda_1$ a measure on $(\Omega, \mathcal{T})$ such that $\mu_i \sim \lambda_1$ for any $i = 1, 2$. We consider the Radon–Nikodým derivatives $f_i^{(1)} = \frac{d\mu_i}{d\lambda_1}$, $i = 1, 2$. Let also $(S, \mathcal{S}, \nu_j)$, $j = 1, 2$, be two measure spaces and $\lambda_2$ a measure on $(S, \mathcal{S})$ such that $\nu_j \sim \lambda_2$ for any $j = 1, 2$. We apply the Radon–Nikodým Theorem and find non-negative measurable functions $f_1^{(2)}$ and $f_2^{(2)}$ defined on $S$ such that $f_j^{(2)} = \frac{d\nu_j}{d\lambda_2}$ for any $j = 1, 2$. We take two weight functions $w_1 : \Omega \to (0, \infty)$ and $w_2 : S \to (0, \infty)$. We consider the product measure $\lambda = \lambda_1 \times \lambda_2$ on $(\Omega \times S, \mathcal{T} \times \mathcal{S})$ induced by $\lambda_1$ and $\lambda_2$. Because $\mu_i \times \nu_i$ is absolutely continuous with respect to $\lambda$, we apply the Radon–Nikodým Theorem and find two non-negative measurable functions $f_1$ and $f_2$ on $\Omega \times S$ such that $f_i = \frac{d(\mu_i \times \nu_i)}{d\lambda}$. The uniqueness part of the Radon–Nikodým Theorem assures us that
$$f_i(\omega, s) = f_i^{(1)}(\omega) f_i^{(2)}(s)$$
for $\lambda$-almost every $(\omega, s) \in \Omega \times S$ and any $i = 1, 2$.
Let $A \in \mathcal{T}$ and $B \in \mathcal{S}$. We define the weighted Tsallis divergence for product measures via
$$D_k^{w_1 w_2, T}(\mu_1 \times \nu_1 \,|\, \mu_2 \times \nu_2, A \times B) = \int_{A \times B} w_1(\omega) w_2(s) f_1(\omega, s) \log_k^T \frac{f_1(\omega, s)}{f_2(\omega, s)} \, d\lambda(\omega, s).$$
The weighted Kaniadakis divergence for product measures is given by
$$D_k^{w_1 w_2, K}(\mu_1 \times \nu_1 \,|\, \mu_2 \times \nu_2, A \times B) = \int_{A \times B} w_1(\omega) w_2(s) f_1(\omega, s) \log_k^K \frac{f_1(\omega, s)}{f_2(\omega, s)} \, d\lambda(\omega, s).$$

Lemma 2 (see [41]). We have the following pseudo-additivity property for the Tsallis logarithm (valid for any $x, y > 0$):
$$\log_k^T(xy) = \log_k^T x + \log_k^T y + k \log_k^T x \log_k^T y.$$

Theorem 3. With the notation above,
$$D_k^{w_1 w_2, T}(\mu_1 \times \nu_1 \,|\, \mu_2 \times \nu_2, A \times B) = \left(\int_B w_2 \, d\nu_1\right) D_k^{w_1, T}(\mu_1|\mu_2, A) + \left(\int_A w_1 \, d\mu_1\right) D_k^{w_2, T}(\nu_1|\nu_2, B) + k \, D_k^{w_1, T}(\mu_1|\mu_2, A) \, D_k^{w_2, T}(\nu_1|\nu_2, B),$$
and an analogous pseudo-additivity property holds for the weighted Kaniadakis divergence.

Proof. (a) According to Lemma 2, we have, for $\lambda$-almost every $(\omega, s) \in A \times B$,
$$\log_k^T \frac{f_1(\omega, s)}{f_2(\omega, s)} = \log_k^T \frac{f_1^{(1)}(\omega)}{f_2^{(1)}(\omega)} + \log_k^T \frac{f_1^{(2)}(s)}{f_2^{(2)}(s)} + k \log_k^T \frac{f_1^{(1)}(\omega)}{f_2^{(1)}(\omega)} \log_k^T \frac{f_1^{(2)}(s)}{f_2^{(2)}(s)}.$$
Multiplying by $w_1(\omega) w_2(s) f_1^{(1)}(\omega) f_1^{(2)}(s)$, integrating over $A \times B$ and using Fubini's Theorem, we obtain the stated identity.
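Both the Tsallis-logarithm identity $\log_k^T(xy) = \log_k^T x + \log_k^T y + k \log_k^T x \log_k^T y$ and the resulting pseudo-additive decomposition of the weighted Tsallis divergence for product measures can be verified on a small discrete product space. All numbers below are illustrative, and the decomposition checked is the one suggested by the identity together with Fubini's Theorem:

```python
def log_T(k, x):
    return (x**k - 1.0) / k

k = 0.3

# Lemma 2: log_T(xy) = log_T(x) + log_T(y) + k * log_T(x) * log_T(y)
x, y = 1.7, 0.6
lemma_gap = abs(log_T(k, x * y)
                - (log_T(k, x) + log_T(k, y) + k * log_T(k, x) * log_T(k, y)))

# Hypothetical factor densities and weights on two-point spaces (illustrative).
f1a, f2a = [0.2, 0.8], [0.5, 0.5]   # densities of mu_1, mu_2 on Omega
g1b, g2b = [0.6, 0.4], [0.3, 0.7]   # densities of nu_1, nu_2 on S
w1, w2   = [1.0, 2.0], [0.5, 1.5]   # weight functions on Omega and S

def D_T(wt, p, q):
    return sum(wt[i] * p[i] * log_T(k, p[i] / q[i]) for i in range(len(p)))

# Weighted Tsallis divergence of the product measures with weight w1 * w2
D_prod = sum(w1[i] * w2[j] * f1a[i] * g1b[j]
             * log_T(k, (f1a[i] * g1b[j]) / (f2a[i] * g2b[j]))
             for i in range(2) for j in range(2))

# Pseudo-additive decomposition suggested by Lemma 2 and Fubini's Theorem
m_nu = sum(w2[j] * g1b[j] for j in range(2))   # integral of w2 w.r.t. nu_1
m_mu = sum(w1[i] * f1a[i] for i in range(2))   # integral of w1 w.r.t. mu_1
decomp = (m_nu * D_T(w1, f1a, f2a) + m_mu * D_T(w2, g1b, g2b)
          + k * D_T(w1, f1a, f2a) * D_T(w2, g1b, g2b))

print(lemma_gap < 1e-12, abs(D_prod - decomp) < 1e-9)  # → True True
```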

Conclusions
With the help of some inequalities concerning the Tsallis logarithm, we obtained inequalities between the weighted Tsallis and Kaniadakis divergences on an arbitrary non-negligible measurable set and the Tsallis logarithm, respectively the Kaniadakis logarithm (Theorem 1). We showed that the aforementioned divergences are limited by bounds similar to those that limit the Kullback–Leibler divergence (Theorem 2) and proved that they are pseudo-additive (Theorem 3).