Multi-Additivity in Kaniadakis Entropy

It is known that Kaniadakis entropy, a generalization of the Shannon–Boltzmann–Gibbs entropic form, is always super-additive for any bipartite statistically independent distribution. In this paper, we show that, when a suitable constraint is imposed, there exist classes of maximal-entropy distributions, labeled by a positive real number ℵ > 0, that make Kaniadakis entropy multi-additive, i.e., Sκ[pA∪B] = (1 + ℵ) (Sκ[pA] + Sκ[pB]), under the composition of two statistically independent and identically distributed distributions pA∪B(x, y) = pA(x) pB(y), with reduced distributions pA(x) and pB(y) belonging to the same class.


Introduction
A possible generalization of conventional statistics, named κ-statistics, is founded on Kaniadakis entropy (κ-entropy) [1][2][3]. This is a continuous one-parameter deformation of the information functional, also known as the Shannon–Boltzmann–Gibbs (SBG) entropic form, defined, in the appropriate dimensionless units, as Sκ[p] = −∫_D p(x) lnκ p(x) dx, (1) where D is a suitable integration domain and lnκ(x) = (x^κ − x^(−κ))/(2κ) (2) is a deformed version of the standard logarithm that, in the κ → 0 limit, reduces to the ordinary logarithm: ln0(x) ≡ ln(x). It is then clear that, in the same limit, the entropy Sκ also reproduces the standard expression of the SBG entropy.
As is known [57], for a joint statistical system described by the bipartite probability distribution pA∪B(x, y) of two statistically independent distributions pA(x) and pB(y), i.e., pA∪B(x, y) = pA(x) pB(y), the κ-entropy Sκ[pA∪B] is a super-additive quantity, being Sκ[pA∪B] > Sκ[pA] + Sκ[pB]. (3) The difference between the total entropy of a joint system A∪B and the sum of the entropies of the single parts A and B is sometimes called the entropic excess, defined as ΔS = S[pA∪B] − S[pA] − S[pB], (4) which, depending on the nature of the entropy, can be a positive or a negative quantity. It quantifies the information gain or loss in a bipartite system, a property that may be related to the concept of super-stability or sub-stability in thermodynamics [58]. Entropy is super-additive, and the systems it describes are thermodynamically super-stable, if the entropic excess is positive. On the other hand, entropy is sub-additive, and the systems it describes are thermodynamically sub-stable, if the entropic excess is negative. Following [59], in agreement with the second principle of thermodynamics, super-stable systems (with a positive entropic excess) tend to join together, while sub-stable systems (with a negative entropic excess) tend to fragment.
The opposite of the entropic excess is called the entropic defect. Recently, the entropic defect has been investigated as a basic concept of thermodynamics, able to characterize the entropic form describing a given physical system [60,61].
As discussed in [9], the entropic excess of the κ-entropy for any pair of statistically independent distributions is always positive (cf. Equation (3)), indicating that, in this case, it is a super-additive quantity useful for characterizing super-stable systems.
In general, the entropic excess depends on the bipartite distribution pA∪B and cannot be quantified in a precise manner. In this work, we show that, within κ-statistics, there are classes of maximal-entropy probability distributions, labeled by a real positive parameter ℵ > 0, such that the entropic excess of a statistically independent bipartite system is proportional to the sum of the entropies of the single distributions. Thus, for any pair of probability distribution functions (pdfs) belonging to the same ℵ-class, we have Sκ[pA∪B] − Sκ[pA] − Sκ[pB] = ℵ (Sκ[pA] + Sκ[pB]), (5) so that the joint κ-entropy Sκ[pA∪B] turns out to be related directly to the sum of the κ-entropies of the single distributions, according to the relation Sκ[pA∪B] = (1 + ℵ) (Sκ[pA] + Sκ[pB]). (6) We call this property multi-additivity of the κ-entropy.

The structure of this paper is as follows. In Section 2, we present the mathematical background related to κ-statistics and the composability properties of the κ-entropy for a bipartite statistically independent system. Section 3 contains our main results. There, we introduce the multi-additivity of the κ-entropy and investigate the variational problem concerning the maximization of the κ-entropy under the usual constraints given by the moments of the distribution and the multi-additivity condition. In Section 4, we show by a numerical evaluation that the problem admits solutions at least within the family of Gibbs-like distributions. Finally, Section 5 contains our concluding comments.

Mathematical Background
To start with, let us consider the following functional–differential equation [1]: (d/dx)[x Λ(x)] = λ Λ(x/α), (7) where λ and α are two scaling parameters to be determined. It is easy to verify that a solution of Equation (7), with the boundary conditions Λ(1) = 0 and (d/dx) Λ(x)|x=1 = 1, is given by the κ-logarithm (2), provided the scaling constants are set as λ = √(1 − κ²) and α = ((1 − κ)/(1 + κ))^(1/2κ). (8) As will be clarified in the following, the constant λ plays the role of a scaling factor in the argument of the pdf, while the constant α is a κ-deformed version of the reciprocal of the Napier number; in the κ → 0 limit, λ → 1 and α → 1/e. The κ-logarithm, defined in ℜ+ → ℜ, is symmetric under κ → −κ, with lnκ(1) = 0, lim_{x→+∞} lnκ(x) = +∞, and lim_{x→0} lnκ(x) = −∞. Furthermore, it is a continuous, strictly increasing ((d/dx) lnκ(x) > 0) and concave ((d²/dx²) lnκ(x) < 0) function for |κ| < 1. More importantly, the relation lnκ(1/x) = −lnκ(x) is a well-known property of the standard logarithm that is preserved by its κ-deformed version. Finally, since lim_{κ→0} lnκ(x) = ln(x), the κ-logarithm collapses to the standard logarithm in the κ → 0 limit; this legitimizes considering the κ-logarithm a faithful generalization of the logarithmic function.
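As a quick numerical check, the defining properties of the κ-logarithm listed above (symmetry under κ → −κ, the κ → 0 limit, monotonicity, and concavity) can be verified directly; the following minimal sketch, in Python with NumPy, assumes the standard form lnκ(x) = (x^κ − x^(−κ))/(2κ).

```python
import numpy as np

def ln_k(x, kappa):
    """kappa-logarithm; reduces to ln(x) in the kappa -> 0 limit."""
    if kappa == 0.0:
        return np.log(x)
    return (x**kappa - x**(-kappa)) / (2.0 * kappa)

x = np.linspace(0.1, 5.0, 500)
kappa = 0.3

# Symmetry under kappa -> -kappa
assert np.allclose(ln_k(x, kappa), ln_k(x, -kappa))

# kappa -> 0 limit recovers the standard logarithm
assert np.allclose(ln_k(x, 1e-6), np.log(x), atol=1e-9)

# ln_k(1) = 0 and ln_k(1/x) = -ln_k(x)
assert abs(ln_k(1.0, kappa)) < 1e-15
assert np.allclose(ln_k(1.0 / x, kappa), -ln_k(x, kappa))

# Strictly increasing and concave for |kappa| < 1 (finite differences)
y = ln_k(x, kappa)
assert np.all(np.diff(y) > 0)      # first derivative positive
assert np.all(np.diff(y, 2) < 0)   # second derivative negative
```

The finite-difference checks in the last two lines mirror the analytical statements (d/dx) lnκ(x) > 0 and (d²/dx²) lnκ(x) < 0.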
Again, well-known properties of the standard exponential, such as expκ(x) expκ(−x) = 1, are also satisfied by its deformed version, the κ-exponential expκ(x), defined as the inverse function of the κ-logarithm; therefore, the κ-exponential is a faithful generalization of the exponential function.
En passant, we observe that the two scaling constants α and λ are related by the relations λ uκ(α) = 1 and λ lnκ(α) = −1. One might easily persuade oneself that the functions uκ(x) ≡ (x^κ + x^(−κ))/2 and lnκ(x) are strongly connected. In fact, two equivalent analytical expressions of these two functions are given by lnκ(x) = sinh(κ ln(x))/κ and uκ(x) = cosh(κ ln(x)), which show their relationship with the hyperbolic trigonometric functions. Consequently, many properties of lnκ(x) and uκ(x) follow from the corresponding properties of sinh(x) and cosh(x). In particular, it is easy to verify that lnκ(x y) = lnκ(x) uκ(y) + uκ(x) lnκ(y) (18) and uκ(x y) = uκ(x) uκ(y) + κ² lnκ(x) lnκ(y), (19) as a consequence of the addition formulas of the hyperbolic functions.
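The hyperbolic representations and the two addition formulas cited above as Equations (18) and (19) can be checked numerically; the sketch below (Python/NumPy) assumes the standard forms lnκ(x) = (x^κ − x^(−κ))/(2κ) and uκ(x) = (x^κ + x^(−κ))/2.

```python
import numpy as np

kappa = 0.3

def ln_k(x):
    return (x**kappa - x**(-kappa)) / (2.0 * kappa)

def u_k(x):
    return (x**kappa + x**(-kappa)) / 2.0

x, y = 0.7, 0.4   # statistically meaningful interval 0 < x, y < 1

# Hyperbolic representations: ln_k(x) = sinh(kappa ln x)/kappa, u_k(x) = cosh(kappa ln x)
assert np.isclose(ln_k(x), np.sinh(kappa * np.log(x)) / kappa)
assert np.isclose(u_k(x), np.cosh(kappa * np.log(x)))

# Addition formulas inherited from sinh/cosh (Equations (18) and (19))
assert np.isclose(ln_k(x * y), ln_k(x) * u_k(y) + u_k(x) * ln_k(y))
assert np.isclose(u_k(x * y), u_k(x) * u_k(y) + kappa**2 * ln_k(x) * ln_k(y))

# Since u_k >= 1 and ln_k < 0 on (0, 1), these imply ln_k(xy) < ln_k(x) + ln_k(y)
assert ln_k(x * y) < ln_k(x) + ln_k(y)
```

The last assertion anticipates the inequality discussed below, which underlies the super-additivity of the κ-entropy.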
In addition, it is useful to recall the following relations relating these two functions: uκ(x) = √(1 + κ² lnκ(x)²), (20) and λ lnκ(α x) = lnκ(x) − uκ(x), (21) which become trivial relations (1 = 1) in the κ → 0 limit, since, in the same limit, λ → 1 and α → e^(−1). By using the first of these relations, Equation (18) can be rewritten as lnκ(x y) = lnκ(x) √(1 + κ² lnκ(y)²) + √(1 + κ² lnκ(x)²) lnκ(y), which implies the relevant inequality lnκ(x y) < lnκ(x) + lnκ(y), (22) holding in the statistically meaningful interval 0 < x, y < 1.

The next step is to introduce the κ-entropy Sκ[p]. As in the standard case, where the SBG entropy is defined as the negative of the linear average of the Hartley function (or surprise function) h(p) = ln p(x), it is natural to introduce the κ-entropy as the negative of the linear average of the κ-deformed Hartley function hκ(p) = lnκ p(x); that is, Sκ[p] = −⟨lnκ p(x)⟩, (23) a relation that reproduces Equation (1), accounting for the usual definition of the linear average of a statistical observable O(x), given by ⟨O⟩ = ∫_D O(x) p(x) dx. (24) It is natural, in analogy with Definition (23), to introduce the auxiliary function Iκ[p] as the linear average of uκ(p(x)), according to the relation Iκ[p] = ⟨uκ(p(x))⟩. (25) This quantity is positive definite, with Iκ[p] > 1 for any normalized pdf, and, in the κ → 0 limit, it gives I0[p] = 1. (26) Therefore, it has no equivalent function in standard statistics. However, like lnκ(x) and uκ(x), which are two strictly related functions, Sκ[p] and Iκ[p] also turn out to be recurrent in the development of κ-statistics.
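For illustration, the two functionals can be evaluated on a discrete distribution (the discrete analogue of the linear averages above, an illustrative simplification of the continuous definitions); this minimal Python sketch checks that Iκ[p] > 1 and that both functionals reduce to their standard counterparts as κ → 0.

```python
import numpy as np

def ln_k(x, kappa):
    return np.log(x) if kappa == 0 else (x**kappa - x**(-kappa)) / (2 * kappa)

def u_k(x, kappa):
    return (x**kappa + x**(-kappa)) / 2

def S_k(p, kappa):
    """kappa-entropy: negative linear average of the deformed Hartley function."""
    return -np.sum(p * ln_k(p, kappa))

def I_k(p, kappa):
    """Auxiliary functional: linear average of u_k evaluated on the distribution."""
    return np.sum(p * u_k(p, kappa))

p = np.array([0.2, 0.3, 0.5])
kappa = 0.3

assert I_k(p, kappa) > 1.0                                # I_k[p] > 1 for kappa != 0
assert np.isclose(I_k(p, 1e-8), 1.0)                      # kappa -> 0: I_0[p] = sum(p) = 1
assert np.isclose(S_k(p, 1e-8), -np.sum(p * np.log(p)))   # SBG limit of the entropy
```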
In particular, by taking the linear average of Equations (18) and (19) for a statistically independent bipartite distribution, with pA∪B(x, y) = pA(x) pB(y), we obtain Sκ[pA∪B] = Sκ[pA] Iκ[pB] + Iκ[pA] Sκ[pB] (27) and Iκ[pA∪B] = Iκ[pA] Iκ[pB] + κ² Sκ[pA] Sκ[pB], (28) stating the composition rules of Sκ and Iκ for two statistically independent systems. From Equation (27), since Iκ[p] > 1, we readily deduce the super-additive property of the κ-entropy summarized in Equation (3). In particular, in the κ → 0 limit, according to (26), we recover the usual additivity rule of the SBG entropy while, in the same limit, Equation (28) reduces to a trivial identity.
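The composition rules cited here as Equations (27) and (28) can be verified directly on a product distribution. A minimal discrete sketch (Python/NumPy), assuming the discrete analogues Sκ = −Σ p lnκ(p) and Iκ = Σ p uκ(p):

```python
import numpy as np

kappa = 0.3

def ln_k(x): return (x**kappa - x**(-kappa)) / (2 * kappa)
def u_k(x):  return (x**kappa + x**(-kappa)) / 2
def S(p):    return -np.sum(p * ln_k(p))   # kappa-entropy
def I(p):    return np.sum(p * u_k(p))     # auxiliary functional

pA = np.array([0.2, 0.3, 0.5])
pB = np.array([0.1, 0.9])
pAB = np.outer(pA, pB)   # statistically independent bipartite distribution

# Composition rules for S_k and I_k (Equations (27) and (28))
assert np.isclose(S(pAB), S(pA) * I(pB) + I(pA) * S(pB))
assert np.isclose(I(pAB), I(pA) * I(pB) + kappa**2 * S(pA) * S(pB))

# Super-additivity (Equation (3)) follows because I > 1
assert S(pAB) > S(pA) + S(pB)
```

Both rules hold exactly here, since they follow term by term from the addition formulas of the hyperbolic functions.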
As discussed in [9], the composition rule (27) can also be rewritten by means of the κ-parentropy, a quantity defined as S*κ[p] = −λ ⟨lnκ(α p(x))⟩, (29) which is a scaled version of the κ-entropy.
In fact, by taking the average of Equation (20), evaluated at x = p(x), we obtain the following relationship, which relates the κ-entropy and the κ-parentropy to the Iκ function: Iκ[p] = S*κ[p] − Sκ[p]. (30) In this way, Equation (27) can be rewritten as Sκ[pA∪B] = Sκ[pA] S*κ[pB] + S*κ[pA] Sκ[pB] − 2 Sκ[pA] Sκ[pB], (31) providing a composition rule for Sκ that formally includes only entropies; however, it is important to remark that Sκ and S*κ are, actually, two independent quantities. From Equation (31), the entropic excess of the κ-entropy is given by ΔSκ = Sκ[pA] (S*κ[pB] − Sκ[pB] − 1) + Sκ[pB] (S*κ[pA] − Sκ[pA] − 1), (32) a quantity defined only as a function of the κ-entropy and the κ-parentropy.
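The relation between κ-entropy, κ-parentropy, and the Iκ functional can be checked numerically on a discrete distribution; the sketch below assumes the standard Kaniadakis values of the scaling constants, λ = √(1 − κ²) and α = ((1 − κ)/(1 + κ))^(1/2κ), together with the parentropy form S*κ[p] = −λ⟨lnκ(α p)⟩ used above.

```python
import numpy as np

kappa = 0.3
lam = np.sqrt(1 - kappa**2)                              # assumed scaling constant
alpha = ((1 - kappa) / (1 + kappa))**(1 / (2 * kappa))   # assumed scaling constant

def ln_k(x): return (x**kappa - x**(-kappa)) / (2 * kappa)
def u_k(x):  return (x**kappa + x**(-kappa)) / 2

p = np.array([0.2, 0.3, 0.5])

S      = -np.sum(p * ln_k(p))                 # kappa-entropy
S_star = -lam * np.sum(p * ln_k(alpha * p))   # kappa-parentropy (scaled entropy)
I      = np.sum(p * u_k(p))                   # auxiliary functional

# Relation (30): I_k[p] = S*_k[p] - S_k[p]
assert np.isclose(I, S_star - S)
```

With these constants, λ lnκ(α) = −1 and λ uκ(α) = 1 hold identically, which is what makes Relation (30) exact for any distribution.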
Finally, let us note that Equations (27) and (28) can be combined to write the κ-entropy of a statistically independent multi-partite system in terms of the Sκ and Iκ of the single distributions. For instance, given a statistically independent tri-partite system, we can obtain the relation Sκ[pA∪B∪C] = Sκ[pA] Iκ[pB] Iκ[pC] + Iκ[pA] Sκ[pB] Iκ[pC] + Iκ[pA] Iκ[pB] Sκ[pC] + κ² Sκ[pA] Sκ[pB] Sκ[pC], and so on.

Multi-Additivity in Kaniadakis Entropy
Within κ-statistics, a maximal entropy pdf may be derived by maximizing the κ-entropy under certain appropriate boundary conditions. Quite often, these are given through the linear averages of certain functions Oi(x), as ⟨Oi⟩ = ∫_D Oi(x) p(x) dx, (34) where i = 0, 1, . . ., M, with M + 1 being the number of given constraints. These relations fix the values of different quantities ⟨Oi⟩, related to the system under inspection, whose spectra of possible outcomes are given by Oi(x). Often, the constraints are given by the moments of a certain order n; that is, On(x) = xⁿ. For instance, for n = 0, we set O0(x) = 1 with ⟨O0⟩ = 1, which fixes the normalization of the distribution; for n = 1, we have O1(x) = x with ⟨O1⟩ ≡ ⟨x⟩, which fixes the mean value of the distribution; for n = 2, we have O2(x) = x² with ⟨O2⟩ ≡ ⟨x²⟩, which is related to the variance of the distribution; and so on. In this case, the maximal entropy distribution can be derived from the following variational problem: (δ/δp(x)) [Sκ[p] − Σi μi ⟨Oi⟩] = 0, (35) where the μi are Lagrange multipliers related to the M + 1 constraints. By accounting for Equation (7) and Definition (1), we obtain the maximal entropy pdf in the form p(x) = α expκ(−(1/λ) Σi μi Oi(x)), (36) where the Lagrange multipliers μi(⟨O0⟩, ⟨O1⟩, . . ., ⟨OM⟩) are fixed through Equations (34) and are ultimately functions of the boundary conditions ⟨Oi⟩.
It is worthwhile to observe that, given the analytical expression of expκ(x) in (11), Distribution (36) has an asymptotic power-law behavior, p(x) ∼ x^(−n/|κ|) for large x, where n is the order of the highest moment. This fact justifies the use of κ-statistics in the study of those anomalous systems, often complex systems, characterized by pdfs with heavy tails.
In the following, let us generalize the optimization problem described above to the case in which, in addition to Relations (34), there are further constraints that are functions of the pdf itself. In particular, we seek a class of distributions, labeled by a real constant ℵ, maximizing the κ-entropy under the further constraint Iκ[p] = 1 + ℵ. (38) As will be shown in the next section, this class always exists whenever ℵ ≥ 0, at least for the Gibbs-like distributions, provided the constraint ⟨O1⟩ ≡ ⟨x⟩ falls in a given region fixed by ℵ.
Therefore, we can state the following: for a bipartite probability distribution function pA∪B(x, y) = pA(x) pB(y) of two statistically independent and identically distributed pdfs pA(x) and pB(y), belonging to the same ℵ-class that maximizes the κ-entropy under Constraint (38), we have Sκ[pA∪B] = (1 + ℵ) (Sκ[pA] + Sκ[pB]). (39) We call this property multi-additivity, and we say that the κ-entropy is (1 + ℵ)-additive whenever Relation (39) holds. We observe that the condition ℵ > 0 is fixed by the super-additive character of the κ-entropy, while the condition ℵ = 0 admits only trivial solutions. In fact, as is straightforward to verify, the two relations Sκ[pA∪B] = Sκ[pA] + Sκ[pB] and Iκ[p] = 1 are consistent only in the trivial case of κ = 0 or for an exact distribution p(x) = δ(x).
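The mechanism behind Relation (39) can be illustrated without solving the full variational problem: for two identically distributed, statistically independent systems, one can simply read off ℵ = Iκ[p] − 1 and check (1 + ℵ)-additivity through the composition rule (27). A discrete Python sketch (the uniform distribution below is an arbitrary illustrative choice, not a maximal-entropy solution of the paper's problem):

```python
import numpy as np

kappa = 0.3

def ln_k(x): return (x**kappa - x**(-kappa)) / (2 * kappa)
def u_k(x):  return (x**kappa + x**(-kappa)) / 2
def S(p):    return -np.sum(p * ln_k(p))
def I(p):    return np.sum(p * u_k(p))

# Two identically distributed, statistically independent systems
p = np.full(10, 0.1)       # uniform over 10 states (illustrative choice)
aleph = I(p) - 1.0         # read off aleph from constraint (38): I_k[p] = 1 + aleph
assert aleph > 0

pAB = np.outer(p, p)
# Multi-additivity (Relation (39)): S_k[p_AB] = (1 + aleph)(S_k[p_A] + S_k[p_B])
assert np.isclose(S(pAB), (1 + aleph) * (S(p) + S(p)))
```

The check is exact here because, with identical reduced distributions, Equation (27) gives Sκ[pA∪B] = 2 Sκ[p] Iκ[p] = (1 + ℵ)·2 Sκ[p].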
To derive the pdf maximizing the κ-entropy under Constraints (34) and (38), we pose the variational problem (δ/δp(x)) [Sκ[p] + ν Iκ[p] − Σi μi ⟨Oi⟩] = 0, where ν is the Lagrange multiplier related to Equation (38). Solving the resulting stationarity condition, Equation (43), for p(x) gives the pdf in the form p(x) = α(ν) expκ(−(1/λ(ν)) Σi μi Oi(x)). (46) Although this distribution has the same structure as Equation (36), it differs from (36) in that the two functions λ(ν) and α(ν) now depend on the Lagrange multiplier ν; they fulfill relations that reduce to (15) for ν = 0.
It is remarkable to note that, accounting for the normalization of the pdf, from Equation (46) we have α(ν)⁻¹ = ∫_D expκ(−(1/λ(ν)) Σi μi Oi(x)) dx, a relation that suggests the role of α(ν) as a partition function, i.e., Z ≡ α(ν)⁻¹, in the present formalism. However, a word of caution is in order. As is well known, in standard statistics the partition function accounts for the normalization; thus, it is related to the corresponding Lagrange multiplier γ by the relation ln(Z) = 1 + γ. This is not the case for pdf (46), since α(ν) is related to the Lagrange multiplier of Constraint (41), whereas Normalization (40) is controlled by the Lagrange multiplier μ0.
To convince oneself of this, it is sufficient to consider the κ → 0 limit. In this case, both Constraints (40) and (41) assume the same form, since I0 ≡ ∫ p(x) dx is a constant. Therefore, in this limit, Distribution (46) becomes the standard Gibbs form p(x) = Z⁻¹ exp(−Σ_{i≥1} μi Oi(x)), where, with γ = ν + μ0, we recover the usual definition of the partition function given above, ln(Z) = 1 + γ. Finally, we remark that, when the distribution has Expression (46), according to Constraint (38) and by using Equations (49) and (50), we obtain a consistency relationship between the Lagrange multipliers and the expectation values of the present statistical model.

A Numerical Example: The Gibbs-like Distribution
To show the existence of solutions to the problem under investigation, let us consider the simplest case of a problem with M = 2. Thus, we seek a family of pdfs maximizing the κ-entropy under the following constraints: ⟨O0⟩ ≡ ∫_D p(x) dx = 1, (56) ⟨O1⟩ ≡ ⟨x⟩ = const., (57) and Iκ[p] = 1 + ℵ, (58) corresponding, respectively, to the normalization, the linear average, and the multi-additivity constraints.
Solving the variational problem (43) in the present case, we obtain the optimizing pdf in the form p(x) = α(ν) expκ(−(μ0 + μ1 x)/λ(ν)). (59) This is a Gibbs-like distribution since, in the κ → 0 limit, it reduces to the standard Gibbs distribution. Moreover, (59) is a pdf with an asymptotic power-law heavy tail, being p(x) ∝ (κ x)^(−1/κ) for κ x ≫ 1.
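The heavy-tail behavior of the Gibbs-like distribution can be checked numerically from the κ-exponential, assuming its standard expression expκ(x) = (√(1 + κ²x²) + κx)^(1/κ) (cf. Equation (11)); for large arguments, expκ(−x) approaches the power law (2κx)^(−1/κ). A short Python sketch:

```python
import numpy as np

kappa = 0.3

def exp_k(x):
    """kappa-exponential, the inverse function of the kappa-logarithm."""
    return (np.sqrt(1.0 + kappa**2 * x**2) + kappa * x)**(1.0 / kappa)

# Inverse of ln_k: exp_k(ln_k(x)) = x
x0 = 2.5
lnk_x0 = (x0**kappa - x0**(-kappa)) / (2 * kappa)
assert np.isclose(exp_k(lnk_x0), x0)

# Asymptotic power law: exp_k(-x) ~ (2*kappa*x)**(-1/kappa) for kappa*x >> 1
for x in (1e3, 1e4, 1e5):
    ratio = exp_k(-x) / (2 * kappa * x)**(-1.0 / kappa)
    assert abs(ratio - 1.0) < 1e-4
```

This is the mechanism behind the heavy tail of (59): the exponential decay of the standard Gibbs distribution is replaced by a power-law decay with exponent −1/κ.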
For any fixed value of ℵ, real solutions only exist in certain intervals of ⟨x⟩. This is shown in Figure 1, where the region of existence of real solutions (shaded areas) of the system of Equations (60)–(62) is depicted for several values of the deformation parameter κ. The case κ = 0 (not reported in the figure) corresponds to the horizontal line passing through ℵ = 0; in this case, the κ-entropy becomes 1-additive for any pdf. In other words, each value of ℵ selects a class of distributions whose interval (⟨x⟩min, ⟨x⟩max) determines the possible pdfs for which the κ-entropy is (1 + ℵ)-additive.
In Table 1, we give some numerical values of the interval (⟨x⟩min, ⟨x⟩max) for the ℵ-classes between 0.5 and 2.5, in steps of 0.5, corresponding to the three values of the deformation parameter κ reported in the figure. As an example, let us consider the case with κ = 0.3 and ℵ = 1.0. Any pair of κ-deformed Gibbs-like distributions with 20.59 < ⟨x⟩ < 26.70 is 2-additive; that is, Sκ[pA pB] = 2 (Sκ[pA] + Sκ[pB]). For instance, taking ⟨xA⟩ = 22.5 and ⟨xB⟩ = 25.5, we can evaluate the numerical values of the Lagrange multipliers corresponding to Constraints (56)–(58).
These can be read from Table 2, where we show several numerical values of the Lagrange multipliers μ0, μ1, and ν, obtained from the system of Equations (60)–(62), for several values of the constraints ℵ and ⟨x⟩ belonging to the allowed region and corresponding to the three values of the deformation parameter κ reported in the figure. From this table, we obtain the triple of multiplier values (−2.1989, 0.01849, −0.8219), corresponding to the distribution pA (⟨x⟩ = 22.5), and (−7.3990, 0.009404, −1.0796), corresponding to the distribution pB (⟨x⟩ = 25.5). The respective values of the κ-entropy of the two distributions pA and pB are then readily evaluated as Sκ[pA] = 5.63627 and Sκ[pB] = 5.70258, while the value of the κ-entropy for the joint system is Sκ[pA∪B] = 22.6777, which is exactly the expected result.
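The closing numbers above can be checked directly: with ℵ = 1, the joint entropy must equal twice the sum of the single entropies.

```python
# Check of the quoted numerical values (kappa = 0.3, aleph = 1.0)
S_A, S_B = 5.63627, 5.70258       # single entropies quoted from Table 2
S_AB = (1 + 1.0) * (S_A + S_B)    # (1 + aleph)-additivity, Relation (39)
assert abs(S_AB - 22.6777) < 1e-3
print(round(S_AB, 4))             # prints 22.6777
```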

Conclusions
In this work, we showed that, within κ-statistics, there exist classes of pdfs that maximize the κ-entropy under the condition of constant Iκ[p], a problem that admits a solution at least for the family of Gibbs-like distributions. In this way, for any pair of distributions belonging to the same ℵ-class, fixed by the real number ℵ > 0, the κ-entropy turns out to be (1 + ℵ)-additive; that is, the value of the κ-entropy of a bipartite statistically independent distribution, whose reduced distributions belong to the same ℵ-class, is a multiple of the sum of the single κ-entropies, according to Equation (39).
Equivalently, for any pair of distributions belonging to the same ℵ-class, the entropic excess is proportional to the sum of the κ-entropies of the single pdfs, according to (5).
On physical grounds, Distribution (46) describes a statistical ensemble constrained by Condition (38). While the physical meaning of the functional Iκ is still unclear, it seems to be related to the κ-partition function and, consequently, to the κ-free energy of the system, as discussed in [57] (see also [45]). This also applies to Distribution (59), which characterizes a canonical ensemble further constrained by (38). Moreover, given two independent physical systems, both members of the same ℵ-class but with different internal energies, the joint κ-entropy is a (1 + ℵ)-multiple of the sum of their respective κ-entropies. This property could be useful for studying thermal and mechanical equilibrium, where the composability of entropy plays a role [62]. However, the potential impact that multi-additivity might have on this aspect of κ-thermostatistics deserves further investigation.
Furthermore, looking at Equations (27) and (30), we see that the entropic excess in κ-statistics is related to the difference between the κ-entropy and the κ-parentropy. In the κ = 0 case (standard statistics), the entropic excess is always null (ℵ = 0), since parentropy and entropy have a constant gap equal to 1 for any pdf. Otherwise, when κ > 0, the difference between the κ-entropy and the κ-parentropy depends on the pdf. As shown in this paper, there exist classes of distributions that optimize the κ-entropy under Constraint (38), such that the difference between the κ-parentropy and the κ-entropy is fixed and equal to 1 + ℵ for any pdf belonging to the same ℵ-class.
In other words, the difference between Distribution (36) and Distribution (46) can be stated as follows: the former assigns distinct values to Sκ and Iκ, as these functionals both depend on the expectation values ⟨Oi⟩. In contrast, the latter assigns distinct values to Sκ but assumes a constant value Iκ = 1 + ℵ, fixed a priori, for any distribution that falls within the same ℵ-class. In this way, the entropic excess turns out to be proportional to the sum of the κ-entropies of the two systems that are members of the same ℵ-class.

Figure 1 .
In the figure, we plot the region of real solutions of the system of Equations (60)–(62) in the plane of the constraints, for several values of the deformation parameter κ. The shaded areas represent the admissible domain.

Table 1 .
Permitted interval (⟨x⟩ min , ⟨x⟩ max ) for the ℵ-classes between 0.5 and 2.5, step 0.5, corresponding to the three values of the deformation parameter κ reported in the figure.

Table 2 .
Several numerical values of the Lagrange multipliers μ0, μ1, and ν, obtained from the system (60)–(62), for some values of the constraints ℵ and ⟨x⟩ in the allowed region, corresponding to the three values of the deformation parameter κ reported in the figure.