Article

Quantum κ-Entropy: A Quantum Computational Approach

by Demosthenes Ellinas 1,* and Giorgio Kaniadakis 2,3,4
1 School of ECE QLab, Technical University of Crete, 731 00 Chania, Greece
2 Dipartimento di Scienza Applicata e Tecnologia, Politecnico di Torino, Corso Duca degli Abruzzi 24, 10129 Torino, Italy
3 Istituto dei Sistemi Complessi, Consiglio Nazionale di Ricerca, 00185 Rome, Italy
4 Sezione di Torino, Istituto Nazionale di Fisica Nucleare, 10125 Torino, Italy
* Author to whom correspondence should be addressed.
Entropy 2025, 27(5), 482; https://doi.org/10.3390/e27050482
Submission received: 24 December 2024 / Revised: 1 April 2025 / Accepted: 17 April 2025 / Published: 29 April 2025
(This article belongs to the Section Statistical Physics)

Abstract

A novel approach to the quantum version of the κ-entropy that incorporates it into the conceptual, mathematical and operational framework of quantum computation is put forward. Various alternative expressions stemming from its definition, emphasizing computational and algorithmic aspects, are worked out. First, for the case of canonical Gibbs states, it is shown that the κ-entropy is cast in the form of an expectation value of an observable that is explicitly determined. Also, an operational method named "the two-temperatures protocol" is introduced that provides a way to obtain the κ-entropy in terms of the partition functions of two auxiliary Gibbs states with temperatures κ-shifted above (the hot system) and κ-shifted below (the cold system) the original system temperature. That protocol provides physical procedures for evaluating the entropy for any κ. Second, two additional novel ways of expressing the κ-entropy are introduced: one determined by a positive semi-definite quantum channel, with a Kraus-like operator sum representation and its extension to a unitary dilation via a qubit ancilla; another given as a simulation of the κ-entropy via the quantum circuit of a generalized version of the Hadamard test. Third, a simple inter-relation between the von Neumann entropy and the quantum κ-entropy is worked out, and a bound on their difference is evaluated and interpreted. Also, the effect on the κ-entropy of quantum noise, implemented as a random unitary quantum channel acting on the system's density matrix, is addressed, and a bound on the entropy, depending on the spectral properties of the noisy channel and the system's density matrix, is evaluated. The results obtained amount to a quantum computational tool-box for the κ-entropy that enhances its applicability in practical problems.

1. Introduction

The κ-entropy introduced two decades ago in the trilogy of papers [1,2,3] assumes the form
$$S_\kappa = -\frac{1}{2\kappa}\sum_{i=1}^{W}\left(\rho_i^{\,1+\kappa} - \rho_i^{\,1-\kappa}\right),$$
where $\rho = \{\rho_i\}$ is the probability density. The above entropy arises naturally in the context of Einstein's special relativity and generates a self-consistent κ statistical mechanics, which turns out to be a relativistic extension of classical Boltzmann–Gibbs statistical mechanics, recovered in the $\kappa\to0$ classical limit.
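For readers who want to experiment with the definition, the following minimal sketch (not part of the original derivation; the probability vector chosen is an arbitrary assumption) evaluates the classical κ-entropy numerically and checks that it tends to the Boltzmann–Gibbs–Shannon entropy in the κ → 0 limit.

```python
import numpy as np

def kappa_entropy(p, kappa):
    """Classical kappa-entropy S_k = -(1/2k) * sum_i (p_i^{1+k} - p_i^{1-k})."""
    p = np.asarray(p, dtype=float)
    return -np.sum(p**(1 + kappa) - p**(1 - kappa)) / (2 * kappa)

def shannon_entropy(p):
    p = np.asarray(p, dtype=float)
    return -np.sum(p * np.log(p))

# Example distribution (assumption: any normalized pdf with nonzero entries works here).
p = np.array([0.5, 0.3, 0.15, 0.05])

for kappa in (0.5, 0.1, 0.01, 0.001):
    print(f"kappa={kappa:6.3f}  S_kappa={kappa_entropy(p, kappa):.6f}")
print(f"Shannon (kappa -> 0 limit)      {shannon_entropy(p):.6f}")
```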
The persistent power-law tails of the cosmic ray spectrum, spanning 13 decades in energy and 33 decades in particle flux, turn out to be a purely relativistic effect correctly predicted by κ statistical mechanics; this result represents one of the greatest successes of the new theory.
The statistical theory based on S κ has an axiomatic structure and can also be introduced without reference to special relativity [4], since it also has applications outside of relativistic physics.
In the last two decades, many authors have studied the theoretical foundations of the underlying thermodynamics [5,6,7,8,9,10,11,12,13] and the mathematical structure of the theory, c.f. [14,15,16,17,18,19,20,21,22,23,24,25,26].
On the other hand, specific applications of the theory have been considered in various areas of the science of complex physical, natural or artificial, classical or quantum systems. With regard to the applications concerning quantum systems, we recall the studies devoted in particular to quantum mechanics [27,28], quantum hadrodynamics [29], quantum statistical mechanics, c.f. [30,31,32], quantum gravity [33,34,35,36,37,38,39,40] and quantum cosmology, c.f. [41,42,43,44,45].
The content of the present paper frames the quantum version of κ -entropy and relates it to the conceptual, mathematical and operational framework of quantum computation. The relations developed are organized into three scopes, each one containing two propositions, that form the three main chapters of the paper, respectively. All proofs are deferred to a final Appendix A. The following outline describes the matter:
Scope 1: The aim is to consider a canonical Gibbs state density matrix for some Hamiltonian and determine an operational form for its kappa canonical quantum entropy via the expectation value of a quantum observable, Proposition 1; and further, to introduce a two-temperatures protocol for measuring the canonical state kappa entropy, Proposition 2.
Scope 2: The aim (channel generating kappa entropy) is to express the kappa entropy for a general density matrix via a positive and trace-preserving quantum channel as well as via its unitary dilation, Proposition 3; and further, to simulate the kappa entropy via a generalized form of the quantum circuit of the so-called Hadamard test, Proposition 4.
Scope 3: The aim is to relate the kappa and von Neumann quantum entropies to each other and to determine bounds on their difference, Proposition 5; and further, to examine the effect of a typical noise, i.e., a quantum random unitary channel acting on an original quantum system, by evaluating bounds on the value of the kappa entropy of the transformed density matrix. To gain full generality for the result, the bound is shown to be determined by the spectral properties of both the channel and the system's density matrix, Proposition 6.
Motivations: κ -Entropy and its relations to other quantum entropies: Some of the motivation for the development of κ entropy in quantum information language is based on its relation to other standard quantum entropies, such as von Neumann (vNE) and Renyi entropy. Operational interpretations of vNE supporting its wide use in the quantum information field are well known, and two of them are invoked below. Here, they are useful in motivating a similar interpretation for the quantum κ entropy, c.f. Proposition 2 below. Similarly to vNE, the Renyi entropy shares common features with κ entropy, supporting the treatment of the latter in quantum language as outlined below.
vN entropy interpretation 1: Suppose that Alice prepares a quantum state ρ. Bob can then perform a particular POVM $\{\Lambda_x\}_{x\in X}$ to learn about the quantum system, where X denotes the random variable corresponding to the classical output of the POVM, i.e., $x\mapsto\Lambda_x$. The probability density function $x\mapsto p_X(x)$ of the random variable X is then $p_X(x) = \mathrm{Tr}(\rho\Lambda_x)$. The Shannon entropy of the POVM $\{\Lambda_x\}$ is denoted by $S_{sh}(X) = -\sum_{x\in X}p_X(x)\log p_X(x) = -\sum_{x\in X}\mathrm{Tr}(\rho\Lambda_x)\log\mathrm{Tr}(\rho\Lambda_x)$. The minimum Shannon entropy over all rank-1 POVMs is equal to a quantity which is identified with the von Neumann entropy $S_{vN}(\rho)$ of the density operator ρ. Explicitly, this optimization means that $S_{vN}(\rho) = \min_{\{\Lambda_x\}}\left(-\sum_{x\in X}\mathrm{Tr}(\rho\Lambda_x)\log\mathrm{Tr}(\rho\Lambda_x)\right)$, where the minimum is restricted to rank-1 POVMs, i.e., those with $\{\Lambda_x = |\psi_x\rangle\langle\psi_x|\}_{x\in X}$ satisfying $\sum_{x\in X}\Lambda_x = \sum_{x\in X}|\psi_x\rangle\langle\psi_x| = I$, where the latter implies the completeness of the states $\{|\psi_x\rangle\}_{x\in X}$ (p. 256, [46]).
vN entropy interpretation 2: Suppose that Alice generates a quantum state $|\psi_x\rangle$ in her lab according to some probability density $p_X(x)$ of a random variable X. Suppose further that Bob has not yet received the state from Alice and does not know which one she sent. The expected density operator from Bob's point of view is then $\rho = \mathbb{E}_X|\psi_X\rangle\langle\psi_X| = \sum_{x\in X}p_X(x)|\psi_x\rangle\langle\psi_x|$. The interpretation of the entropy $S_{vN}(\rho)$ is that it quantifies Bob's uncertainty about the state Alice sent; his expected information gain is $S_{vN}(\rho)$ qubits upon receiving and measuring the state that Alice sends (p. 254, [46]).
Remark: The development of κ entropy along the lines of quantum information put forward in this paper opens the possibility of using it to quantify, e.g., quantum entanglement, a procedure that has been carried out by other types of entropy, c.f. the entanglement Renyi α -entropy (ERαE) index, c.f. [47] and references therein. Below an outline of the important features of Renyi α -entropy and their comparison with those of κ entropy is provided that supports this line of inquiry [48].
Renyi and κ entropy: The presence of the logarithm of the probability density in the expression of the von Neumann entropy $-\mathrm{Tr}(\rho\ln\rho)$ implies that the entropy calculation requires the complete spectrum of the density matrix, obtained through its diagonalization, which can be computationally intensive for large systems. The Renyi entropy $\frac{1}{1-\alpha}\ln\left(\mathrm{Tr}\,\rho^{\alpha}\right)$ instead involves a power of the probability density and is often computationally easier to estimate than the von Neumann entropy. As a result, the Renyi entropy is more efficient and accessible for simulating complex quantum systems, as it relies on power traces rather than the full eigenvalue decomposition required by the von Neumann entropy, leading to faster and more scalable simulations.
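The computational contrast drawn in this paragraph can be illustrated with a short sketch (the density matrix, its dimension and the Rényi order α = 2 are arbitrary assumptions): the Rényi-2 entropy follows from the power trace Tr ρ², which needs only a matrix product, while the von Neumann entropy requires the full spectrum.

```python
import numpy as np

rng = np.random.default_rng(1)

# Random density matrix rho = G G^dagger / Tr(G G^dagger)  (assumption: dimension 8 is arbitrary).
G = rng.normal(size=(8, 8)) + 1j * rng.normal(size=(8, 8))
rho = G @ G.conj().T
rho /= np.trace(rho).real

# Renyi-2 entropy from a power trace: no diagonalization needed.
renyi_2 = -np.log(np.trace(rho @ rho).real)

# von Neumann entropy needs the full spectrum of rho.
evals = np.linalg.eigvalsh(rho)
von_neumann = -np.sum(evals * np.log(evals))

print(f"Renyi-2 entropy     : {renyi_2:.6f}")
print(f"von Neumann entropy : {von_neumann:.6f}")
```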
The Renyi entropy is very flexible and depends on a free parameter that controls the entropy value. This allows more flexible forms and different weighting schemes of the probability distribution to be considered.
An important advantage of the variability of the free Renyi parameter is that by fixing it appropriately, the entropy measure can emphasize different aspects of the probability distribution of a quantum state by focusing on the most probable components or on the less dominant contributions to better understand the complex nature of the entanglement and quantum correlations. This versatility of the Renyi entropy can be particularly beneficial in the study of critical phenomena where certain entanglement features might be obscured by a single, fixed measure, or in phenomena where entanglement scaling and phase transitions reveal subtle quantum effects.
The Renyi entropy may be more accessible in experimental and computational settings than techniques such as interference-based measurements. The advantages of the Renyi entropy over the von Neumann entropy not only allow us to better characterize quantum states and gain deeper insights into the distribution and exchange of information between entangled particles, but also to improve error analysis and algorithm optimization in quantum computing.
The properties of the Renyi entropy, which make it faster to compute and more flexible than the von Neumann entropy, are due to its expression and, in particular, its dependence on a power of the probability density with a free parameter as the exponent. Of course, other entropies have also been considered in the literature, which share with the Renyi entropy the fact that they are also constructed from powers of the probability density and therefore share the main qualitative properties of the Renyi entropy. The choice of an entropy that generalizes the von Neumann entropy and uses powers instead of the logarithm of the probability density in its definition is very difficult and subjective.
Here, the κ -entropy is considered as a new paradigm of quantum entropy for two different reasons. The first is that the κ -entropy, like the Renyi entropy, is defined from the power of the probability density and thus shares the main qualitative features of the Renyi entropy. The second reason for choosing the κ -entropy, which makes it more interesting, is that it has a physical origin. The κ parameter of entropy has its roots in Einstein’s theory of special relativity. It is a relativistic generalization of the Boltzmann entropy of classical statistical mechanics and thus of the von Neumann quantum entropy. The κ -entropy can therefore be seen as the relativistic generalization of the von Neumann entropy, which results when the parameter κ approaches zero. The study of κ -entropy in the context of quantum computing and quantum information therefore allows us to consider quantum systems that have relativistic properties. This gives us a more comprehensive view of the nature of quantum-relativistic phenomena.

2. Kappa Entropy for Canonical States

The aim of this section is to consider a canonical Gibbs state density matrix for some Hamiltonian and determine an operational form for its kappa canonical quantum entropy via the expectation value of a quantum observable, Proposition 1; and further, to introduce a two-temperatures protocol for measuring the canonical state kappa entropy of a given quantum system, Proposition 2. Consider the following:
Definition 1.
Let $\kappa\in[0,1)$ and let the density matrix $\rho\in D_N = \left\{\rho\in\mathbb{C}^{N\times N};\ \rho=\rho^{\dagger},\ \rho>0,\ \mathrm{Tr}\rho=1\right\}$; the kappa entropy reads
$$S_\kappa(\rho) = -\frac{1}{2\kappa}\,\mathrm{Tr}\left(\rho^{\,\kappa+1} - \rho^{\,-\kappa+1}\right),$$
where $\kappa+1\in[1,2)$ and $-\kappa+1\in(0,1]$.
Next, we show how the canonical state kappa entropy is expressed via the expectation value of a quantum observable.
Proposition 1.
The kappa entropy of a canonical state $\rho_{can} = \frac{1}{Z_T}e^{-\beta H}$, $\beta = \frac{1}{kT}$, is cast in the form of an expectation value of the measurement of the quantum observable $C_\kappa = \frac{1}{\kappa}\sinh\left(\kappa\left(\ln Z_T\, I + \beta H\right)\right)$ in the state $\rho_{can}$, i.e.,
$$S_\kappa(\rho_{can}) = \mathrm{Tr}\left(\rho_{can}\,C_\kappa\right) = \langle\rho_{can}, C_\kappa\rangle.$$
State and observable commute.
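A minimal numerical check of Proposition 1 is sketched below (assumptions: an arbitrary four-level Hamiltonian and arbitrary values of β and κ); it compares the κ-entropy computed from Definition 1 with the expectation value of the observable $C_\kappa$.

```python
import numpy as np

def mat_fun(A, f):
    """Apply a scalar function to a Hermitian matrix via its spectral decomposition."""
    w, V = np.linalg.eigh(A)
    return (V * f(w)) @ V.conj().T

kappa, beta = 0.3, 0.7
H = np.diag([0.0, 1.0, 2.5, 4.0])           # example Hamiltonian (assumption)
Z = np.trace(mat_fun(-beta * H, np.exp)).real
rho = mat_fun(-beta * H, np.exp) / Z        # canonical Gibbs state

# Kappa entropy directly from Definition 1.
S_direct = -np.trace(mat_fun(rho, lambda x: x**(1 + kappa) - x**(1 - kappa))).real / (2 * kappa)

# Kappa entropy as the expectation value of C_kappa = (1/kappa) sinh(kappa(ln Z * I + beta * H)).
C = mat_fun(np.log(Z) * np.eye(4) + beta * H, lambda x: np.sinh(kappa * x)) / kappa
S_expect = np.trace(rho @ C).real

print(S_direct, S_expect)   # the two numbers should agree
```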
Next, we put forward a two-temperatures protocol for measuring the canonical state kappa entropy,
Proposition 2.
The kappa entropy of a canonical state $\rho_{can} = \frac{1}{Z_T}e^{-\beta H}$ of temperature T and partition function $Z_T = \mathrm{Tr}\,e^{-\beta H}$ is simulated by two quantum systems described by the same Hamiltonian, each in the canonical Gibbs state of the respective κ-dependent temperatures $T_{cool} = \frac{T}{1+\kappa} < T$ (the cool system) and $T_{hot} = \frac{T}{1-\kappa} > T$ (the hot system), with corresponding partition functions $Z_{cool}\equiv Z_{T/(1+\kappa)}$ and $Z_{hot}\equiv Z_{T/(1-\kappa)}$. The kappa entropy is expressed in terms of the partition functions of the simulating systems as
$$S_\kappa(\rho_{can}) = -\frac{1}{2\kappa}\,\frac{1}{Z_T}\left(Z_{T/(1+\kappa)}\,Z_T^{-\kappa} - Z_{T/(1-\kappa)}\,Z_T^{\kappa}\right).\tag{2}$$
The protocol: For a given canonical density matrix $\rho_{can}$ with given Hamiltonian H and reference inverse temperature $\beta = \frac{1}{kT}$, apply the following two-temperatures protocol in order to determine the kappa entropy $S_\kappa(\rho_{can})$. Suppose the κ parameter is fixed; then introduce and control two temperatures, the high $T_{hot}\equiv T_{-\kappa} = \frac{T}{1-\kappa}$ and the low $T_{cool}\equiv T_{+\kappa} = \frac{T}{1+\kappa}$, lying above and below the reference temperature T. By varying the high and the low temperature independently and determining the partition functions indicated in Equation (2), it is possible to simulate the value of the kappa entropy of the initial Gibbs state for any value of T and κ. Operationally, this requires letting two copies of the original system of temperature T interact with heat baths that increase the temperature, $T\to T_{hot} = \frac{T}{1-\kappa}$, in the first copy and decrease it, $T\to T_{cool} = \frac{T}{1+\kappa}$, in the second copy, and then forming the combination of partition functions expressed in Equation (2).
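The two-temperatures protocol can be rehearsed numerically as follows (a sketch only; the energy spectrum, temperature and κ are arbitrary assumptions): the κ-entropy obtained from the partition functions of the cool and hot auxiliary systems is compared with the value computed directly from the Gibbs eigenvalues.

```python
import numpy as np

kappa, T, k_B = 0.4, 1.5, 1.0
energies = np.array([0.0, 0.8, 1.7, 3.2])            # spectrum of H (assumption)

def Z_of(temp):
    """Partition function Tr exp(-H/(k_B*temp)) for the diagonal spectrum above."""
    return np.sum(np.exp(-energies / (k_B * temp)))

Z_T    = Z_of(T)
Z_cool = Z_of(T / (1 + kappa))     # kappa-shifted below T
Z_hot  = Z_of(T / (1 - kappa))     # kappa-shifted above T

# Kappa entropy from the two-temperatures formula of Proposition 2.
S_protocol = -(Z_cool * Z_T**(-kappa) - Z_hot * Z_T**kappa) / (2 * kappa * Z_T)

# Kappa entropy directly from the Gibbs eigenvalues p_i = exp(-beta E_i)/Z.
p = np.exp(-energies / (k_B * T)) / Z_T
S_direct = -np.sum(p**(1 + kappa) - p**(1 - kappa)) / (2 * kappa)

print(S_protocol, S_direct)   # should coincide
```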

3. Quantum Channels for κ Entropy

The aim of this section is to express the kappa entropy for a general density matrix via a positive and trace-preserving quantum channel as well as via its unitary dilation, Proposition 3 [49,50,51,52]; and further, to simulate the kappa entropy via a generalized form of the quantum circuit of the so-called Hadamard test, [53], Proposition 4. We introduce the following:
Proposition 3.
By means of the formalism of the vectorization of matrices, $A\mapsto|A\rangle\rangle$, and of the purification of density matrices, $\rho\mapsto|\sqrt\rho\,\rangle\rangle$, the κ entropy $S_\kappa$ is expressed as
$$S_\kappa(\rho) = \mathrm{Tr}\left(\mathrm{Tr}_2\,\mathcal{E}_\rho\left(|I\rangle\rangle\langle\langle I|\right)\right),$$
where for any $\nu\in\mathbb{C}^{N\times N}\otimes\mathbb{C}^{N\times N}$, the positive semi-definite map $\mathcal{E}_\rho:\ \mathbb{C}^{N\times N}\otimes\mathbb{C}^{N\times N}\to\mathbb{C}^{N\times N}\otimes\mathbb{C}^{N\times N}$ is introduced as $\mathcal{E}_\rho(\nu) = R_+(\rho)\,\nu\,R_+(\rho)^{\dagger} - R_-(\rho)\,\nu\,R_-(\rho)^{\dagger}$, with (Kraus-like) generators
$$R_\pm(\rho) = \frac{1}{\sqrt{2\kappa}}\left(\rho^{\frac{1\mp\kappa}{2}}\otimes I\right).$$
The map $\mathcal{E}_\rho$ is also expressed in an extended space by adding an auxiliary qubit. The density matrix of the total, auxiliary+system, is defined on the matrix space $\mathbb{C}^{2\times2}\otimes\mathbb{C}^{N\times N}\otimes\mathbb{C}^{N\times N}$. The map $\mathcal{E}_\rho$ is explicitly obtained as
$$\mathcal{E}_\rho(\nu) = \mathrm{Tr}_1\left[U(\rho)\left(\sigma_3\otimes\nu\right)U(\rho)^{\dagger}\right],$$
where, with the conditional gate $U(\rho) = P_0\otimes R_+(\rho) + P_1\otimes R_-(\rho)$, the channel reads
$$\mathcal{E}_\rho(\nu) = \mathrm{Tr}_1\left[\begin{pmatrix}R_+(\rho) & 0\\ 0 & R_-(\rho)\end{pmatrix}\begin{pmatrix}\nu & 0\\ 0 & -\nu\end{pmatrix}\begin{pmatrix}R_+(\rho)^{\dagger} & 0\\ 0 & R_-(\rho)^{\dagger}\end{pmatrix}\right].$$
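A numerical sketch of Proposition 3 is given below; it uses row-major vectorization for $|A\rangle\rangle$ and the Kraus-like generators in the sign convention written above (that assignment, like the random test state, is an assumption of this illustration).

```python
import numpy as np

def mat_pow(A, a):
    """Real power of a positive-definite Hermitian matrix via eigendecomposition."""
    w, V = np.linalg.eigh(A)
    return (V * w**a) @ V.conj().T

rng = np.random.default_rng(0)
N, kappa = 3, 0.35
G = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
rho = G @ G.conj().T
rho /= np.trace(rho).real

I_N = np.eye(N)
vec_I = I_N.reshape(-1, 1)                 # |I>> : row-major vectorization of the identity
nu = vec_I @ vec_I.conj().T                # |I>><<I|

# Kraus-like generators R_+/- = (1/sqrt(2 kappa)) (rho^{(1 -/+ kappa)/2} (x) I)
R_plus  = np.kron(mat_pow(rho, (1 - kappa) / 2), I_N) / np.sqrt(2 * kappa)
R_minus = np.kron(mat_pow(rho, (1 + kappa) / 2), I_N) / np.sqrt(2 * kappa)
E_nu = R_plus @ nu @ R_plus.conj().T - R_minus @ nu @ R_minus.conj().T

# Partial trace over the second factor, then the full trace gives S_kappa.
E_blocks = E_nu.reshape(N, N, N, N)        # indices (i, j, k, l) of (C^N (x) C^N) x (C^N (x) C^N)
Tr2 = np.einsum('ijkj->ik', E_blocks)      # trace over the second subsystem
S_channel = np.trace(Tr2).real

S_direct = -np.trace(mat_pow(rho, 1 + kappa) - mat_pow(rho, 1 - kappa)).real / (2 * kappa)
print(S_channel, S_direct)
```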
Kappa entropy via generalized Hadamard test circuit:
We devise an operational construction which enables the implementation of the transformation $\rho^{\pm\kappa}\mapsto\rho^{1\pm\kappa}$ and further generates the quantity $S_\kappa$. To this end, we next provide a quantum measurement procedure that evaluates the traces $\mathrm{Tr}\,\rho^{1\pm\kappa}$, based on an extension to the density matrix formalism of the idea of the so-called 'Hadamard test', initially used for pure states (see, e.g., [53]). The quantum circuit of the proposed measurement is given in Figure 1.
Notation: $Ad_X$ stands for the adjoint action of an operator $X\in\mathbb{C}^{N\times N}$, i.e., $Ad_X(\cdot) = X(\cdot)X^{\dagger}$. E.g., let the controlled-X gate be $V_{cX} = P_0\otimes X + P_1\otimes I_N$, acting on the composite system with control on the qubit state and target on the reference system; the notation $Ad_{V_{cX}}$ means the adjoint action, i.e.,
$$Ad_{V_{cX}}(\cdot) = V_{cX}(\cdot)V_{cX}^{\dagger} = \left(P_0\otimes X + P_1\otimes I\right)(\cdot)\left(P_0\otimes X + P_1\otimes I\right)^{\dagger}.$$
Simulation of kappa entropy via generalized Hadamard test quantum circuit (Proposition 4; c.f. Figure 1).
Proposition 4.
Consider attaching to the Hilbert space $\mathbb{C}^N$ of the reference quantum system an auxiliary qubit, so that the total state space is $\mathcal{H}\equiv\mathbb{C}^2\otimes\mathbb{C}^N$. Next, consider the initial state $|0\rangle\langle0|\otimes\rho$, where ρ denotes the density matrix of the system for which we want to evaluate the kappa entropy. For a matrix $M\in\mathrm{End}(\mathbb{C}^N)\equiv\mathbb{C}^{N\times N}$, introduce the following map $S[M]: D(\mathcal{H})\to D(\mathcal{H})$ on the space of density matrices $D(\mathcal{H})\equiv D(\mathbb{C}^2\otimes\mathbb{C}^N)$; explicitly, for H the Hadamard matrix, the map $S[M]$ is determined by composing the transformations $Ad_{H\otimes I_N}$ and $Ad_{V_{cM}}$, both of which operate on the composite qubit+reference system, and reads
$$S[M]\equiv Ad_{H\otimes I_N}\circ Ad_{V_{cM}}\circ Ad_{H\otimes I_N}.$$
For Ω and M operators on the auxiliary qubit and the reference system, respectively, define the map $M\mapsto T_\Omega[M]$,
$$T_\Omega[M]\equiv\mathrm{Tr}_1\left[\left(\Omega\otimes I_N\right)S[M]\right],$$
parametrized by Ω and M. Acting on the initial state $|0\rangle\langle0|\otimes\rho$, the map $T_\Omega[M]$ yields $T_{\pm\sigma_3}[\rho^{\pm\kappa}] = \pm\rho^{1\pm\kappa}$. Denoting $T_{\pm\sigma_3}\equiv T_\pm$, the kappa entropy $S_\kappa(\rho)$ is generated by acting on $|0\rangle\langle0|\otimes\rho$ as
$$\mathrm{Tr}\left[\frac{1}{2\kappa}\left(T_+[\rho^{-\kappa}] + T_-[\rho^{\kappa}]\right)\left(|0\rangle\langle0|\otimes\rho\right)\right] = -\frac{1}{2\kappa}\,\mathrm{Tr}\left[\rho^{1+\kappa} - \rho^{1-\kappa}\right] = S_\kappa(\rho).$$
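The generalized Hadamard test of Proposition 4 can be simulated directly with matrices, as in the following sketch (the test state, its dimension and κ are arbitrary assumptions; the controlled gate is taken with the control acting on |0⟩, as in the notation above).

```python
import numpy as np

def mat_pow(A, a):
    w, V = np.linalg.eigh(A)
    return (V * w**a) @ V.conj().T

rng = np.random.default_rng(7)
N, kappa = 4, 0.25
G = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
rho = G @ G.conj().T
rho /= np.trace(rho).real

I_N = np.eye(N)
H2 = np.array([[1, 1], [1, -1]]) / np.sqrt(2)          # Hadamard gate
P0, P1 = np.diag([1.0, 0.0]), np.diag([0.0, 1.0])
sigma3 = np.diag([1.0, -1.0])

def S_map(M, state):
    """S[M] = Ad_{H(x)I} o Ad_{V_cM} o Ad_{H(x)I} acting on a qubit+system density matrix."""
    HI = np.kron(H2, I_N)
    VcM = np.kron(P0, M) + np.kron(P1, I_N)            # controlled-M (control on |0>)
    U = HI @ VcM @ HI
    return U @ state @ U.conj().T

def T_map(Omega, M, state):
    """T_Omega[M]: weigh the ancilla by Omega after S[M], then trace out the qubit."""
    out = (np.kron(Omega, I_N) @ S_map(M, state)).reshape(2, N, 2, N)
    return np.einsum('iaib->ab', out)                   # partial trace over the qubit

state0 = np.kron(np.diag([1.0, 0.0]), rho)              # initial state |0><0| (x) rho

# T_{+sigma3}[rho^{-kappa}] gives rho^{1-kappa}; T_{-sigma3}[rho^{kappa}] gives -rho^{1+kappa}.
combo = (T_map(sigma3, mat_pow(rho, -kappa), state0)
         + T_map(-sigma3, mat_pow(rho, kappa), state0)) / (2 * kappa)
S_circuit = np.trace(combo).real

S_direct = -np.trace(mat_pow(rho, 1 + kappa) - mat_pow(rho, 1 - kappa)).real / (2 * kappa)
print(S_circuit, S_direct)
```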

4. Set of Values and Bounds for κ  Entropy

In this section, two questions are investigated: (i) What is the set of values of a (scaled) difference between the von Neumann and the κ entropy (Proposition 5)? (ii) If an input density matrix is transformed by a random unitary quantum channel, how is the κ-entropy $S_\kappa$ of the output density matrix affected? Explicit upper bounds on $S_\kappa$ are estimated that are determined by the stochastic properties of the channel generators, as well as by the spectral properties of the input and output density matrices (Proposition 6).
Proposition 5.
The κ quantum entropy, within the validity of the $\mathrm{sinhc}_\pi$ function approximation [54,55,56,57], is
$$S_\kappa(\rho) = S_{vN}(\rho) - \lim_{n\to\infty}\mathrm{Tr}\left[\rho\ln\rho\sum_{m=1}^{n}\frac{1}{(2m+1)!}\left(\pi\kappa\ln\rho\right)^{2m}\right],$$
so it reads as a κ-dependent correction to the von Neumann quantum entropy.
The following bounds apply for the kappa entropy in terms of the vN entropy:
$$S_{vN}(\rho) + \lim_{n\to\infty}\frac{n}{(2n+1)!}\,\lambda_{\min}^{\,n}\,\langle u_{\max}|\left(-\Lambda\ln\Lambda\right)|u_{\max}\rangle\ \leq\ S_\kappa(\rho)\ \leq\ S_{vN}(\rho) + \lim_{n\to\infty}\frac{n}{6}\,\lambda_{\max}\,\langle u_{\max}|\left(-\Lambda\ln\Lambda\right)|u_{\max}\rangle.$$
Also, in terms of the maximal eigenvalue and eigenvector singled out by the Perron–Frobenius theorem, it is found that in the asymptotic limit ($n\to\infty$) the scaled difference of the two entropies takes values in the interval
$$\frac{S_\kappa(\rho) - S_{vN}(\rho)}{\langle u_{\max}|\left(-\Lambda\ln\Lambda\right)|u_{\max}\rangle}\ \in\ [0,\infty).$$
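The statement can be probed numerically with the sketch below (the random test state is an assumption of this illustration). It verifies the sinhc-modulation identity underlying Proposition 5 and that the κ-entropy is never smaller than the von Neumann entropy, consistent with the non-negative scaled difference.

```python
import numpy as np

def sinhc(x):
    """Hyperbolic cardinal sine, sinhc(x) = sinh(x)/x with sinhc(0) = 1."""
    x = np.asarray(x, dtype=float)
    out = np.ones_like(x)
    nz = x != 0
    out[nz] = np.sinh(x[nz]) / x[nz]
    return out

rng = np.random.default_rng(3)
N, kappa = 5, 0.3
G = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
rho = G @ G.conj().T
rho /= np.trace(rho).real

p = np.linalg.eigvalsh(rho)                      # spectrum of rho

S_vN = -np.sum(p * np.log(p))
S_kappa = -np.sum(p**(1 + kappa) - p**(1 - kappa)) / (2 * kappa)

# sinhc-modulation identity: S_kappa = -sum_i p_i ln(p_i) sinhc(kappa ln(p_i))
S_modulated = -np.sum(p * np.log(p) * sinhc(kappa * np.log(p)))

print(S_kappa, S_modulated)          # identical
print(S_kappa >= S_vN)               # the kappa entropy dominates the von Neumann entropy
```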
Bounded change of the kappa entropy due to quantum random unitary channel (Proposition 6) (see [58,59,60]):
Proposition 6.
Let $\mathcal{E}$ be a quantum random unitary channel that induces the transformation $\rho\mapsto\rho' = \mathcal{E}(\rho) = \sum_i p_i A_i\rho A_i^{\dagger}$, with unitary generators $A_i$ and vector of weights $p = (p_i)$. Also, let λ, λ' be the eigenvalue (stochastic) vectors of the density matrices ρ and ρ', respectively. The transformed density matrix is used together with the channel to derive or estimate the κ-entropy. Its components $\mathrm{Tr}\,\rho'^{\,\pm\kappa+1}$ are bounded by quantities determined by the spectral parameters of the input density matrix and by the spectral and stochastic parameters of the channel, as follows:
$$\mathrm{Tr}\,\rho'^{\,\kappa+1}\ \leq\ \eta_{\kappa}\left(\|p\|\,\|\lambda\|\right)^{\kappa+1},\qquad\mathrm{Tr}\,\rho'^{\,-\kappa+1}\ \leq\ \eta_{-\kappa}\left(\|p\|\,\|\lambda\|\right)^{-\kappa+1}.$$
In the proof, the quantity $\eta_{\pm\kappa} = \sum_i\left(\mathrm{Tr}\,H^{(i)}H^{(i)T}\right)^{\frac{\pm\kappa+1}{2}}$ has been introduced, where the matrices $H^{(i)}_{jm}\equiv\left(h^{j}\right)_{im}$ and the circulant permutation $h|n\rangle = |n+1\ \mathrm{mod}\ N\rangle$ have been used, along with the following lemma [61,62,63,64]:
Lemma 1.
Let the quantum random unitary channel transformation be $\rho\mapsto\rho' = \mathcal{E}(\rho) = \sum_i p_i A_i\rho A_i^{\dagger}$, with the $A_i$ unitary. Let λ, λ' be the eigenvalue vectors of the density matrices ρ and ρ', respectively, which are related by the unistochastic matrix $\Delta_{\mathcal{E}} = \sum_i p_i\,A_i\circ A_i^{*}$ (entrywise product) as $\lambda' = \Delta_{\mathcal{E}}\lambda$. If, via Birkhoff's theorem, the bi-stochastic matrix $\Delta_{\mathcal{E}}$ is decomposed as a convex combination of circulant permutations $h^{i}$, then $\Delta_{\mathcal{E}} = \sum_i p_i\,h^{i}$.
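The setting of Proposition 6 can be set up numerically as follows (a sketch only: it builds a random unitary channel with arbitrary weights and unitaries and reports the κ-entropy before and after the channel; it does not evaluate the bound itself).

```python
import numpy as np

def kappa_entropy(rho, kappa):
    p = np.linalg.eigvalsh(rho)
    p = p[p > 1e-12]                                  # guard against numerically zero eigenvalues
    return -np.sum(p**(1 + kappa) - p**(1 - kappa)) / (2 * kappa)

rng = np.random.default_rng(11)
N, kappa = 4, 0.2

# Input density matrix.
G = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
rho = G @ G.conj().T
rho /= np.trace(rho).real

# Random unitary channel E(rho) = sum_i p_i A_i rho A_i^dagger  (random unitaries, random weights).
weights = rng.dirichlet(np.ones(3))
unitaries = [np.linalg.qr(rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N)))[0] for _ in weights]
rho_out = sum(w * A @ rho @ A.conj().T for w, A in zip(weights, unitaries))

print("S_kappa(rho)     =", kappa_entropy(rho, kappa))
print("S_kappa(E(rho))  =", kappa_entropy(rho_out, kappa))
```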

5. Summary and Outlook

The material covered in this paper provides tools and concepts from the field of quantum computation and information in order to enable a useful interaction with the theory of kappa entropy. The canonical state kappa entropy is shown to offer a framework where quantum computing inspires operational methods for further analyzing the kappa entropy and motivates quantum mechanical measurements based on the entropy. Similarly, the quantum channel formalism and the constructive method of the Hadamard circuit that have been developed for κ entropy transformation and generation reveal an intimate relation of those techniques with the entropy, and applications along those lines are anticipated. Furthermore, the interrelations between the von Neumann entropy and the κ entropy place the latter within the broad field of entropies, a fact that would enable the κ entropy to be applied in the fields of open quantum systems and master equations. Finally, we mention some specific research topics that the formalism put forward here could help investigate. Examples are as follows: (i) How does the κ entropy evolve in time for the density matrix of a qubit system evolving temporally by means of a time-dependent unitary evolution channel with a Kraus generator governed by a propagator determined by an $SU(2)$ or $SU(1,1)$ coherent state path integral? [65]. (ii) How can the κ entropy quantify the entanglement developed between the coin system and the walker system in the course of diffusion of an anyonic quantum walk? [66]. (iii) A similar question to (i), but now with a Kraus generator governed by the Hamiltonian of a qubit that has developed a Berry adiabatic geometric phase in the framework of the sudden-adiabatic approximation [67]. Also, unlike the three previous dynamical studies, a novel kinematical study for the κ entropy is suggested: how does the κ entropy of a bipartite quantum system of composite dimension d, e.g., d = 6, change if the parent system decomposes naturally, based on the so-called prime decomposition, into two subsystems of dimension $d_A = 2$ (qubit) and $d_B = 3$ (qutrit)? [64]. Finally, addressing questions related to the relativistic origin and aspects of the κ entropy [4] is an interesting, open area that would benefit from the present formalism.

Author Contributions

Conceptualization, D.E.; Methodology, D.E.; Investigation, D.E.; Writing—original draft, D.E.; Writing—review & editing, D.E. and G.K. Additionally, G.K. checked the compatibility of the results with the general properties of the κ-entropy and the comparison of the results with other generalized entropies. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable to this article.

Acknowledgments

One of us (D.E.) is grateful to A. Manousakis for discussions.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A. Proofs

Lemma A1
(Technical Lemma). The various powers of the density matrices that appear throughout the paper are all obtained via the spectral (canonical) decomposition of the density matrix ρ. Recall that in finite dimension the density matrix belongs to $D_N = \left\{\rho\in\mathbb{C}^{N\times N};\ \rho=\rho^{\dagger},\ \rho>0,\ \mathrm{Tr}\rho=1\right\}$, with index set $\mathbb{N}_N = \{0,1,\ldots,N-1\}$, while its infinite-dimensional version belongs to $D_\infty = \left\{\rho\in\mathbb{C}^{\mathbb{N}_0\times\mathbb{N}_0};\ \rho=\rho^{\dagger},\ \rho>0,\ \mathrm{Tr}\rho=1\right\}$. Given the solution of the eigenvalue problem for ρ, its spectral decomposition follows as $\rho = U\Lambda U^{\dagger} = \sum_{n\in\mathbb{N}_N}\lambda_n\,U|n\rangle\langle n|U^{\dagger}$, where $\rho U = U\Lambda$ with $\Lambda|n\rangle = \lambda_n|n\rangle$; the eigenvalues $\{\lambda_n\}_{n=0}^{N-1}$ have the properties $\lambda_n\in[0,1]$ and $\sum_{n=0}^{N-1}\lambda_n = 1$; and $|u_n\rangle = U|n\rangle$ are the corresponding eigenvectors $\{|u_n\rangle\}_{n=0}^{N-1}$, expressed in the canonical basis $\{|n\rangle\}_{n=0}^{N-1}$ and forming an orthonormal and complete basis. Analogous properties are valid for the infinite-dimensional case $D_\infty$, where the set $\mathbb{N}_N$ is substituted by $\mathbb{N}_0$. Powers of the density matrix are evaluated following the properties of the decomposition, i.e., $\rho^{\kappa} = U\Lambda^{\kappa}U^{\dagger} = \sum_{n\in\mathbb{N}_N}\lambda_n^{\kappa}|u_n\rangle\langle u_n|$. A negative power requires ρ to be non-singular, i.e., to have all its eigenvalues non-zero, etc. [49]. Throughout, $\kappa\in[0,1)$ and $\rho\in D_N$, as in Definition 1.
Proof of Proposition 1.
Canonical distribution: Let $\rho^{1\pm\kappa} = e^{(1\pm\kappa)\ln\rho}$, and let the canonical Gibbs state be $\rho_{can} = \frac{1}{Z}e^{-\beta H}$, with $Z = \mathrm{Tr}\left(e^{-\beta H}\right)$. Then
$$\ln\rho_{can} = \ln\left(\tfrac{1}{Z}e^{-\beta H}\right) = -\left(I\ln Z + \beta H\right).$$
Denote $\beta_{\pm\kappa}:=(1\pm\kappa)\beta$ and $Z_{\pm\kappa}:=Z^{1\pm\kappa}$; then similarly
$$\rho_{can}^{1\pm\kappa} = e^{(1\pm\kappa)\ln\rho_{can}} = e^{-(1\pm\kappa)\left(I\ln Z + \beta H\right)} = e^{-(1\pm\kappa)\ln Z\,I}\,e^{-(1\pm\kappa)\beta H} = \frac{1}{Z^{1\pm\kappa}}\,e^{-(1\pm\kappa)\beta H}.$$
Proceed now to a quantum measurement determination of the kappa entropy. Write $\rho^{1\pm\kappa} = \rho\,\rho^{\pm\kappa}$ and assume $\rho = \rho_{can} = \frac{1}{Z}e^{-\beta H}$; then
$$\rho_{can}^{1\pm\kappa} = \rho_{can}\,\rho_{can}^{\pm\kappa} = \frac{1}{Z}e^{-\beta H}\,\frac{1}{Z^{\pm\kappa}}e^{\mp\kappa\beta H} = \rho_{can}\,\frac{1}{Z^{\pm\kappa}}e^{\mp\kappa\beta H}.$$
Computing the kappa entropy for the canonical state yields
$$S_\kappa(\rho_{can}) = -\frac{1}{2\kappa}\,\mathrm{Tr}\left(\rho_{can}^{1+\kappa} - \rho_{can}^{1-\kappa}\right) = -\frac{1}{2\kappa}\,\mathrm{Tr}\left(\rho_{can}\left(\rho_{can}^{\kappa} - \rho_{can}^{-\kappa}\right)\right),$$
and further
$$S_\kappa(\rho_{can}) = -\frac{1}{2\kappa}\,\mathrm{Tr}\left[\frac{1}{Z}e^{-\beta H}\left(Z^{-\kappa}e^{-\kappa\beta H} - Z^{\kappa}e^{\kappa\beta H}\right)\right] = \frac{1}{2\kappa}\,\mathrm{Tr}\left[\frac{1}{Z}e^{-\beta H}\left(\left(Z^{\kappa} - Z^{-\kappa}\right)\cosh(\kappa\beta H) + \left(Z^{\kappa} + Z^{-\kappa}\right)\sinh(\kappa\beta H)\right)\right].$$
By means of the expressions $\frac{1}{2}\left(Z^{\kappa} - Z^{-\kappa}\right) = \sinh(\kappa\ln Z)$ and $\frac{1}{2}\left(Z^{\kappa} + Z^{-\kappa}\right) = \cosh(\kappa\ln Z)$, we compute
$$S_\kappa(\rho_{can}) = \frac{1}{\kappa}\,\mathrm{Tr}\left[\rho_{can}\left(\sinh(\kappa\ln Z)\cosh(\kappa\beta H) + \cosh(\kappa\ln Z)\sinh(\kappa\beta H)\right)\right],$$
and thanks to the identity $\sinh(x)\cosh(y) + \cosh(x)\sinh(y) = \sinh(x+y)$, we obtain
$$S_\kappa(\rho_{can}) = \mathrm{Tr}\left[\rho_{can}\cdot\frac{1}{\kappa}\sinh\left(\kappa\,I\ln Z + \kappa\beta H\right)\right].\qquad\square$$
Proof of Proposition 2.
Two-temperatures simulation protocol of the kappa entropy. Since always $1\pm\kappa>0$, write
$$(1\pm\kappa)\beta = (1\pm\kappa)\frac{1}{kT} = \frac{1}{k\frac{T}{1\pm\kappa}} = \frac{1}{kT_{\pm\kappa}},$$
and define the effective temperatures
$$T_{cool}\equiv T_{+\kappa} = \frac{T}{1+\kappa} < T,\qquad T_{hot}\equiv T_{-\kappa} = \frac{T}{1-\kappa}\geq T.$$
So, compute the powers $\rho_{can}^{1\pm\kappa}$:
$$\rho_{can}^{1\pm\kappa} = \frac{1}{Z^{1\pm\kappa}}\,e^{-\frac{1\pm\kappa}{kT}H} = \frac{1}{Z^{1\pm\kappa}}\,e^{-\frac{1}{kT_{\pm\kappa}}H}.$$
The kappa entropy reads
$$S_\kappa(\rho_{can}) = -\frac{1}{2\kappa}\,\mathrm{Tr}\left(\rho_{can}^{1+\kappa} - \rho_{can}^{1-\kappa}\right) = -\frac{1}{2\kappa}\,\mathrm{Tr}\left(\frac{1}{Z^{1+\kappa}}e^{-\frac{1}{kT_{+\kappa}}H} - \frac{1}{Z^{1-\kappa}}e^{-\frac{1}{kT_{-\kappa}}H}\right),$$
or
$$S_\kappa(\rho_{can}) = -\frac{1}{2\kappa}\,\mathrm{Tr}\left(\frac{Z_{cool}}{Z^{1+\kappa}}\,\rho_{cool} - \frac{Z_{hot}}{Z^{1-\kappa}}\,\rho_{hot}\right),$$
where the two auxiliary canonical density matrices are
$$\rho_{cool} = \frac{1}{Z_{cool}}e^{-\frac{1}{kT_{cool}}H},\quad Z_{cool} = \mathrm{Tr}\,e^{-\frac{1}{kT_{cool}}H};\qquad\rho_{hot} = \frac{1}{Z_{hot}}e^{-\frac{1}{kT_{hot}}H},\quad Z_{hot} = \mathrm{Tr}\,e^{-\frac{1}{kT_{hot}}H}.$$
Due to the normalization $\mathrm{Tr}\rho_{hot} = \mathrm{Tr}\rho_{cool} = 1$, the following holds:
$$S_\kappa(\rho_{can}) = -\frac{1}{2\kappa}\left(\frac{Z_{cool}}{Z^{1+\kappa}}\,\mathrm{Tr}\rho_{cool} - \frac{Z_{hot}}{Z^{1-\kappa}}\,\mathrm{Tr}\rho_{hot}\right) = -\frac{1}{2\kappa}\left(\frac{Z_{cool}}{Z^{1+\kappa}} - \frac{Z_{hot}}{Z^{1-\kappa}}\right) = -\frac{1}{2\kappa}\,\frac{Z_{cool}Z^{1-\kappa} - Z_{hot}Z^{1+\kappa}}{Z^{2}}.$$
To emphasize the reference temperature, denote Z by $Z_T$, so
$$S_\kappa(\rho_{can}) = -\frac{1}{2\kappa}\left(\frac{Z_{T/(1+\kappa)}}{Z_T^{1+\kappa}} - \frac{Z_{T/(1-\kappa)}}{Z_T^{1-\kappa}}\right) = -\frac{1}{2\kappa}\left(\frac{Z_{cool}}{Z_T^{1+\kappa}} - \frac{Z_{hot}}{Z_T^{1-\kappa}}\right) = -\frac{1}{2\kappa\,Z_T}\left(Z_{cool}\,Z_T^{-\kappa} - Z_{hot}\,Z_T^{\kappa}\right).$$
Protocol: cooling, heating, post-processing.
Example: let $\kappa = 2/3$.
Cool: $T\mapsto T_{cool} = \frac{T}{1+\kappa} = \frac{3}{5}T$ and evaluate $Z_{cool}$.
Heat: $T\mapsto T_{hot} = \frac{T}{1-\kappa} = 3T$ and evaluate $Z_{hot}$.
Post-process: compute $S_\kappa(\rho_{can})$ in terms of $Z_T$, $Z_{cool}(\kappa)$, $Z_{hot}(\kappa)$ and κ.
These steps can be repeated for any κ. $\square$
Proof of Proposition 3.
Preliminaries:
The “double-wedge” notation: Any bipartite quantum system may be described by a state vector
$$|\psi\rangle = \sum_{i=1}^{n_1}\sum_{j=1}^{n_2}c_{ij}\,|\phi_i\rangle\otimes|x_j\rangle,$$
with $\sum_{i=1}^{n_1}\sum_{j=1}^{n_2}|c_{ij}|^{2} = 1$. Consider the following state vector $|\psi\rangle$ of two qubits with basis $\{|0\rangle,|1\rangle\}$:
$$|\psi\rangle = c_{00}|00\rangle + c_{01}|01\rangle + c_{10}|10\rangle + c_{11}|11\rangle,\qquad A = \begin{pmatrix}c_{00} & c_{01}\\ c_{10} & c_{11}\end{pmatrix}.$$
There is an equivalent expression for bipartite systems, called the “double-wedge” ket vector $|A\rangle\rangle$. In particular, A is the matrix representation of the quantum state $|\psi\rangle$, whose element in position (i,j) is the respective coefficient $c_{ij}$ of $|\psi\rangle$; so, in this case,
$$|A\rangle\rangle = A_{00}|00\rangle + A_{01}|01\rangle + A_{10}|10\rangle + A_{11}|11\rangle.$$
Due to this description, a double-wedge ket $|A\rangle\rangle$ must also be normalized as a prerequisite of the preservation of the probability constraint, meaning that $\||A\rangle\rangle\|_2 = \|A\|_F = 1$, where the Euclidean vector norm equals the Frobenius matrix norm, i.e., $\||A\rangle\rangle\|_2 = \sqrt{\langle\langle A|A\rangle\rangle} = \sqrt{\mathrm{Tr}(A^{\dagger}A)} = \|A\|_F$. Moreover, the local transformation $|A\rangle\rangle\mapsto(V\otimes W)|A\rangle\rangle$, where V, W are unitaries, preserves the normalization as well as the spectra of the partial traces of the bipartite projector $|A\rangle\rangle\langle\langle A|$ (see below).
This “double-wedge” notation exploits the isomorphism between vectors in $\mathcal{H}_1\otimes\mathcal{H}_2$ and the $n_1\times n_2$ matrices, where $n_1$ and $n_2$ are the dimensions of $\mathcal{H}_1$ and $\mathcal{H}_2$, respectively. Due to the isomorphism, the matrices $A\in\mathbb{C}^{n_1\times n_2}$ and the vectors $|A\rangle\rangle\in\mathbb{C}^{n_1\cdot n_2}$ are related as follows:
$$A = \sum_{i=1}^{n_1}\sum_{j=1}^{n_2}A_{ij}\,|i\rangle\langle j|\quad\longleftrightarrow\quad|A\rangle\rangle = \sum_{i=1}^{n_1}\sum_{j=1}^{n_2}A_{ij}\,|i\rangle\otimes|j\rangle.$$
Similar to the ket, there is also a “double-wedge” bra vector:
$$\langle\langle A| = \left(|A\rangle\rangle\right)^{\dagger} = \sum_{i=1}^{n_1}\sum_{j=1}^{n_2}A_{ij}^{*}\,\langle i|\otimes\langle j|.$$
Some notable properties of the double-wedge ket notation include the following:
$$\left(A\otimes C^{T}\right)|B\rangle\rangle = |ABC\rangle\rangle,\qquad\left(A\otimes I\right)|B\rangle\rangle = |AB\rangle\rangle,\qquad\left(I\otimes C^{T}\right)|B\rangle\rangle = |BC\rangle\rangle,$$
$$\||A\rangle\rangle\|_2 = \sqrt{\langle\langle A|A\rangle\rangle} = \sqrt{\mathrm{Tr}(A^{\dagger}A)} = \|A\|_F,\qquad\langle\langle A|B\rangle\rangle = \mathrm{Tr}(A^{\dagger}B),\qquad|A\rangle\rangle + |B\rangle\rangle = |A+B\rangle\rangle.$$
Several notable examples of double-wedge state vectors exist; e.g., the Bell states are expressed via the Pauli matrices as follows:
$$\left|\tfrac{1}{\sqrt2}X\right\rangle\rangle = \tfrac{1}{\sqrt2}\left(|01\rangle + |10\rangle\right),\qquad\left|\tfrac{i}{\sqrt2}Y\right\rangle\rangle = \tfrac{1}{\sqrt2}\left(|01\rangle - |10\rangle\right),$$
$$\left|\tfrac{1}{\sqrt2}Z\right\rangle\rangle = \tfrac{1}{\sqrt2}\left(|00\rangle - |11\rangle\right),\qquad\left|\tfrac{1}{\sqrt2}I\right\rangle\rangle = \tfrac{1}{\sqrt2}\left(|00\rangle + |11\rangle\right).$$
Considering that a bipartite system is in a pure state, its respective density matrix ρ is equal to $|\psi\rangle\langle\psi|$, which is equivalent to $|A\rangle\rangle\langle\langle A|$ with reference to its double-wedge ket notation. In addition, the calculation of the reduced density matrices is performed according to the following formulas:
$$\mathrm{Tr}_2\left(|A\rangle\rangle\langle\langle A|\right) = AA^{\dagger},\qquad\mathrm{Tr}_1\left(|A\rangle\rangle\langle\langle A|\right) = A^{T}A^{*},$$
and
$$\langle\langle A|B\rangle\rangle = \mathrm{Tr}(A^{\dagger}B).$$
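The double-wedge identities quoted above are easy to confirm numerically; the sketch below uses row-major vectorization (an implementation choice of this illustration) and checks the product rule, the inner product and the two partial-trace formulas.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 3
A, B, C = (rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)) for _ in range(3))

vec = lambda M: M.reshape(-1)                 # |M>> as row-major vectorization

# (A (x) C^T)|B>> = |A B C>>
print(np.allclose(np.kron(A, C.T) @ vec(B), vec(A @ B @ C)))

# <<A|B>> = Tr(A^dagger B)
print(np.isclose(np.vdot(vec(A), vec(B)), np.trace(A.conj().T @ B)))

# Tr_2(|A>><<A|) = A A^dagger   and   Tr_1(|A>><<A|) = A^T A^*
proj = np.outer(vec(A), vec(A).conj()).reshape(n, n, n, n)
print(np.allclose(np.einsum('ijkj->ik', proj), A @ A.conj().T))
print(np.allclose(np.einsum('iajb->ab', proj), A.T @ A.conj()))
```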
Purified mixed-state kappa entropy:
Consider $\rho\in D$ and the map $\rho\mapsto|\sqrt\rho\,\rangle\rangle$. The reduced states obtained from the bipartite projector $|\sqrt\rho\,\rangle\rangle\langle\langle\sqrt\rho\,|$ by partial tracing over one of the two subsystems recover the original single-party density matrix, i.e.,
$$\mathrm{Tr}_2\left(|\sqrt\rho\,\rangle\rangle\langle\langle\sqrt\rho\,|\right) = \sqrt\rho\,\sqrt\rho^{\,\dagger} = \rho,\qquad\mathrm{Tr}_1\left(|\sqrt\rho\,\rangle\rangle\langle\langle\sqrt\rho\,|\right) = \sqrt\rho^{\,T}\sqrt\rho^{\,*} = \rho^{T}.$$
Purification is non-unique, since transforming the initial mixed state by a unitary operator W, $\rho\mapsto\rho' = W\rho W^{\dagger}$ (so that $\sqrt{\rho'} = W\sqrt\rho\,W^{\dagger}$), leads to the corresponding reduced density matrix, i.e.,
$$\mathrm{Tr}_2\left(|W\sqrt\rho\,W^{\dagger}\rangle\rangle\langle\langle W\sqrt\rho\,W^{\dagger}|\right) = W\sqrt\rho\,W^{\dagger}\left(W\sqrt\rho\,W^{\dagger}\right)^{\dagger} = W\rho W^{\dagger},\qquad\mathrm{Tr}_1\left(|W\sqrt\rho\,W^{\dagger}\rangle\rangle\langle\langle W\sqrt\rho\,W^{\dagger}|\right) = \left(W\rho W^{\dagger}\right)^{T}.$$
Further, consider the spectral decomposition of the density matrix, $\rho = VDV^{\dagger}$; then $\sqrt\rho = V\sqrt D\,V^{\dagger}$, and the purification map $\rho\mapsto|V\sqrt D\,V^{\dagger}\rangle\rangle$ gives
$$|\sqrt\rho\,\rangle\rangle = |V\sqrt D\,V^{\dagger}\rangle\rangle = \left(V\otimes V^{*}\right)|\sqrt D\,\rangle\rangle = \left(V\otimes V^{*}\right)\sum_{i=1}^{s}\sqrt{p_i}\,|i\rangle\otimes|i\rangle.$$
The 1-norm reads $\|\sqrt\rho\,\|_1 = \|\sqrt D\,\|_1 = \sum_{i=1}^{s}\sqrt{p_i}$, where $\mathrm{Sch}(\sqrt\rho\,) = s$ is the Schmidt number.
Next, we give equivalent forms expressing the density matrices of mixed quantum systems via their associated (non-unique) pure state vectors of a bipartite system: first,
$$|\sqrt\rho\,\rangle\rangle\langle\langle\sqrt\rho\,| = N\left(\sqrt\rho\otimes I\right)\frac{1}{N}|I\rangle\rangle\langle\langle I|\left(\sqrt\rho\otimes I\right)^{\dagger} = \left(\sqrt\rho\otimes I\right)|I\rangle\rangle\langle\langle I|\left(\sqrt\rho\otimes I\right)^{\dagger} = \left(I\otimes\sqrt\rho^{\,T}\right)|I\rangle\rangle\langle\langle I|\left(I\otimes\sqrt\rho^{\,T}\right)^{\dagger},$$
and also
$$|\rho\rangle\rangle\langle\langle\rho| = N\left(\rho\otimes I\right)\frac{1}{N}|I\rangle\rangle\langle\langle I|\left(\rho\otimes I\right)^{\dagger} = \left(\rho\otimes I\right)|I\rangle\rangle\langle\langle I|\left(\rho\otimes I\right)^{\dagger} = \left(I\otimes\rho^{T}\right)|I\rangle\rangle\langle\langle I|\left(I\otimes\rho^{T}\right)^{\dagger}.$$
Main proof: Utilizing the identity $\mathrm{Tr}_2\left(|A\rangle\rangle\langle\langle A|\right) = AA^{\dagger}$ for the case
$$\mathrm{Tr}_2\left(|\rho^{\frac{\pm\kappa+1}{2}}\rangle\rangle\langle\langle\rho^{\frac{\pm\kappa+1}{2}}|\right) = \rho^{\,\pm\kappa+1},$$
the following expression for the kappa entropy is obtained:
$$S_\kappa(\rho) = \frac{1}{2\kappa}\,\mathrm{Tr}\left[\mathrm{Tr}_2\left(|\rho^{\frac{-\kappa+1}{2}}\rangle\rangle\langle\langle\rho^{\frac{-\kappa+1}{2}}| - |\rho^{\frac{\kappa+1}{2}}\rangle\rangle\langle\langle\rho^{\frac{\kappa+1}{2}}|\right)\right] = \frac{1}{2\kappa}\,\mathrm{Tr}\left[\mathrm{Tr}_2\left(\left(\rho^{\frac{-\kappa+1}{2}}\otimes I\right)|I\rangle\rangle\langle\langle I|\left(\rho^{\frac{-\kappa+1}{2}}\otimes I\right)^{\dagger} - \left(\rho^{\frac{\kappa+1}{2}}\otimes I\right)|I\rangle\rangle\langle\langle I|\left(\rho^{\frac{\kappa+1}{2}}\otimes I\right)^{\dagger}\right)\right].$$
Thus $S_\kappa$ is expressed as
$$S_\kappa(\rho) = \mathrm{Tr}\left(\mathrm{Tr}_2\,\mathcal{E}\left(|I\rangle\rangle\langle\langle I|\right)\right),$$
where for any $\nu\in\mathbb{C}^{N\times N}\otimes\mathbb{C}^{N\times N}$, the positive semi-definite map $\mathcal{E}$ is introduced,
$$\mathcal{E}(\nu) = R_+\,\nu\,R_+^{\dagger} - R_-\,\nu\,R_-^{\dagger},$$
with (Kraus-like) generators
$$R_\pm = \frac{1}{\sqrt{2\kappa}}\left(\rho^{\frac{1\mp\kappa}{2}}\otimes I\right).$$
Via the relation $\rho^{\frac{1\mp\kappa}{2}} = e^{\frac{1\mp\kappa}{2}\ln\rho}$, the generators are also expressed in the form
$$R_\pm = \frac{1}{\sqrt{2\kappa}}\left(e^{\frac{1\mp\kappa}{2}\ln\rho}\otimes I\right) = \frac{1}{\sqrt{2\kappa}}\left(\sqrt\rho\otimes I\right)\left(e^{\mp\frac{\kappa}{2}\ln\rho}\otimes I\right).$$
Further, the map $\mathcal{E}$ is also expressed in an extended space by adding an auxiliary qubit. The density matrix of the total (auxiliary+reference) system is defined on the matrix space $\mathbb{C}^{2\times2}\otimes\mathbb{C}^{N\times N}\otimes\mathbb{C}^{N\times N}$. The map $\mathcal{E}$ is explicitly obtained as
$$\mathcal{E}(\nu) = \mathrm{Tr}_1\left[U\left(\sigma_3\otimes\nu\right)U^{\dagger}\right],$$
where $\sigma_3$ is the operator assigned to the auxiliary qubit and U is a conditional gate built from the generators $\rho^{\frac{1\mp\kappa}{2}}$,
$$U = P_0\otimes R_+ + P_1\otimes R_- = \begin{pmatrix}R_+ & 0\\ 0 & R_-\end{pmatrix} = \frac{1}{\sqrt{2\kappa}}\begin{pmatrix}\rho^{\frac{1-\kappa}{2}}\otimes I & 0\\ 0 & \rho^{\frac{1+\kappa}{2}}\otimes I\end{pmatrix},$$
which reads explicitly
$$U = \frac{1}{\sqrt{2\kappa}}\left(I_2\otimes\sqrt\rho\otimes I\right)\begin{pmatrix}e^{-\frac{\kappa}{2}\ln\rho}\otimes I & 0\\ 0 & e^{\frac{\kappa}{2}\ln\rho}\otimes I\end{pmatrix} = \frac{1}{\sqrt{2\kappa}}\left(I_2\otimes\sqrt\rho\otimes I\right)\,e^{-\frac{\kappa}{2}\,\sigma_3\otimes\ln\rho\otimes I}.$$
So, the channel reads
$$\mathcal{E}(\nu) = \mathrm{Tr}_1\left[\begin{pmatrix}R_+ & 0\\ 0 & R_-\end{pmatrix}\begin{pmatrix}\nu & 0\\ 0 & -\nu\end{pmatrix}\begin{pmatrix}R_+^{\dagger} & 0\\ 0 & R_-^{\dagger}\end{pmatrix}\right].\qquad\square$$
Addendum
$S_{\kappa=\frac12}$ via purification: Utilizing this possibility of expressing a mixed state as a reduction of its associated pure states, we express the kappa entropy $S_\kappa$ for $\kappa = \frac12$ as
$$S_{\kappa=\frac12}(\rho) = \|\rho^{\frac12}\|_1 - \|\rho\,\rho^{\frac12}\|_1 = \sum_{i=1}^{N}p_i^{1/2} - \sum_{i=1}^{N}p_i^{3/2}.$$
This result is valid for any density matrix ρ up to a unitary transformation: for $\rho\mapsto W\rho W^{\dagger}$,
$$\sqrt{W\rho W^{\dagger}} = W\sqrt\rho\,W^{\dagger} = WVD^{1/2}\left(WV\right)^{\dagger},$$
and then
$$|\sqrt{W\rho W^{\dagger}}\rangle\rangle = |W\sqrt\rho\,W^{\dagger}\rangle\rangle = \left(WV\otimes W^{*}V^{*}\right)|\sqrt D\,\rangle\rangle;$$
due to the local unitary invariance of the 1-norm,
$$\|W\sqrt\rho\,W^{\dagger}\|_1 = \|\sqrt\rho\,\|_1,$$
so $S_{\kappa=\frac12}(W\rho W^{\dagger}) = S_{\kappa=\frac12}(\rho)$.
Recall next the triangle inequality for the $l_1$ norm, $\|x\|_1 + \|y\|_1\geq\|x+y\|_1$, $\|x\|_1 - \|y\|_1\leq\|x-y\|_1$; applying it to the last equation leads to an upper bound for the kappa entropy, this time determined only by the spectrum of the related density matrix; it reads
$$S_{\kappa=\frac12}(\rho) = \|\rho^{\frac12}\|_1 - \|\rho\,\rho^{\frac12}\|_1\ \leq\ \|\rho^{\frac12} - \rho\,\rho^{\frac12}\|_1\ \leq\ \sum_{i=1}^{N}\left(p_i^{1/2} - p_i^{3/2}\right).$$
Proof of Proposition 4.
Consider attaching to the Hilbert space $\mathbb{C}^N$ of the reference quantum system an auxiliary qubit, so the total state space is $\mathcal{H}\equiv\mathbb{C}^2\otimes\mathbb{C}^N$. Next, consider the initial state $|0\rangle\langle0|\otimes\rho$, where ρ denotes the density matrix of the system for which we want to evaluate the kappa entropy. Introduce the following map on the density matrices, $S[M]: D(\mathcal{H})\to D(\mathcal{H})$, explicitly composed of maps acting on the composite qubit+reference system,
$$S[M]\equiv Ad_{H\otimes I}\circ Ad_{V_{cM}}\circ Ad_{H\otimes I}: D(\mathcal{H})\to D(\mathcal{H}),$$
acting on an initial state, i.e., $|0\rangle\langle0|\otimes\rho\mapsto S[M]\left(|0\rangle\langle0|\otimes\rho\right)$:
$$|0\rangle\langle0|\otimes\rho\ \xrightarrow{\ Ad_{H\otimes I}\ }\ \cdot\ \xrightarrow{\ Ad_{V_{cM}}\ }\ \cdot\ \xrightarrow{\ Ad_{H\otimes I}\ }\ S[M]\left(|0\rangle\langle0|\otimes\rho\right),$$
where the adjoint action $A\mapsto XAX^{\dagger}$ has been denoted as $A\mapsto Ad_X(A)$. Specifically, $Ad_{H\otimes I}$ stands for the local adjoint action of the Hadamard gate H on the auxiliary qubit state, i.e., $Ad_{H\otimes I}\left(\rho_2\otimes\rho_N\right) = H\rho_2H^{\dagger}\otimes\rho_N$. Also, $Ad_{V_{cM}}$ stands for the adjoint action of the controlled-M gate $V_{cM} = P_0\otimes M + P_1\otimes I$, acting on the composite system with control on the qubit state and the target on the reference system:
$$Ad_{V_{cM}}(\cdot) = V_{cM}(\cdot)V_{cM}^{\dagger} = \left(P_0\otimes M + P_1\otimes I\right)(\cdot)\left(P_0\otimes M + P_1\otimes I\right)^{\dagger}.$$
Next, let $M:\mathbb{C}^N\to\mathbb{C}^N$ be the matrix introduced before and let $\Omega:\mathbb{C}^2\to\mathbb{C}^2$ be a matrix acting on the auxiliary qubit space. Define the map $T_\Omega[M]$,
$$T_\Omega[M]\equiv\mathrm{Tr}_1\left[\left(\Omega\otimes I\right)S[M]\right],$$
parametrized by Ω and M. The map T describes the action taken after $S[M]$ on the total auxiliary-qubit+reference density matrix state, e.g., $|0\rangle\langle0|\otimes\rho$: on the resulting state $S[M]\left(|0\rangle\langle0|\otimes\rho\right)$, the mean value of the qubit-space operator Ω is measured, i.e., $\mathrm{Tr}_1\left[\left(\Omega\otimes I\right)S[M]\left(|0\rangle\langle0|\otimes\rho\right)\right]$.
Examples: the choice $\Omega = I$ leads to
$$\mathrm{Tr}_1\left[\left(\Omega\otimes I\right)S[M]\left(|0\rangle\langle0|\otimes\rho\right)\right] = \tfrac12\left(M\rho M^{\dagger} + \rho\right);$$
the choice $\Omega = |0\rangle\langle0| - |1\rangle\langle1|\equiv\sigma_3$ leads to
$$\mathrm{Tr}_1\left[\left(\Omega\otimes I\right)S[M]\left(|0\rangle\langle0|\otimes\rho\right)\right] = \tfrac12\left(\rho M^{\dagger} + M\rho\right).$$
Also, if $M = \rho$, then $T_{\sigma_3}[\rho]\left(|0\rangle\langle0|\otimes\rho\right) = \rho^{2}$; the choices $M = \rho^{\pm\kappa}$ then give
$$T_{\sigma_3}[\rho^{\pm\kappa}]\left(|0\rangle\langle0|\otimes\rho\right) = \rho^{1\pm\kappa}.$$
In explicit matrix form, writing the initial state as the block matrix $\begin{pmatrix}\rho & 0\\ 0 & 0\end{pmatrix}$,
$$S[M]\begin{pmatrix}\rho & 0\\ 0 & 0\end{pmatrix} = \frac14\begin{pmatrix}I & I\\ I & -I\end{pmatrix}\begin{pmatrix}M & 0\\ 0 & I\end{pmatrix}\begin{pmatrix}I & I\\ I & -I\end{pmatrix}\begin{pmatrix}\rho & 0\\ 0 & 0\end{pmatrix}\begin{pmatrix}I & I\\ I & -I\end{pmatrix}\begin{pmatrix}M^{\dagger} & 0\\ 0 & I\end{pmatrix}\begin{pmatrix}I & I\\ I & -I\end{pmatrix} = \frac14\begin{pmatrix}M+I & M-I\\ M-I & M+I\end{pmatrix}\begin{pmatrix}\rho & 0\\ 0 & 0\end{pmatrix}\begin{pmatrix}M^{\dagger}+I & M^{\dagger}-I\\ M^{\dagger}-I & M^{\dagger}+I\end{pmatrix} = \frac14\begin{pmatrix}(M+I)\rho(M^{\dagger}+I) & (M+I)\rho(M^{\dagger}-I)\\ (M-I)\rho(M^{\dagger}+I) & (M-I)\rho(M^{\dagger}-I)\end{pmatrix}.$$
The choice $\Omega = \sigma_3$, i.e., $\Omega\otimes I = I\oplus(-I)$, leads to
$$\mathrm{Tr}_1\left[\left(\sigma_3\otimes I\right)S[M]\begin{pmatrix}\rho & 0\\ 0 & 0\end{pmatrix}\right] = \frac14\left[(M+I)\rho(M^{\dagger}+I) - (M-I)\rho(M^{\dagger}-I)\right],$$
which yields
$$\mathrm{Tr}_1\left[\left(\sigma_3\otimes I\right)S[M]\begin{pmatrix}\rho & 0\\ 0 & 0\end{pmatrix}\right] = \frac14\left\{\rho M^{\dagger} + \rho + M\rho M^{\dagger} + M\rho - M\rho M^{\dagger} + M\rho + \rho M^{\dagger} - \rho\right\} = \frac12\left(\rho M^{\dagger} + M\rho\right).$$
Moreover,
$$\mathrm{Tr}_1\left[\left(\sigma_3\otimes I_N\right)S[\rho^{\kappa}]\begin{pmatrix}\rho & 0\\ 0 & 0\end{pmatrix}\right] = \rho^{1+\kappa},\qquad\mathrm{Tr}_1\left[\left(\sigma_3\otimes I_N\right)S[\rho^{-\kappa}]\begin{pmatrix}\rho & 0\\ 0 & 0\end{pmatrix}\right] = \rho^{1-\kappa}.$$
So, $T_{\pm\sigma_3}[\rho^{\pm\kappa}] = \pm\rho^{1\pm\kappa}$. Abbreviating $T_{\pm\sigma_3}$ to $T_\pm$ yields the kappa entropy
$$\mathrm{Tr}\left[\frac{1}{2\kappa}\left(T_+[\rho^{-\kappa}] + T_-[\rho^{\kappa}]\right)\left(|0\rangle\langle0|\otimes\rho\right)\right] = -\frac{1}{2\kappa}\,\mathrm{Tr}\left[\rho^{1+\kappa} - \rho^{1-\kappa}\right] = S_\kappa(\rho).\qquad\square$$
Proof of Proposition 5.
The following workflow outlines the content of the proof:
$$S_\kappa(\rho)\ \xrightarrow{\ \text{sinhc modulation}\ }\ S_\kappa(\rho) = -\mathrm{Tr}\left(\rho\ln\rho\;\mathrm{sinhc}(\kappa\ln\rho)\right)\ \xrightarrow{\ \mathrm{sinhc}_\pi\ \text{approximation}\ }\ S_\kappa(\rho) = S_{vN}(\rho) - \lim_{n\to\infty}\mathrm{Tr}\left(\rho\ln\rho\,\hat R_n\right)\ \xrightarrow{\ \text{PF-Thm}:\ \lim_{n\to\infty}\hat R_n\ }\ \frac{S_\kappa(\rho) - S_{vN}(\rho)}{\nu}\in[0,\infty).$$
The proof starts with the κ entropy $S_\kappa$ of a density matrix $\rho = U\Lambda U^{\dagger}$ and shows, by employing the sinhc function, that it admits two decompositions: one multiplicative, in which $S_\kappa$ is shown to equal the von Neumann entropy $S_{vN}$ modulated by sinhc as a kernel function, and one additive, in which $S_\kappa$ equals $S_{vN}$ plus a limiting operator term. This limiting term is computed by invoking the Perron–Frobenius theorem for positive matrices, applied to an operator built from $\ln\Lambda$. Finally, the scaled difference of the $S_\kappa$ and $S_{vN}$ entropies is shown to be a non-negative real number.
Let $\rho = \sum_{i=1}^{N}p_i\,|u_i\rangle\langle u_i| = U\Lambda U^{\dagger}$ be the spectral decomposition of the density matrix. Consider the real powers of the density matrix
$$\rho^{\,\pm\kappa+1} = U\Lambda^{\pm\kappa+1}U^{\dagger} = \sum_i p_i^{\,\pm\kappa+1}\,|u_i\rangle\langle u_i|,$$
where $p_i^{\,\pm\kappa}:= e^{\pm\kappa\ln p_i}$. Define the hyperbolic cardinal sine function [54],
$$\mathrm{sinhc}(x) = \begin{cases}1, & x = 0,\\[2pt]\dfrac{\sinh x}{x}, & x\in\mathbb{R}\setminus\{0\}\ \text{or}\ x\in\mathbb{C}\setminus\{0\},\end{cases}$$
or its normalized form
$$\mathrm{sinhc}_\pi(x) = \begin{cases}1, & x = 0,\\[2pt]\dfrac{\sinh(\pi x)}{\pi x}, & x\in\mathbb{R}\setminus\{0\}\ \text{or}\ x\in\mathbb{C}\setminus\{0\},\end{cases}$$
where $\sinh x = \frac12\left(e^{x} - e^{-x}\right)$.
Then (assuming $p_i\neq0$ for all i), the spectral decomposition of the kappa entropy reads
$$S_\kappa(\rho) = -\frac{1}{2\kappa}\,\mathrm{Tr}\left(\rho^{\,\kappa+1} - \rho^{\,-\kappa+1}\right) = -\frac{1}{2\kappa}\,\mathrm{Tr}\left(\rho\left(\rho^{\kappa} - \rho^{-\kappa}\right)\right) = -\frac{1}{\kappa}\,\mathrm{Tr}\left(\rho\sum_i\sinh\left(\kappa\ln p_i\right)|u_i\rangle\langle u_i|\right) = -\mathrm{Tr}\left(\sum_i p_i\ln p_i\,\frac{\sinh(\kappa\ln p_i)}{\kappa\ln p_i}\,|u_i\rangle\langle u_i|\right);$$
rearranging,
$$S_\kappa(\rho) = -\mathrm{Tr}\left(\sum_i p_i\ln p_i\;\mathrm{sinhc}\left(\kappa\ln p_i\right)|u_i\rangle\langle u_i|\right) = -\mathrm{Tr}\left(\rho\ln\rho\;\mathrm{sinhc}\left(\kappa\ln\rho\right)\right).\tag{A2}$$
Remark: The above Equation (A2), $S_\kappa(\rho) = -\mathrm{Tr}\left(\rho\ln\rho\;\mathrm{sinhc}(\kappa\ln\rho)\right)$, shows that the κ-entropy of a density operator ρ is expressed as a self-modulation of its von Neumann entropy $S_{vN}(\rho)$ by a kernel function identified with the sinhc function, with argument depending on $\kappa\ln p_i$, i.e., $\mathrm{sinhc}(\kappa\ln p_i)$, in the eigenbasis of the ρ operator. This property provides a direct relation between $S_\kappa$ and $S_{vN}$. Treating $S_\kappa$ as $S_{vN}$ with sinhc modulation for any discrete probability distribution function (d-pdf) suggests another application: the κ entropy could be applied when studying sequences of d-pdfs $p_1\succ p_2\succ p_3\succ\cdots\succ p_{n-1}\succ p_n$ that violate the majorization condition ≻ ([68]) and exhibit an evolution to an equilibrium state where the entropy is allowed to decrease locally, while it increases globally in the asymptotic limit. Such a phenomenon appears in the study of the evolution of a quantum walk ([69]). The evolution of the d-pdf in that case, due to the breaking of the majorization ordering, exhibits a modulation of an otherwise monotonic sequence of entropy values within small subsets; c.f. Figure 7 and the related discussion in ([69]).
Next, we proceed to introduce two approximations in the above equation for $S_\kappa(\rho)$. First, denote by x the sequence $x = (x_i)_i\equiv(\kappa\ln p_i)_i$, to cast the kappa entropy in the form
$$S_\kappa(\rho) = -\frac{1}{\kappa}\,\mathrm{Tr}\left(\sum_i p_i\,x_i\,\mathrm{sinhc}(x_i)\,|u_i\rangle\langle u_i|\right).$$
Recall the necessary numerical-algebraic background: the Chebyshev–Stirling numbers of both kinds are known in the literature ([55,56,57]) as the case $\gamma = \frac12$ of the Jacobi–Stirling numbers of both kinds, determined by means of the recurrence relations
$$\left\{{n\atop k}\right\}_{\gamma} = \left\{{n-1\atop k-1}\right\}_{\gamma} + (n-1)(n+2\gamma-2)\left\{{n-1\atop k}\right\}_{\gamma}$$
and
$$\left[{n\atop k}\right]_{\gamma} = \left[{n-1\atop k-1}\right]_{\gamma} + k(k+2\gamma-1)\left[{n-1\atop k}\right]_{\gamma},$$
with initial conditions
$$\left\{{n\atop 0}\right\}_{\gamma} = \left[{n\atop 0}\right]_{\gamma} = \delta_{0n}\qquad\text{and}\qquad\left\{{0\atop k}\right\}_{\gamma} = \left[{0\atop k}\right]_{\gamma} = \delta_{0k}.$$
We proceed with the following two approximations of the function $\mathrm{sinhc}_\pi$ in terms of the Jacobi–Stirling (JS) numbers.
Approximation 1: an asymptotic formula in terms of the Chebyshev–Stirling numbers of the first kind ([56]),
$$\mathrm{sinhc}_\pi(x)\ \approx\ \frac{1}{n!^{2}}\sum_{m=0}^{n}\left\{{n+1\atop m+1}\right\}_{1/2}x^{2m},\qquad x\,\mathrm{sinhc}_\pi(x)\ \approx\ \frac{1}{n!^{2}}\sum_{m=0}^{n}\left\{{n+1\atop m+1}\right\}_{1/2}x^{2m+1},\qquad n\to\infty.$$
A second approximation, now of the JS numbers themselves, is valid:
Approximation 2: for $m = 0,\ldots,n$ ([57]),
$$\frac{1}{n!^{2}}\left\{{n+1\atop m+1}\right\}_{1/2}\ \longrightarrow\ \frac{\pi^{2m}}{(2m+1)!},\qquad n\to\infty.$$
Combining Approximations 1 and 2 yields the following expansion:
$$x\,\mathrm{sinhc}_\pi(x)\ \approx\ \sum_{m=0}^{n}\frac{\pi^{2m}}{(2m+1)!}\,x^{2m+1},\qquad n\to\infty.$$
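The combined expansion can be checked numerically, as in the short sketch below (the evaluation point x and the truncation orders are arbitrary assumptions); the partial sums converge to $x\,\mathrm{sinhc}_\pi(x) = \sinh(\pi x)/\pi$.

```python
import numpy as np
from math import factorial

def x_sinhc_pi(x):
    return np.sinh(np.pi * x) / np.pi            # x * sinhc_pi(x) = sinh(pi x)/pi

def partial_sum(x, n):
    return sum(np.pi**(2 * m) / factorial(2 * m + 1) * x**(2 * m + 1) for m in range(n + 1))

x = 0.8
for n in (1, 3, 6, 10):
    print(n, partial_sum(x, n), x_sinhc_pi(x))
```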
Back to the kappa entropy,
$$S_\kappa(\rho) = -\lim_{n\to\infty}\mathrm{Tr}\left[\sum_{i=1}^{N}\sum_{m=0}^{n}\frac{(\pi\kappa)^{2m}}{(2m+1)!}\,p_i\left(\ln p_i\right)^{2m+1}|u_i\rangle\langle u_i|\right],$$
which is now expressed as
$$S_\kappa(\rho) = -\lim_{n\to\infty}\mathrm{Tr}\left[\sum_{m=0}^{n}\frac{(\pi\kappa)^{2m}}{(2m+1)!}\,\rho\left(\ln\rho\right)^{2m+1}\right],$$
or, in terms of the vN entropy $S_{vN}(\rho) = -\mathrm{Tr}(\rho\ln\rho)$, equivalently as
$$S_\kappa(\rho) = S_{vN}(\rho) - \lim_{n\to\infty}\mathrm{Tr}\left[\sum_{m=1}^{n}\frac{(\pi\kappa)^{2m}}{(2m+1)!}\,\rho\left(\ln\rho\right)^{2m+1}\right].\tag{A4}$$
We turn next to the evaluation of the bounds. First, recall the spectral decompositions $\rho = U\Lambda U^{\dagger}$, $\Lambda = \sum_{i=1}^{N}p_i\,|i\rangle\langle i|$, and
$$\rho\left(\ln\rho\right)^{2m+1} = \sum_{i=1}^{N}p_i\left(\ln p_i\right)^{2m+1}|u_i\rangle\langle u_i| = U\,\Lambda\left(\ln\Lambda\right)^{2m+1}U^{\dagger},$$
and observe that since $0<p_i<1$, $\ln p_i$ is a negative real number, i.e., $\ln p_i = -|\ln p_i|$, so $\left(\ln p_i\right)^{2m+1} = -|\ln p_i|^{2m+1}$. Then
$$S_\kappa(\rho) = S_{vN}(\rho) + \lim_{n\to\infty}\mathrm{Tr}\left[\sum_{i=1}^{N}\sum_{m=1}^{n}\frac{(\pi\kappa)^{2m}}{(2m+1)!}\,p_i\,|\ln p_i|^{2m+1}\,|u_i\rangle\langle u_i|\right] = S_{vN}(\rho) + \lim_{n\to\infty}\mathrm{Tr}\left[\sum_{i=1}^{N}p_i\,|\ln p_i|\sum_{m=1}^{n}\frac{1}{(2m+1)!}\left(\pi\kappa\ln p_i\right)^{2m}|u_i\rangle\langle u_i|\right].$$
Then, by means of the eigen-projectors $P_i = |u_i\rangle\langle u_i|$ and their defining properties $P_i = P_i^{2}$, $P_iP_j = \delta_{ij}P_i$, we obtain
$$S_\kappa(\rho) = S_{vN}(\rho) + \lim_{n\to\infty}\mathrm{Tr}\left[\rho\,|\ln\rho|\sum_{m=1}^{n}\frac{1}{(2m+1)!}\left(\pi\kappa\ln\rho\right)^{2m}\right].\tag{A5}$$
Bounds: Next, bounds will be worked out for the second term on the rhs of the last equation above. First, we invoke the content of the Perron–Frobenius theorem (stated below). The bounds will be evaluated by means of the relations
$$\frac{1}{(2n+1)!}\sum_{m=1}^{n}\left(\pi\kappa\ln p_i\right)^{2m}\ \leq\ \sum_{m=1}^{n}\frac{1}{(2m+1)!}\left(\pi\kappa\ln p_i\right)^{2m}\ \leq\ \frac{1}{(2+1)!}\sum_{m=1}^{n}\left(\pi\kappa\ln p_i\right)^{2m},$$
or
$$\frac{n}{(2n+1)!}\,\frac{1}{n}\sum_{m=1}^{n}\left(\pi\kappa\ln p_i\right)^{2m}\ \leq\ \sum_{m=1}^{n}\frac{1}{(2m+1)!}\left(\pi\kappa\ln p_i\right)^{2m}\ \leq\ \frac{n}{6}\,\frac{1}{n}\sum_{m=1}^{n}\left(\pi\kappa\ln p_i\right)^{2m}.\tag{A6}$$
For the special case of the matrix $A\equiv\left(\pi\kappa\ln\Lambda\right)^{2}$ introduced below, by means of the previous assumptions and the properties that (i) $\lambda_{\max}^{1}$ is the largest of all the powers of $\lambda_{\max}$ and $\lambda_{\min}$ involved, and (ii) $\lambda_{\min}^{\,n}$ is the smallest of all the powers of $\lambda_{\max}$ and $\lambda_{\min}$ involved, the bounds become
$$\frac{n}{(2n+1)!}\,\frac{1}{n}\sum_{m=1}^{n}\lambda_{\max}^{m}\left(\frac{\left(\pi\kappa\ln p_i\right)^{2}}{\lambda_{\max}}\right)^{m}\ \leq\ \sum_{m=1}^{n}\frac{1}{(2m+1)!}\left(\pi\kappa\ln p_i\right)^{2m}\ \leq\ \frac{n}{6}\,\frac{1}{n}\sum_{m=1}^{n}\lambda_{\max}^{m}\left(\frac{\left(\pi\kappa\ln p_i\right)^{2}}{\lambda_{\max}}\right)^{m},$$
or
$$\frac{n}{(2n+1)!}\,\lambda_{\min}^{\,n}\,\frac{1}{n}\sum_{m=1}^{n}\left(\frac{\left(\pi\kappa\ln p_i\right)^{2}}{\lambda_{\max}}\right)^{m}\ \leq\ \sum_{m=1}^{n}\frac{1}{(2m+1)!}\left(\pi\kappa\ln p_i\right)^{2m}\ \leq\ \frac{n}{6}\,\lambda_{\max}\,\frac{1}{n}\sum_{m=1}^{n}\left(\frac{\left(\pi\kappa\ln p_i\right)^{2}}{\lambda_{\max}}\right)^{m}.$$
Next, we work out the lower and the upper limit of the concerned sum,
$$\mathrm{Tr}\left[\rho\,|\ln\rho|\,\lim_{n\to\infty}\sum_{m=1}^{n}\frac{1}{(2m+1)!}\left(\pi\kappa\ln\rho\right)^{2m}\right],$$
by means of the PF-Thm, and complete the proof by showing that the set of values of the scaled difference of the κ and the vN entropy is the interval $[0,\infty)$.
Perron–Frobenius theorem: Let $A = (A_{ij})_{i,j=1}^{N}$ be a positive matrix, $A_{ij}>0$, with simple eigenvalues ordered as $\lambda_1\geq\lambda_2\geq\cdots\geq\lambda_N$. Denote $\lambda_{\min} = \lambda_N$ and by $\lambda_{\max}\equiv\lambda_1$ the spectral radius, i.e., the maximal or dominant eigenvalue, and let $|u_{\max}\rangle$ be its associated eigenvector.
(1) Power limit: it is valid that
$$\lim_{n\to\infty}\left(\frac{A}{\lambda_{\max}}\right)^{n} = |u_{\max}\rangle\langle u_{\max}|.$$
(2) Cesaro summation limit: with a matrix A as above, it is valid that
$$\lim_{n\to\infty}\frac{1}{n}\sum_{m=1}^{n}\left(\frac{A}{\lambda_{\max}}\right)^{m} = |u_{\max}\rangle\langle u_{\max}|.$$
Below, the PF-Thm is applied to the special case of the matrix $A\equiv\left(\pi\kappa\ln\Lambda\right)^{2}$, where the eigenvalues of the ρ density matrix, forming a discrete probability distribution, are denoted by $p_i$, $i = 1,\ldots,N$, and where the maximal eigenvalue $\lambda_{\max}$ of A is assumed to satisfy $0<\lambda_{\max}\leq1$.
Lower bound: For the rhs of Equation (A5), we obtain via the inequalities of Equation (A6)
$$\mathrm{Tr}\left[\rho\,|\ln\rho|\,\lim_{n\to\infty}\sum_{m=1}^{n}\frac{1}{(2m+1)!}\left(\pi\kappa\ln\rho\right)^{2m}\right]\ \geq\ \lim_{n\to\infty}\frac{n}{(2n+1)!}\,\lambda_{\min}^{\,n}\;\mathrm{Tr}\left[\rho\,|\ln\rho|\,\lim_{n\to\infty}\frac{1}{n}\sum_{m=1}^{n}\left(\frac{\left(\pi\kappa\ln\rho\right)^{2}}{\lambda_{\max}}\right)^{m}\right]\ \equiv\ \lim_{n\to\infty}\frac{n}{(2n+1)!}\,\lambda_{\min}^{\,n}\;\mathrm{Tr}\left[\rho\,|\ln\rho|\;\hat R\right].$$
Next, the evaluation of the operator limit $\hat R\equiv\lim_{n\to\infty}\frac{1}{n}\sum_{m=1}^{n}\left(\frac{(\pi\kappa\ln\rho)^{2}}{\lambda_{\max}}\right)^{m}$ is based on the Perron–Frobenius theorem (PF-Thm), in one of its equivalent expressions that involves the so-called Cesaro summation ([49,51,70]).
Applying the PF-Thm in its Cesaro summation limit to the matrix $A\equiv\left(\pi\kappa\ln\rho\right)^{2}$ yields
$$\mathrm{Tr}\left[\rho\,|\ln\rho|\cdot\lim_{n\to\infty}\frac{1}{n}\sum_{m=1}^{n}\left(\frac{(\pi\kappa\ln\rho)^{2}}{\lambda_{\max}}\right)^{m}\right] = \mathrm{Tr}\left(\rho\,|\ln\rho|\cdot|u_{\max}\rangle\langle u_{\max}|\right) = \langle u_{\max}|\left(-\Lambda\ln\Lambda\right)|u_{\max}\rangle.$$
Then, due to the limit $\lim_{n\to\infty}\frac{n}{(2n+1)!} = 0$, the previous evaluation yields the lower bound
$$\mathrm{Tr}\left[\rho\,|\ln\rho|\,\lim_{n\to\infty}\sum_{m=1}^{n}\frac{1}{(2m+1)!}\left(\pi\kappa\ln\rho\right)^{2m}\right]\ \geq\ \lim_{n\to\infty}\frac{n}{(2n+1)!}\,\lambda_{\min}^{\,n}\;\langle u_{\max}|\left(-\Lambda\ln\Lambda\right)|u_{\max}\rangle.$$
Upper bound: Next, recalling again Equation (A6), consider the sum
$$\mathrm{Tr}\left[\rho\,|\ln\rho|\,\lim_{n\to\infty}\sum_{m=1}^{n}\frac{1}{(2m+1)!}\left(\pi\kappa\ln\rho\right)^{2m}\right]\ \leq\ \lim_{n\to\infty}\frac{n}{6}\,\lambda_{\max}\;\mathrm{Tr}\left[\rho\,|\ln\rho|\,\lim_{n\to\infty}\frac{1}{n}\sum_{m=1}^{n}\left(\frac{(\pi\kappa\ln\rho)^{2}}{\lambda_{\max}}\right)^{m}\right] = \lim_{n\to\infty}\frac{n}{6}\,\lambda_{\max}\;\mathrm{Tr}\left(-\Lambda\ln\Lambda\;|u_{\max}\rangle\langle u_{\max}|\right) = \lim_{n\to\infty}\frac{n}{6}\,\lambda_{\max}\;\langle u_{\max}|\left(-\Lambda\ln\Lambda\right)|u_{\max}\rangle.$$
Referring to Equation (A4) and combining the last two inequalities, we obtain that $S_\kappa(\rho)$ is bounded as
$$S_\kappa(\rho)\ \geq\ S_{vN}(\rho) + \lim_{n\to\infty}\frac{n}{(2n+1)!}\,\lambda_{\min}^{\,n}\,\langle u_{\max}|\left(-\Lambda\ln\Lambda\right)|u_{\max}\rangle,\qquad S_\kappa(\rho)\ \leq\ S_{vN}(\rho) + \lim_{n\to\infty}\frac{n}{6}\,\lambda_{\max}\,\langle u_{\max}|\left(-\Lambda\ln\Lambda\right)|u_{\max}\rangle.$$
Elaborating on the bounds of the scaled difference between the κ and the von Neumann entropies leads to
$$\lim_{n\to\infty}\frac{n}{(2n+1)!}\,\lambda_{\min}^{\,n}\ \leq\ \frac{S_\kappa(\rho) - S_{vN}(\rho)}{\langle u_{\max}|\left(-\Lambda\ln\Lambda\right)|u_{\max}\rangle}\ \leq\ \lim_{n\to\infty}\frac{n}{6}\,\lambda_{\max}.$$
Finally, the asymptotic interval of values of the scaled difference of the two entropies becomes
$$\frac{S_\kappa(\rho) - S_{vN}(\rho)}{\langle u_{\max}|\left(-\Lambda\ln\Lambda\right)|u_{\max}\rangle}\ \in\ \left[\lim_{n\to\infty}\frac{n}{(2n+1)!}\,\lambda_{\min}^{\,n},\ \lim_{n\to\infty}\frac{n}{6}\,\lambda_{\max}\right) = [0,\infty).\qquad\square$$
Proof of Proposition 6.
Consider the trace of the transformed density matrix and recall (from the lemma below) that $\lambda' = \Delta_{\mathcal{E}}\lambda$, with $\Delta_{\mathcal{E}} = \sum_i p_i\,A_i\circ A_i^{*}$ decomposed into circulant permutations as $\Delta_{\mathcal{E}} = \sum_j p_j\,h^{j}$; then
$$\mathrm{Tr}\,\rho'^{\,\kappa+1} = \mathrm{Tr}\,\Lambda'^{\,\kappa+1} = \sum_i\lambda_i'^{\,\kappa+1} = \sum_i\left(\left(\Delta_{\mathcal{E}}\lambda\right)_i\right)^{\kappa+1} = \sum_i\left(\sum_j p_j\left(h^{j}\lambda\right)_i\right)^{\kappa+1}.$$
Elaborate on the last sum:
$$\sum_j p_j\left(h^{j}\lambda\right)_i = \sum_j p_j\sum_m\left(h^{j}\right)_{im}\lambda_m = \sum_j\sum_m p_j\,H^{(i)}_{jm}\,\lambda_m = p^{T}H^{(i)}\lambda,$$
where $H^{(i)}_{jm}\equiv\left(h^{j}\right)_{im}$ and $H\equiv\sum_i H^{(i)}$. Returning to the trace expression,
$$\mathrm{Tr}\,\rho'^{\,\kappa+1} = \sum_i\left(p^{T}H^{(i)}\lambda\right)^{\kappa+1} = \sum_i\left(\mathrm{Tr}\left(H^{(i)}\,|\lambda\rangle\langle p|\right)\right)^{\kappa+1},$$
where $|\lambda\rangle = \sum_{i=0}^{N-1}\lambda_i\,|i\rangle$ and $|p\rangle = \sum_{i=0}^{N-1}p_i\,|i\rangle$.
Next, we look for an upper bound to the last sum above. Since, by the Cauchy–Schwarz inequality for the Hilbert–Schmidt inner product,
$$p^{T}H^{(i)}\lambda = \mathrm{Tr}\left(H^{(i)}\,|\lambda\rangle\langle p|\right)\ \leq\ \sqrt{\mathrm{Tr}\left(H^{(i)}H^{(i)T}\right)}\;\sqrt{\mathrm{Tr}\left(|\lambda\rangle\langle p|\,|p\rangle\langle\lambda|\right)} = \sqrt{\mathrm{Tr}\left(H^{(i)}H^{(i)T}\right)}\;\sqrt{\langle p|p\rangle\,\langle\lambda|\lambda\rangle},$$
we find
$$\mathrm{Tr}\,\rho'^{\,\kappa+1}\ \leq\ \sum_i\left(\mathrm{Tr}\,H^{(i)}H^{(i)T}\right)^{\frac{\kappa+1}{2}}\left(\langle p|p\rangle\,\langle\lambda|\lambda\rangle\right)^{\frac{\kappa+1}{2}} = \eta_{\kappa}\left(\langle p|p\rangle\,\langle\lambda|\lambda\rangle\right)^{\frac{\kappa+1}{2}},$$
where $\eta_{\kappa} = \sum_i\left(\mathrm{Tr}\,H^{(i)}H^{(i)T}\right)^{\frac{\kappa+1}{2}}$. Summarizing,
$$\mathrm{Tr}\,\rho'^{\,\kappa+1}\ \leq\ \eta_{\kappa}\left(\|p\|\,\|\lambda\|\right)^{\kappa+1},\qquad\mathrm{Tr}\,\rho'^{\,-\kappa+1}\ \leq\ \eta_{-\kappa}\left(\|p\|\,\|\lambda\|\right)^{-\kappa+1}.$$
Comment on the result obtained: the constants are expressed in terms of the (Frobenius) matrix norms of the matrices built from the powers of h,
$$\eta_{\kappa} = \sum_i\left(\mathrm{Tr}\,H^{(i)}H^{(i)T}\right)^{\frac{\kappa+1}{2}} = \sum_i\left\|H^{(i)}\right\|_F^{\,\kappa+1},\qquad\eta_{-\kappa} = \sum_i\left\|H^{(i)}\right\|_F^{\,-\kappa+1}.\qquad\square$$
Proof of Lemma 1.
Let $\rho = U\Lambda U^{\dagger}$ and $\rho' = V\Lambda'V^{\dagger}$ be the canonical decompositions of the density matrices, with $\Lambda = \mathrm{diag}(\lambda)$ and $\Lambda' = \mathrm{diag}(\lambda')$. Schematically,
$$\rho\ \xrightarrow{\ \mathcal{E}\ }\ \rho':\qquad U\Lambda U^{\dagger}\ \longrightarrow\ V\Lambda'V^{\dagger},\qquad\Lambda = \mathrm{diag}(\lambda),\quad\Lambda' = \mathrm{diag}(\lambda'),\qquad\lambda\ \xrightarrow{\ \Delta_{\mathcal{E}}\ }\ \lambda' = \Delta_{\mathcal{E}}\lambda.$$
Here $h = \sum_{i=0}^{N-1}|i+1\rangle\langle i|$, where $i+1$ is computed mod N, is the generator of the circulant permutation matrices $h^{0} = h^{N} = I,\ h^{1},\ h^{2},\ \ldots,\ h^{N-1}$. It follows that $\lambda' = \Delta_{\mathcal{E}}\lambda$, where $\Delta_{\mathcal{E}} = \sum_i p_i\,A_i\circ A_i^{*}$. $\square$

References

  1. Kaniadakis, G. Non-linear kinetics underlying generalized statistics. Phys. A 2001, 296, 405–425. [Google Scholar] [CrossRef]
  2. Kaniadakis, G. Statistical mechanics in the context of special relativity. Phys. Rev. E 2002, 66, 056125. [Google Scholar] [CrossRef] [PubMed]
  3. Kaniadakis, G. Statistical mechanics in the context of special relativity II. Phys. Rev. E 2005, 72, 036108. [Google Scholar] [CrossRef] [PubMed]
  4. Kaniadakis, G. Relativistic roots of κ-entropy. Entropy 2024, 26, 406. [Google Scholar] [CrossRef]
  5. Silva, R. The relativistic statistical theory and Kaniadakis entropy: An approach through a molecular chaos hypothesis. Eur. Phys. J. B 2006, 54, 499–502. [Google Scholar] [CrossRef]
  6. Silva, R. The H-theorem in κ-statistics: Influence on the molecular chaos hypothesis. Phys. Lett. A 2006, 352, 17–20. [Google Scholar] [CrossRef]
  7. Wada, T. Thermodynamic stabilities of the generalized Boltzmann entropies. Phys. A 2004, 340, 126–130. [Google Scholar]
  8. Wada, T. Thermodynamic stability conditions for nonadditive composable entropies. Contin. Mechan. Thermod. 2004, 16, 263–267. [Google Scholar] [CrossRef]
  9. Naudts, J. Deformed exponentials and logarithms in generalized thermostatistics. Phys. A 2002, 316, 323–334. [Google Scholar] [CrossRef]
  10. Naudts, J. Continuity of a class of entropies and relative entropies. Rev. Math. Phys. 2004, 16, 809–822. [Google Scholar] [CrossRef]
  11. Scarfone, A.M.; Wada, T. Canonical partition function for anomalous systems described by the κ-entropy. Prog. Theor. Phys. Suppl. 2006, 162, 45–52. [Google Scholar] [CrossRef]
  12. Yamano, T. On the laws of thermodynamics from the escort average and on the uniqueness of statistical factors. Phys. Lett. A 2003, 308, 364–368. [Google Scholar] [CrossRef]
  13. Lucia, U. Maximum entropy generation and kappa-exponential model. Phys. A 2010, 389, 4558–4563. [Google Scholar] [CrossRef]
  14. Pistone, G. κ-exponential models from the geometrical point of view. Eur. Phys. J. B 2009, 70, 29–37. [Google Scholar] [CrossRef]
  15. Pistone, G.; Shoaib, M. Kaniadakis’s Information Geometry of Compositional Data. Entropy 2023, 25, 1107. [Google Scholar] [CrossRef]
  16. Oikonomou, T.; Bagci, G.B. A completeness criterion for Kaniadakis, Abe, and two-parameter generalized statistical theories. Rep. Math. Phys. 2010, 66, 137–146. [Google Scholar] [CrossRef]
  17. Stankovic, M.S.; Marinkovic, S.D.; Rajkovic, P.M. The deformed exponential functions of two variables in the context of various statistical mechanics. Appl. Math. Comput. 2011, 218, 2439–2448. [Google Scholar] [CrossRef]
  18. Tempesta, P. Group entropies, correlation laws, and zeta functions. Phys. Rev. E 2011, 84, 021121. [Google Scholar] [CrossRef]
  19. Vigelis, R.F.; Cavalcante, C.C. On φ-Families of probability distributions. J. Theor. Probab. 2013, 26, 870–884. [Google Scholar] [CrossRef]
  20. Scarfone, A.M. Entropic Forms and Related Algebras. Entropy 2013, 15, 624–649. [Google Scholar] [CrossRef]
  21. da Costa, B.G.; Gomez, I.S.; Portesi, M. κ-Deformed quantum and classical mechanics for a system with position-dependent effective mass. J. Math. Phys. 2020, 61, 082105. [Google Scholar] [CrossRef]
  22. Biro, T.S. Kaniadakis Entropy Leads to Particle-Hole Symmetric Distribution. Entropy 2022, 24, 1217. [Google Scholar] [CrossRef] [PubMed]
  23. Sfetcu, R.-C.; Sfetcu, S.-C.; Preda, V. Some Properties of Weighted Tsallis and Kaniadakis Divergences. Entropy 2022, 24, 1616. [Google Scholar] [CrossRef] [PubMed]
  24. Sfetcu, R.-C.; Sfetcu, S.-C.; Preda, V. On Tsallis and Kaniadakis Divergences. Math. Phys. An. Geom. 2022, 25, 7. [Google Scholar] [CrossRef]
  25. Wada, T.; Scarfone, A.M. On the Kaniadakis Distributions Applied in Statistical Physics and Natural Sciences. Entropy 2023, 25, 292. [Google Scholar] [CrossRef]
  26. Scarfone, A.M.; Wada, T. Multi-Additivity in Kaniadakis Entropy. Entropy 2024, 26, 77. [Google Scholar] [CrossRef]
  27. Chung, W.S.; Hassanabadi, H. Investigation of Some Quantum Mechanics Problems with κ-Translation Symmetry. Int. J. Theor. Phys. 2022, 61, 110. [Google Scholar] [CrossRef]
  28. Santos, F.F.; Boschi-Filho, H. Black branes in asymptotically Lifshitz spacetimes with arbitrary exponents in κ-Horndeski gravity. Phys. Rev. D 2024, 109, 064035. [Google Scholar] [CrossRef]
  29. Pereira, F.I.M.; Silva, R.; Alcaniz, J.S. Non-gaussian statistics and the relativistic nuclear equation of state. Nucl. Phys. A 2009, 828, 136–148. [Google Scholar] [CrossRef]
  30. Santos, A.P.; Silva, R.; Alcaniz, J.S.; Anselmo, D.H.A.L. Kaniadakis statistics and the quantum H-theorem. Phys. Lett. A 2011, 375, 352–355. [Google Scholar] [CrossRef]
  31. Santos, A.P.; Silva, R.; Alcaniz, J.S.; Anselmo, D.H.A.L. Generalized quantum entropies. Phys. Lett. A 2011, 375, 3119–3123. [Google Scholar] [CrossRef]
  32. Santos, A.P.; Silva, R.; Alcaniz, J.S.; Anselmo, D.H.A.L. Non-Gaussian effects on quantum entropies. Phys. A 2012, 391, 2182–2192. [Google Scholar] [CrossRef]
  33. Abreu, E.M.C.; Neto, J.A.; Barboza, E.M.; Nunes, R.C. Jeans instability criterion from the viewpoint of Kaniadakis statistics. EPL 2016, 114, 55001. [Google Scholar] [CrossRef]
  34. Abreu, E.M.C.; Neto, J.A.; Barboza, E.M.; Nunes, R.C. Tsallis and Kaniadakis statistics from the viewpoint of entropic gravity formalism. Int. J. Mod. Phys. 2017, 32, 1750028. [Google Scholar] [CrossRef]
  35. Chen, H.; Zhang, S.X.; Liu, S.Q. Jeans gravitational instability with kappa-deformed Kaniadakis distribution. Chin. Phys. Lett. 2017, 34, 075101. [Google Scholar] [CrossRef]
  36. Abreu, E.M.C.; Neto, J.A.; Mendes, A.C.R.; Bonilla, A. Tsallis and Kaniadakis statistics from a point of view of the holographic equipartition law. EPL 2018, 121, 45002. [Google Scholar] [CrossRef]
  37. Abreu, E.M.C.; Neto, J.A.; Mendes, A.C.R.; Bonilla, A.; de Paula, R.M. Cosmological considerations in Kaniadakis statistics. EPL 2018, 124, 30003. [Google Scholar] [CrossRef]
  38. Abreu, E.M.C.; Neto, J.A.; Mendes, A.C.R.; de Paula, R.M. Loop quantum gravity Immirzi parameter and the Kaniadakis statistics. Chaos Sol. Fractals 2019, 118, 307–310. [Google Scholar] [CrossRef]
  39. Yang, W.; Xiong, Y.; Chen, H.; Liu, S. Jeans instability of dark-baryonic matter model in the context of Kaniadakis’ statistic distribution. J. Taibah Univ. Sci. 2022, 16, 337–343. [Google Scholar] [CrossRef]
  40. He, K.-R. Jeans analysis with κ-deformed Kaniadakis distribution in f (R) gravity. Phys. Scr. 2022, 97, 025601. [Google Scholar] [CrossRef]
  41. Moradpour, H.; Javaherian, M.; Namvar, E.; Ziaie, A.H. Gamow Temperature in Tsallis and Kaniadakis Statistics. Entropy 2022, 24, 797. [Google Scholar] [CrossRef] [PubMed]
  42. Luciano, G.G. Modified Friedmann equations from Kaniadakis entropy and cosmological implications on baryogenesis and 7Li -abundance. Eur. Phys. J. C 2022, 82, 314. [Google Scholar] [CrossRef]
  43. Luciano, G.G.; Saridakis, E.N. P-v criticalities, phase transitions and geometrothermodynamics of charged AdS black holes from Kaniadakis statistics. J. High Energy Phys. 2023, 2023, 114. [Google Scholar] [CrossRef]
  44. Lambiase, G.; Luciano, G.G.; Sheykhi, A. Slow-roll inflation and growth of perturbations in Kaniadakis modification of Friedmann cosmology. Eur. Phys. J. C 2023, 83, 936. [Google Scholar] [CrossRef]
  45. Sheykhi, A. Corrections to Friedmann equations inspired by Kaniadakis entropy. Phys. Lett. 2024, 850, 138495. [Google Scholar] [CrossRef]
  46. Wilde, M.M. Quantum Information Theory; Cambridge University Press: Cambridge, UK, 2017; Available online: https://www.markwilde.com (accessed on 2 April 2025).
  47. Wang, Y.X.; Mu, L.; Vedral, V.; Fan, H. Entanglement Renyi alpha entropy. Phys. Rev. A 2016, 93, 022324. [Google Scholar] [CrossRef]
  48. Cui, J.; Gu, M.; Kwek, L.C.; Santos, M.F.; Vedral, H.F. Quantum phases with differing computational power. Nat. Commun. 2012, 3, 812. [Google Scholar] [CrossRef]
  49. Horn, R.A.; Johnson, C.R. Matrix Analysis, 2nd ed.; Cambridge University Press: Cambridge, UK, 2013; Chapter 8. [Google Scholar]
  50. Nielsen, M.A.; Chuang, I.L. Quantum Computation and Quantum Information; Cambridge University Press: Cambridge, UK, 2010. [Google Scholar]
  51. Meyer, C.D. Matrix Analysis and Applied Linear Algebra; SIAM: Philadelphia, PA, USA, 2000; Chapter 8. [Google Scholar]
  52. Kraus, K. States, Effects, and Operations: Fundamental Notions of Quantum Theory; Lecture Notes in Physics; Springer: Berlin/Heidelberg, Germany, 1983; Volume 190. [Google Scholar]
  53. Aharonov, D.; Jones, V.; Landau, V. A Polynomial Quantum Algorithm for Approximating the Jones Polynomial. Algorithmica 2009, 55, 395. [Google Scholar] [CrossRef]
  54. Weisstein, E. Sinhc Function—From MathWorld, A Wolfram Web Resource. Available online: http://mathworld.wolfram.com/SinhcFunction.html (accessed on 16 April 2025).
  55. Sa’nchez-Reyes, J. The Hyperbolic Sine Cardinal and the Catenary. College Math. J. 2012, 43, 285–290. [Google Scholar] [CrossRef]
  56. Merca, M. Asymptotics of the Chebyshev–Stirling numbers of the first kind. Integral Trans. Spec. Fun. 2015. [Google Scholar] [CrossRef]
  57. Merca, M. The cardinal sine function and the Chebyshev–Stirling numbers. J. Number Theory 2016, 160, 19–31. [Google Scholar] [CrossRef]
  58. Bosyk, G.M.; Zozor, S.; Holik, F.; Portesi, M.; Lamberti, P.W. A family of generalized quantum entropies: Definition and properties. Quantum Inf. Process 2016, 15, 3393. [Google Scholar] [CrossRef]
  59. Bosyk, G.M.; Zozor, S.; Holik, F.; Portesi, M.; Lamberti, P.W. Comment on Quantum Kaniadakis entropy under projective measurement. Phys. Rev. E 2016, 94, 026103. [Google Scholar] [CrossRef]
  60. Ourabah, K.; Hamici-Bendimerad, A.H.; Tribeche, M. Quantum entanglement and Kaniadakis entropy. Phys. Scr. 2015, 90, 045101. [Google Scholar] [CrossRef]
  61. Bhatia, R. Matrix Analysis; Springer: New York, NY, USA, 1997. [Google Scholar]
  62. Alberti, P.M.; Uhlmann, A. Stochasticity and Partial Order: Double Stochastic Map and Unitary Mixing; Dordrecht: Boston, MA, USA, 1982. [Google Scholar]
  63. Nielsen, M.A. An Introduction to Majorization and Its Applications to Quantum Mechanics. Available online: http://michaelnielsen.org/papers/maj-book-notes.pdf (accessed on 16 April 2025).
  64. Ellinas, D.; Floratos, E.G. Prime factorization and correlation measure for finite quantum systems. J. Phys. A Math. Gen. 1999, 32, L63–L69. [Google Scholar] [CrossRef]
  65. Ellinas, D. SL (2,C) multilevel dynamics. Phys. Rev. A 1992, 45, 1822–1828. [Google Scholar] [CrossRef]
  66. Brennen, G.K.; Ellinas, D.; Kendon, V.; Pachos, J.K.; Tsohantjis, I.; Wang, Z. Anyonic Quantum Walks. Ann. Phys. 2010, 325, 664–681. [Google Scholar] [CrossRef]
  67. Barnett, M.; Ellinas, D.; Dupertuis, M.A. Berry’s phase in coherent excitation of atoms. J. Mod. Opt. 1988, 35, 565–574. [Google Scholar] [CrossRef]
  68. Marshall, A.W.; Olkin, I. Inequalities: Theory of Majorization and Its Applications; Academic: New York, NY, USA, 1979. [Google Scholar]
  69. Bracken, A.J.; Ellinas, D.; Tsojantjis, I. Pseudo memory effects, majorization and entropy in quantum random walks. J. Phys. A Math. Gen. 2004, 37, L91–L97. [Google Scholar] [CrossRef]
  70. Langville, A.N.; Meyer, C.D. Google’s PageRank and Beyond: The Science of Search Engine Rankings; Princeton University Press: Princeton, NJ, USA, 2006. [Google Scholar]
Figure 1. Quantum circuit of the map $T_\Omega[M]$ for generating the κ entropy of a density matrix state. The broken-line box represents the gate $S[M]$, and the initial state is $|0\rangle\langle0|\otimes\rho$.