Article

New Estimations for Shannon and Zipf–Mandelbrot Entropies

by Muhammad Adil Khan 1,2, Zaid Mohammad Al-sahwi 3 and Yu-Ming Chu 4,*
1 College of Science, Hunan City University, Yiyang 413000, China
2 Department of Mathematics, University of Peshawar, Peshawar 25000, Pakistan
3 Department of Mathematics, University of Sa’adah, Sa’adah 1872, Yemen
4 Department of Mathematics, Huzhou University, Huzhou 313000, China
* Author to whom correspondence should be addressed.
Entropy 2018, 20(8), 608; https://doi.org/10.3390/e20080608
Submission received: 9 July 2018 / Revised: 8 August 2018 / Accepted: 14 August 2018 / Published: 16 August 2018

Abstract: The main purpose of this paper is to find new estimations for the Shannon and Zipf–Mandelbrot entropies. We apply some refinements of the Jensen inequality to obtain different bounds for these entropies. Initially, we use a precise convex function in the refinement of the Jensen inequality and then vary the weight and domain of the function to obtain general bounds for the Shannon entropy (SE). As particular cases of these general bounds, we derive some bounds for the Shannon entropy which are, in fact, applications of some other well-known refinements of the Jensen inequality. Finally, we derive different estimations for the Zipf–Mandelbrot entropy (ZME) by using the new bounds of the Shannon entropy for the Zipf–Mandelbrot law (ZML). We also discuss particular cases and the bounds related to two different parameters of the Zipf–Mandelbrot entropy. At the end of the paper, we give some applications in linguistics.

1. Introduction

The idea of the Shannon entropy [1] plays a key role in information theory; in many settings it serves as a measure of uncertainty. There are essentially two ways to understand the Shannon entropy. Under one point of view, the Shannon entropy quantifies the amount of information gained about the value of X after measurement. Under another point of view, the Shannon entropy tells us the amount of uncertainty about the variable X before we learn its value (before measurement) [2]. The entropy of a random variable is defined in terms of its probability distribution, and it can serve as a good measure of predictability or uncertainty. SE permits the estimation of the average minimum number of bits needed to encode a string of symbols, based on the size of the alphabet and the frequency of the symbols. The formula for SE is given by [1]
$S(\Psi) = \sum_{i=1}^{n} \psi_i \log\frac{1}{\psi_i},$
where $\psi_1, \psi_2, \dots, \psi_n \in \mathbb{R}^{+}$ with $\sum_{i=1}^{n}\psi_i = 1$.
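As a quick numerical illustration (a minimal sketch of ours, not part of the original paper), the following Python snippet evaluates this formula; we assume the natural logarithm, since the paper writes log without a base, and the function name is ours:

```python
import math

def shannon_entropy(psi):
    """S(Psi) = sum_i psi_i * log(1/psi_i) for a probability vector psi."""
    assert abs(sum(psi) - 1.0) < 1e-9, "the weights must sum to 1"
    return sum(p * math.log(1.0 / p) for p in psi)

# The uniform distribution on 4 symbols attains the maximum log(4) ~ 1.3863;
# a more predictable distribution has strictly smaller entropy.
print(shannon_entropy([0.25, 0.25, 0.25, 0.25]))  # ~1.3863
print(shannon_entropy([0.7, 0.1, 0.1, 0.1]))      # ~0.9404
```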
There are many applications of the Shannon entropy in most applied sciences and in other sciences, such as biology [3], genomic geography [4], and finance [5]. Currently, the Shannon entropy is applied in the simulation of laser dynamics and as an objective measure to evaluate models and compare observational results [6,7].
In 1932, George Zipf proposed the idea that the size of the r-th largest occurrence of an event is inversely proportional to its rank. That is, the law states that $P_r = 1/r^{b}$, where $P_r$ is the frequency of occurrence of the r-th ranked item and $b$ is close to unity. In linguistics, for instance, Zipf found that one can count the number of times each word appears in a text; if the rank of a word ($\gamma$) is paired with the frequency of the word's appearance ($\rho$), then the product of these two numbers is a constant: $C = \rho\cdot\gamma$ (see [8,9]).
There are several applications of the Zipf law, and here we present some of them. The law has been used for city populations. Kristian Giesen and Jens Suedekum conducted a study of the city-size distributions of individual German regions [10]. They built their study on the intuition of Gabaix (1999), who showed that the Zipf law follows from a random growth process: if cities obey the Gibrat law, then the Zipf law should be observed both within the regions and at the national level. Using non-parametric procedures, they found that the Gibrat law holds in each German region, regardless of how "regions" are defined. To put it differently, the Gibrat and Zipf laws tend to hold ubiquitously in space. In geology, the Zipf law has been used with moderate success in the resource estimation of ores extracted from the ground and of petroleum [11]; in principle, it forecasts how many deposits of a given size remain in a sequence of decreasing sizes once the largest one has been established. For solar flare intensity, M. E. J. Newman [12] presented the cumulative distribution of the peak gamma-ray intensity of solar flares, for which observations were made between 1980 and 1989 by the Hard X-Ray Burst Spectrometer aboard the Solar Maximum Mission satellite launched in 1980; the spectrometer used a CsI scintillation detector to measure gamma-rays from solar flares. For website traffic (Shane Parkins, 2015) [13], the Zipf law appears, by all accounts, to be the rule rather than the exception. It is present at the level of the routers that transmit data from one geographic location to another and in the content of the World Wide Web. At the social and economic level, it also governs how people choose the sites they visit and how peer-to-peer communities form. The omnipresent nature of the Zipf law in cyberspace supports a deeper understanding of internet phenomena, for example, assessing the potential of proxy caches placed in different Autonomous Systems (ASes) to reduce the costs incurred by internet service providers and to lighten the load on the internet backbone [14].
It has been determined that the Zipf law can describe the size and rank distribution of earthquakes by magnitude, although it cannot predict when they will occur. On the Earth and the Moon, the crater size–frequency distribution can also be represented by the Zipf law [12,15].
In 1966, Benoit Mandelbrot proposed an enhancement of the Zipf law, known as the ZML, which gives a better account of the low-rank words in a corpus [16]:
$g(i) = \frac{c}{(i+h)^{r}},$
where $i < 1000$ and $r, c > 0$; if $h = 0$, we get the Zipf law.
If $n \in \mathbb{N}$, $h \ge 0$, $r > 0$, and $i \in \{1, 2, \dots, n\}$, then the Zipf–Mandelbrot law (probability mass function) is defined by
$G(i, n, h, r) = \frac{1/(i+h)^{r}}{Q_{n,h,r}}.$
The formula for the ZME is given by
$Z(Q, h, r) = \frac{r}{Q_{n,h,r}}\sum_{i=1}^{n}\frac{\log(i+h)}{(i+h)^{r}} + \log Q_{n,h,r},$
where $Q_{n,h,r} = \sum_{i=1}^{n}\frac{1}{(i+h)^{r}}$.
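Since the ZME is exactly the Shannon entropy of the ZML distribution, the two formulas can be cross-checked numerically. Below is a minimal Python sketch of ours (not from the paper), assuming the natural logarithm and illustrative parameter values:

```python
import math

def Q(n, h, r):
    # Q_{n,h,r} = sum_{i=1}^{n} 1/(i+h)^r, the normalizing constant of the ZML
    return sum(1.0 / (i + h) ** r for i in range(1, n + 1))

def zml_pmf(i, n, h, r):
    # G(i, n, h, r) = (1/(i+h)^r) / Q_{n,h,r}
    return (1.0 / (i + h) ** r) / Q(n, h, r)

def zme(n, h, r):
    # Z(Q, h, r) = (r/Q_{n,h,r}) * sum_i log(i+h)/(i+h)^r + log Q_{n,h,r}
    q = Q(n, h, r)
    return (r / q) * sum(math.log(i + h) / (i + h) ** r
                         for i in range(1, n + 1)) + math.log(q)

n, h, r = 100, 1.5, 1.1
entropy = -sum(zml_pmf(i, n, h, r) * math.log(zml_pmf(i, n, h, r))
               for i in range(1, n + 1))
print(zme(n, h, r), entropy)  # the two values agree
```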
There are many applications of the ZML, which can be found in linguistics [16,17], ecological field studies [18], and information sciences [9]. Recently, the Zipf–Mandelbrot law was applied to various types of f-divergences and distances, for example, the Kullback–Leibler divergence, the Bhattacharyya distance (via the coefficient), the Hellinger distance, the $\chi^{2}$-divergence, etc. [19].
To complete this section, we give some notions and results from ref. [20].
Let $g : G \to \mathbb{R}$ be a convex function defined on the convex set $G$, and let $T_n = \{1, 2, 3, \dots, n\}$. Let $s$ be a fixed positive integer and let $l$ be any positive integer such that $1 \le l \le s \le n$. Suppose $M_1^s, M_2^s, \dots, M_l^s$ are subsets of $T_n$ such that $M_1^s \cup M_2^s \cup \dots \cup M_l^s = T_n$ and $M_i^s \cap M_j^s = \emptyset$ for $i \ne j$, where $i, j \in T_n$. Let $x_i \in G$ and $\psi_i > 0$ with $\sum_{i=1}^{n}\psi_i = 1$, and for any $M \subseteq \{1, 2, \dots, n\}$, $M \ne \emptyset$, write $\Psi_M := \sum_{i \in M}\psi_i$ and $\Omega_M := \sum_{i \in M}\omega_i$ for positive real numbers $\psi_i, \omega_i$. For the convex function $g$ and the $n$-tuples $x = (x_1, x_2, \dots, x_n)$, $\psi = (\psi_1, \psi_2, \dots, \psi_n)$, the following functional is defined:
$A_s = \max_{M_1^s, M_2^s, \dots, M_l^s}\left[\Psi_{M_1^s}\, g\!\left(\frac{1}{\Psi_{M_1^s}}\sum_{i \in M_1^s}\psi_i x_i\right) + \Psi_{M_2^s}\, g\!\left(\frac{1}{\Psi_{M_2^s}}\sum_{i \in M_2^s}\psi_i x_i\right) + \dots + \Psi_{M_l^s}\, g\!\left(\frac{1}{\Psi_{M_l^s}}\sum_{i \in M_l^s}\psi_i x_i\right)\right].$
Particularly, for s = 2 , 3 , we have
$A_2 = \max_{M_1^2, M_2^2}\left[\Psi_{M_1^2}\, g\!\left(\frac{1}{\Psi_{M_1^2}}\sum_{i \in M_1^2}\psi_i x_i\right) + \Psi_{M_2^2}\, g\!\left(\frac{1}{\Psi_{M_2^2}}\sum_{i \in M_2^2}\psi_i x_i\right)\right],$
$A_3 = \max_{M_1^3, M_2^3, M_3^3}\left[\Psi_{M_1^3}\, g\!\left(\frac{1}{\Psi_{M_1^3}}\sum_{i \in M_1^3}\psi_i x_i\right) + \Psi_{M_2^3}\, g\!\left(\frac{1}{\Psi_{M_2^3}}\sum_{i \in M_2^3}\psi_i x_i\right) + \Psi_{M_3^3}\, g\!\left(\frac{1}{\Psi_{M_3^3}}\sum_{i \in M_3^3}\psi_i x_i\right)\right].$
Analogously, for other particular values of $s$ with $1 \le l \le s \le n$, one can obtain different functionals.
The following generalized refinements of the Jensen inequality were given in refs. [20,21].
Theorem 1
([20]). Let $g : M \to \mathbb{R}$ be a convex function defined on the convex set $M$. If $x_i \in M$ and $\psi_i > 0$ for $i, k \in T_n$ with $\sum_{i=1}^{n}\psi_i = 1$, then we have
$\sum_{i=1}^{n}\psi_i g(x_i) \ge A_k \ge A_{k-1} \ge \dots \ge A_3 \ge A_2 \ge g\!\left(\sum_{i=1}^{n}\psi_i x_i\right). \qquad (4)$
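To see the chain in (4) at work, the following brute-force Python sketch (ours, with illustrative data) computes $A_2$ by maximizing over every split of $T_n$ into two nonempty subsets and checks that it sits between the two classical Jensen bounds:

```python
import itertools
import math

def A2(g, x, psi):
    """Brute-force A_2: maximize over all splits of T_n into two nonempty parts."""
    n = len(x)
    best = -math.inf
    for size in range(1, n):
        for M1 in itertools.combinations(range(n), size):
            M2 = tuple(i for i in range(n) if i not in M1)
            val = 0.0
            for M in (M1, M2):
                P = sum(psi[i] for i in M)                 # Psi_M
                avg = sum(psi[i] * x[i] for i in M) / P    # weighted mean over M
                val += P * g(avg)
            best = max(best, val)
    return best

g = lambda t: t * t                       # a convex function for the check
x = [1.0, 2.0, 4.0, 8.0]
psi = [0.4, 0.3, 0.2, 0.1]
lower = g(sum(p * xi for p, xi in zip(psi, x)))    # g(sum psi_i x_i)
upper = sum(p * g(xi) for p, xi in zip(psi, x))    # sum psi_i g(x_i)
print(lower, "<=", A2(g, x, psi), "<=", upper)     # the chain in (4) for k = 2
```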
Theorem 2
([21]). Let $g : M \to \mathbb{R}$ be a convex function defined on the convex set $M$, $x_j \in M$, and $\psi_j > 0$ for $j \in \{1, \dots, n\}$ with $\sum_{j=1}^{n}\psi_j = 1$. Then,
$g\!\left(\sum_{j=1}^{n}\psi_j x_j\right) \le \min_{k \in T_n}\left\{(1-\psi_k)\, g\!\left(\frac{\sum_{j=1}^{n}\psi_j x_j - \psi_k x_k}{1-\psi_k}\right) + \psi_k g(x_k)\right\} \le \frac{1}{n}\sum_{k=1}^{n}\left\{(1-\psi_k)\, g\!\left(\frac{\sum_{j=1}^{n}\psi_j x_j - \psi_k x_k}{1-\psi_k}\right) + \psi_k g(x_k)\right\} \le \max_{k \in T_n}\left\{(1-\psi_k)\, g\!\left(\frac{\sum_{j=1}^{n}\psi_j x_j - \psi_k x_k}{1-\psi_k}\right) + \psi_k g(x_k)\right\} \le \sum_{j=1}^{n}\psi_j g(x_j). \qquad (5)$
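The five quantities in Theorem 2 are equally easy to check numerically. A small Python sketch of ours (the convex function and data are illustrative):

```python
import math

def dragomir_chain(g, x, psi):
    """The five quantities of Theorem 2, returned left to right."""
    n = len(x)
    mean = sum(p * xi for p, xi in zip(psi, x))
    terms = []
    for k in range(n):
        reduced = (mean - psi[k] * x[k]) / (1.0 - psi[k])   # weighted mean without x_k
        terms.append((1.0 - psi[k]) * g(reduced) + psi[k] * g(x[k]))
    return (g(mean), min(terms), sum(terms) / n, max(terms),
            sum(p * g(xi) for p, xi in zip(psi, x)))

g = lambda t: -math.log(t)        # the convex function used throughout the paper
x = [0.5, 1.0, 2.0, 3.0]
psi = [0.1, 0.2, 0.3, 0.4]
print(dragomir_chain(g, x, psi))  # prints in non-decreasing order
```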
For some other results related to the Jensen inequality and the Shannon and Zipf–Mandelbrot entropies, see refs. [8,22,23,24].
Due to the great importance of the Shannon and Zipf–Mandelbrot entropies, many results in the literature are devoted to them. The main focus of this paper is to connect some refinements of the Jensen inequality to the Shannon and Zipf–Mandelbrot entropies. We use the main results given in ref. [20] to obtain some estimations for these entropies, and we discuss some particular cases of these results. At the end of the paper, we give some applications in linguistics. The idea of this paper can also be applied to other refinements of the Jensen inequality to obtain new estimations for these entropies.

2. Estimations for the Shannon Entropy

We start by giving our first main result for the Shannon entropy.
Theorem 3.
Let $\psi_i, \omega_i \in \mathbb{R}^{+}$, where $i = 1, 2, \dots, n$, with $\sum_{i=1}^{n}\psi_i = 1$. Then, the following inequalities hold:
$-S(\Psi) - \sum_{i=1}^{n}\psi_i\log\omega_i \ge \max_{M_1^k, M_2^k, \dots, M_l^k}\left[\Psi_{M_1^k}\log\frac{\Psi_{M_1^k}}{\Omega_{M_1^k}} + \dots + \Psi_{M_l^k}\log\frac{\Psi_{M_l^k}}{\Omega_{M_l^k}}\right] \ge \max_{M_1^{k-1}, M_2^{k-1}, \dots, M_{l-1}^{k-1}}\left[\Psi_{M_1^{k-1}}\log\frac{\Psi_{M_1^{k-1}}}{\Omega_{M_1^{k-1}}} + \dots + \Psi_{M_{l-1}^{k-1}}\log\frac{\Psi_{M_{l-1}^{k-1}}}{\Omega_{M_{l-1}^{k-1}}}\right] \ge \dots \ge \log\frac{1}{\Omega_{T_n}}. \qquad (6)$
Proof. 
If we take $g(x) = -\log x$ and $x_i = \frac{\omega_i}{\psi_i}$, $i \in T_n$, in (4), then we obtain
$\sum_{i=1}^{n}\psi_i g(x_i) = -\sum_{i=1}^{n}\psi_i\log\frac{\omega_i}{\psi_i} = \sum_{i=1}^{n}\psi_i\log\psi_i - \sum_{i=1}^{n}\psi_i\log\omega_i = -S(\Psi) - \sum_{i=1}^{n}\psi_i\log\omega_i$
and
$A_k = \max_{M_1^k, M_2^k, \dots, M_l^k}\left[\Psi_{M_1^k}\log\frac{\Psi_{M_1^k}}{\Omega_{M_1^k}} + \dots + \Psi_{M_l^k}\log\frac{\Psi_{M_l^k}}{\Omega_{M_l^k}}\right],$
$A_{k-1} = \max_{M_1^{k-1}, M_2^{k-1}, \dots, M_{l-1}^{k-1}}\left[\Psi_{M_1^{k-1}}\log\frac{\Psi_{M_1^{k-1}}}{\Omega_{M_1^{k-1}}} + \dots + \Psi_{M_{l-1}^{k-1}}\log\frac{\Psi_{M_{l-1}^{k-1}}}{\Omega_{M_{l-1}^{k-1}}}\right],$
$\vdots$
$A_3 = \max_{M_1^3, M_2^3, M_3^3}\left[\Psi_{M_1^3}\log\frac{\Psi_{M_1^3}}{\Omega_{M_1^3}} + \Psi_{M_2^3}\log\frac{\Psi_{M_2^3}}{\Omega_{M_2^3}} + \Psi_{M_3^3}\log\frac{\Psi_{M_3^3}}{\Omega_{M_3^3}}\right],$
$A_2 = \max_{M_1^2, M_2^2}\left[\Psi_{M_1^2}\log\frac{\Psi_{M_1^2}}{\Omega_{M_1^2}} + \Psi_{M_2^2}\log\frac{\Psi_{M_2^2}}{\Omega_{M_2^2}}\right].$
Moreover, $g\!\left(\sum_{i=1}^{n}\psi_i x_i\right) = -\log\sum_{i=1}^{n}\omega_i = \log\frac{1}{\Omega_{T_n}}$. Therefore, from (4), we deduce (6). ☐
Corollary 1.
Let $\psi_i, \omega_i \in \mathbb{R}^{+}$, where $i = 1, 2, \dots, n$, with $\sum_{i=1}^{n}\psi_i = 1$, then
$-S(\Psi) - \sum_{i=1}^{n}\psi_i\log\omega_i \ge \max_{M_1^2, M_2^2}\left[\Psi_{M_1^2}\log\frac{\Psi_{M_1^2}}{\Omega_{M_1^2}} + \Psi_{M_2^2}\log\frac{\Psi_{M_2^2}}{\Omega_{M_2^2}}\right] \ge \log\frac{1}{\Omega_{T_n}}. \qquad (7)$
Proof. 
By taking k = 2 in (6), we obtain (7). ☐
Corollary 2.
Let $\psi_i, \omega_i \in \mathbb{R}^{+}$, $i \in T_n$, with $\sum_{i=1}^{n}\psi_i = 1$, then
$-S(\Psi) - \sum_{i=1}^{n}\psi_i\log\omega_i \ge \max_{k \in T_n}\left\{\psi_k\log\frac{\psi_k}{\omega_k} + (1-\psi_k)\log\frac{1-\psi_k}{\sum_{i=1}^{n}\omega_i - \omega_k}\right\} \ge \log\frac{1}{\Omega_{T_n}}. \qquad (8)$
Proof. 
If we take $M_1^2 = \{k\}$ in (7), then obviously $M_2^2 = \{1, 2, \dots, n\}\setminus\{k\}$, $\Psi_{M_1^2} = \psi_k$, and $\Psi_{M_2^2} = 1 - \psi_k$. Hence, from (7), we deduce (8). ☐
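As a numerical sanity check of inequality (8), the following sketch (ours; data illustrative, natural logarithm assumed) evaluates the three quantities and prints them in non-increasing order:

```python
import math

def corollary2_bounds(psi, omega):
    """The three quantities of inequality (8), left to right."""
    S = sum(p * math.log(1.0 / p) for p in psi)   # Shannon entropy S(Psi)
    Omega = sum(omega)                            # Omega_{T_n}
    lhs = -S - sum(p * math.log(w) for p, w in zip(psi, omega))
    middle = max(
        p * math.log(p / w) + (1.0 - p) * math.log((1.0 - p) / (Omega - w))
        for p, w in zip(psi, omega)
    )
    return lhs, middle, math.log(1.0 / Omega)

psi = [0.4, 0.3, 0.2, 0.1]
omega = [1.0, 2.0, 3.0, 4.0]
print(corollary2_bounds(psi, omega))  # prints in non-increasing order
```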
Remark 1.
Note that Corollary 1 is, in fact, the application of ([25], Theorem 1) and Corollary 2 is the application of ([21], Theorem 1).
In the following corollary, we discuss another particular case of Theorem 3.
Corollary 3.
Let $\psi_i \in \mathbb{R}^{+}$ for $i \in T_n$ with $\sum_{i=1}^{n}\psi_i = 1$, then
$-S(\Psi) \ge \max_{M_1^2, M_2^2}\left[\Psi_{M_1^2}\log\frac{\Psi_{M_1^2}}{n_1} + \Psi_{M_2^2}\log\frac{\Psi_{M_2^2}}{n_2}\right] \ge \log\frac{1}{n}, \qquad (9)$
where $n_1$ and $n_2$ denote the numbers of elements of $M_1^2$ and $M_2^2$, respectively.
Proof. 
By taking $\omega_i = 1$, $i = 1, 2, \dots, n$, in (7), we get (9). ☐
Remark 2.
It is obvious that
$\max_{M_1^2, M_2^2}\left[\Psi_{M_1^2}\log\frac{\Psi_{M_1^2}}{\Omega_{M_1^2}} + \Psi_{M_2^2}\log\frac{\Psi_{M_2^2}}{\Omega_{M_2^2}}\right] \ge \max_{k \in T_n}\left\{\psi_k\log\frac{\psi_k}{\omega_k} + (1-\psi_k)\log\frac{1-\psi_k}{\sum_{i=1}^{n}\omega_i - \omega_k}\right\}.$
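Remark 2 holds because the right-hand side maximizes over only the singleton splits $M_1^2 = \{k\}$, which form a subset of all two-part partitions. A brute-force check (ours; illustrative data):

```python
import itertools
import math

def two_partition_max(psi, omega):
    # Left-hand side: maximum over all splits of T_n into two nonempty parts.
    n = len(psi)
    best = -math.inf
    for size in range(1, n):
        for M1 in itertools.combinations(range(n), size):
            M2 = [i for i in range(n) if i not in M1]
            val = sum(
                sum(psi[i] for i in M)
                * math.log(sum(psi[i] for i in M) / sum(omega[i] for i in M))
                for M in (M1, M2)
            )
            best = max(best, val)
    return best

def singleton_max(psi, omega):
    # Right-hand side: only the splits M_1^2 = {k}, M_2^2 = T_n \ {k}.
    Omega = sum(omega)
    return max(
        p * math.log(p / w) + (1.0 - p) * math.log((1.0 - p) / (Omega - w))
        for p, w in zip(psi, omega)
    )

psi = [0.4, 0.3, 0.2, 0.1]
omega = [1.0, 2.0, 3.0, 4.0]
print(two_partition_max(psi, omega) >= singleton_max(psi, omega))  # True
```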

3. Estimations for the Zipf–Mandelbrot Entropy

In the following main result, we obtain some general estimations for the Zipf–Mandelbrot entropy.
Theorem 4.
Let $n \in \mathbb{N}$, $h \ge 0$, $r > 0$, and $\omega_i > 0$ for $i \in T_n$, then
$-Z(Q, h, r) - \sum_{i=1}^{n}\frac{\log\omega_i}{(i+h)^{r}Q_{n,h,r}} \ge \max_{M_1^k, M_2^k, \dots, M_l^k}\left\{\left(\sum_{i \in M_1^k}\frac{1}{(i+h)^{r}Q_{n,h,r}}\right)\log\frac{\sum_{i \in M_1^k}\frac{1}{(i+h)^{r}Q_{n,h,r}}}{\sum_{i \in M_1^k}\omega_i} + \dots + \left(\sum_{i \in M_l^k}\frac{1}{(i+h)^{r}Q_{n,h,r}}\right)\log\frac{\sum_{i \in M_l^k}\frac{1}{(i+h)^{r}Q_{n,h,r}}}{\sum_{i \in M_l^k}\omega_i}\right\} \ge \max_{M_1^{k-1}, M_2^{k-1}, \dots, M_{l-1}^{k-1}}\left\{\left(\sum_{i \in M_1^{k-1}}\frac{1}{(i+h)^{r}Q_{n,h,r}}\right)\log\frac{\sum_{i \in M_1^{k-1}}\frac{1}{(i+h)^{r}Q_{n,h,r}}}{\sum_{i \in M_1^{k-1}}\omega_i} + \dots + \left(\sum_{i \in M_{l-1}^{k-1}}\frac{1}{(i+h)^{r}Q_{n,h,r}}\right)\log\frac{\sum_{i \in M_{l-1}^{k-1}}\frac{1}{(i+h)^{r}Q_{n,h,r}}}{\sum_{i \in M_{l-1}^{k-1}}\omega_i}\right\} \ge \dots \ge \log\frac{1}{\Omega_{T_n}}. \qquad (10)$
Proof. 
If we substitute $\psi_i = \frac{1}{(i+h)^{r}Q_{n,h,r}}$ $(i \in T_n)$, we have
$\sum_{i=1}^{n}\psi_i g(x_i) = \sum_{i=1}^{n}\psi_i\log\psi_i - \sum_{i=1}^{n}\psi_i\log\omega_i = \sum_{i=1}^{n}\frac{1}{(i+h)^{r}Q_{n,h,r}}\log\frac{1}{(i+h)^{r}Q_{n,h,r}} - \sum_{i=1}^{n}\frac{\log\omega_i}{(i+h)^{r}Q_{n,h,r}} = -\sum_{i=1}^{n}\frac{\log\left[(i+h)^{r}Q_{n,h,r}\right]}{(i+h)^{r}Q_{n,h,r}} - \sum_{i=1}^{n}\frac{\log\omega_i}{(i+h)^{r}Q_{n,h,r}} = -\sum_{i=1}^{n}\frac{r\log(i+h)}{(i+h)^{r}Q_{n,h,r}} - \sum_{i=1}^{n}\frac{\log Q_{n,h,r}}{(i+h)^{r}Q_{n,h,r}} - \sum_{i=1}^{n}\frac{\log\omega_i}{(i+h)^{r}Q_{n,h,r}} = -\frac{r}{Q_{n,h,r}}\sum_{i=1}^{n}\frac{\log(i+h)}{(i+h)^{r}} - \frac{\log Q_{n,h,r}}{Q_{n,h,r}}\sum_{i=1}^{n}\frac{1}{(i+h)^{r}} - \sum_{i=1}^{n}\frac{\log\omega_i}{(i+h)^{r}Q_{n,h,r}}.$
Then,
$\sum_{i=1}^{n}\psi_i g(x_i) = -Z(Q, h, r) - \sum_{i=1}^{n}\frac{\log\omega_i}{(i+h)^{r}Q_{n,h,r}},$
where $Q_{n,h,r} = \sum_{i=1}^{n}\frac{1}{(i+h)^{r}}$ and $\sum_{i=1}^{n}\frac{1}{(i+h)^{r}Q_{n,h,r}} = 1$.
Now, by applying Theorem 3 with $\psi_i = \frac{1}{(i+h)^{r}Q_{n,h,r}}$, we obtain the required result. ☐
Corollary 4.
Let $n \in \mathbb{N}$, $h \ge 0$, $r > 0$, and $\omega_i > 0$ for $i \in T_n$, then
$-Z(Q, h, r) - \sum_{i=1}^{n}\frac{\log\omega_i}{(i+h)^{r}Q_{n,h,r}} \ge \max_{M_1^2, M_2^2}\left\{\left(\sum_{i \in M_1^2}\frac{1}{(i+h)^{r}Q_{n,h,r}}\right)\log\frac{\sum_{i \in M_1^2}\frac{1}{(i+h)^{r}Q_{n,h,r}}}{\sum_{i \in M_1^2}\omega_i} + \left(\sum_{i \in M_2^2}\frac{1}{(i+h)^{r}Q_{n,h,r}}\right)\log\frac{\sum_{i \in M_2^2}\frac{1}{(i+h)^{r}Q_{n,h,r}}}{\sum_{i \in M_2^2}\omega_i}\right\} \ge \log\frac{1}{\Omega_{T_n}}. \qquad (11)$
Proof. 
By taking k = 2 in (10), we obtain (11). ☐
We can use Theorem 4 to obtain the following corollary.
Corollary 5.
Let $n \in \mathbb{N}$, $h \ge 0$, $r > 0$, then
$-Z(Q, h, r) \ge \max_{M_1^2, M_2^2}\left\{\left(\sum_{i \in M_1^2}\frac{1}{(i+h)^{r}Q_{n,h,r}}\right)\log\frac{\sum_{i \in M_1^2}\frac{1}{(i+h)^{r}Q_{n,h,r}}}{\sum_{i \in M_1^2}1} + \left(\sum_{i \in M_2^2}\frac{1}{(i+h)^{r}Q_{n,h,r}}\right)\log\frac{\sum_{i \in M_2^2}\frac{1}{(i+h)^{r}Q_{n,h,r}}}{\sum_{i \in M_2^2}1}\right\} \ge \log\frac{1}{n}. \qquad (12)$
Proof. 
By taking $\omega_i = 1$, $i = 1, 2, \dots, n$, in (11), we get (12). ☐
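A numerical check of (12) is straightforward (a sketch of ours; small n and illustrative h, r so that all $2^n$ partitions can be enumerated):

```python
import itertools
import math

def corollary5_bounds(n, h, r):
    """The three quantities of inequality (12), left to right."""
    q = sum(1.0 / (i + h) ** r for i in range(1, n + 1))       # Q_{n,h,r}
    psi = [1.0 / ((i + h) ** r * q) for i in range(1, n + 1)]  # ZML probabilities
    Z = -sum(p * math.log(p) for p in psi)   # ZME as Shannon entropy of the ZML
    best = -math.inf
    for size in range(1, n):
        for M1 in itertools.combinations(range(n), size):
            M2 = [i for i in range(n) if i not in M1]
            val = sum(
                sum(psi[i] for i in M) * math.log(sum(psi[i] for i in M) / len(M))
                for M in (M1, M2)
            )
            best = max(best, val)
    return -Z, best, math.log(1.0 / n)

print(corollary5_bounds(8, 1.5, 1.1))  # prints in non-increasing order
```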
Corollary 6.
Let $n \in \mathbb{N}$, $h \ge 0$, $r > 0$, and $\omega_i > 0$ for $i \in T_n$, then
$-Z(Q, h, r) - \sum_{i=1}^{n}\frac{\log\omega_i}{(i+h)^{r}Q_{n,h,r}} \ge \max_{k \in T_n}\left\{\frac{1}{(k+h)^{r}Q_{n,h,r}}\log\frac{1}{(k+h)^{r}Q_{n,h,r}\,\omega_k} + \frac{(k+h)^{r}Q_{n,h,r} - 1}{(k+h)^{r}Q_{n,h,r}}\log\frac{\left((k+h)^{r}Q_{n,h,r} - 1\right)/\left((k+h)^{r}Q_{n,h,r}\right)}{\Omega_{T_n} - \omega_k}\right\} \ge \log\frac{1}{\Omega_{T_n}}.$
Proof. 
Using $M_1^2 = \{k\}$ in (11), we obtain Corollary 6. ☐
Remark 3.
Note that Corollary 4 is in fact the application of ([25], Theorem 1), and Corollary 6 is the application of ([21], Theorem 1).
Remark 4.
By using Remark 2, we also have
$\max_{M_1^2, M_2^2}\left\{\left(\sum_{i \in M_1^2}\frac{1}{(i+h)^{r}Q_{n,h,r}}\right)\log\frac{\sum_{i \in M_1^2}\frac{1}{(i+h)^{r}Q_{n,h,r}}}{\sum_{i \in M_1^2}\omega_i} + \left(\sum_{i \in M_2^2}\frac{1}{(i+h)^{r}Q_{n,h,r}}\right)\log\frac{\sum_{i \in M_2^2}\frac{1}{(i+h)^{r}Q_{n,h,r}}}{\sum_{i \in M_2^2}\omega_i}\right\} \ge \max_{k \in T_n}\left\{\frac{1}{(k+h)^{r}Q_{n,h,r}}\log\frac{1}{(k+h)^{r}Q_{n,h,r}\,\omega_k} + \frac{(k+h)^{r}Q_{n,h,r} - 1}{(k+h)^{r}Q_{n,h,r}}\log\frac{\left((k+h)^{r}Q_{n,h,r} - 1\right)/\left((k+h)^{r}Q_{n,h,r}\right)}{\Omega_{T_n} - \omega_k}\right\}.$
In the following result, we obtain the estimation for the Zipf–Mandelbrot entropy concerning two different parameters.
Theorem 5.
Let $u, v \ge 0$ and $r_1, r_2 > 0$, then
$-Z(Q, u, r_1) + \sum_{i=1}^{n}\frac{\log\left((i+v)^{r_2}Q_{n,v,r_2}\right)}{(i+u)^{r_1}Q_{n,u,r_1}} \ge \max_{M_1^k, M_2^k, \dots, M_l^k}\left\{\left(\sum_{i \in M_1^k}\frac{1}{(i+u)^{r_1}Q_{n,u,r_1}}\right)\log\frac{\sum_{i \in M_1^k}\frac{1}{(i+u)^{r_1}Q_{n,u,r_1}}}{\sum_{i \in M_1^k}\frac{1}{(i+v)^{r_2}Q_{n,v,r_2}}} + \dots + \left(\sum_{i \in M_l^k}\frac{1}{(i+u)^{r_1}Q_{n,u,r_1}}\right)\log\frac{\sum_{i \in M_l^k}\frac{1}{(i+u)^{r_1}Q_{n,u,r_1}}}{\sum_{i \in M_l^k}\frac{1}{(i+v)^{r_2}Q_{n,v,r_2}}}\right\} \ge \max_{M_1^{k-1}, M_2^{k-1}, \dots, M_{l-1}^{k-1}}\left\{\left(\sum_{i \in M_1^{k-1}}\frac{1}{(i+u)^{r_1}Q_{n,u,r_1}}\right)\log\frac{\sum_{i \in M_1^{k-1}}\frac{1}{(i+u)^{r_1}Q_{n,u,r_1}}}{\sum_{i \in M_1^{k-1}}\frac{1}{(i+v)^{r_2}Q_{n,v,r_2}}} + \dots + \left(\sum_{i \in M_{l-1}^{k-1}}\frac{1}{(i+u)^{r_1}Q_{n,u,r_1}}\right)\log\frac{\sum_{i \in M_{l-1}^{k-1}}\frac{1}{(i+u)^{r_1}Q_{n,u,r_1}}}{\sum_{i \in M_{l-1}^{k-1}}\frac{1}{(i+v)^{r_2}Q_{n,v,r_2}}}\right\} \ge \dots \ge 0. \qquad (13)$
Proof. 
Let $\psi_i = \frac{1}{(i+u)^{r_1}Q_{n,u,r_1}}$ and $\omega_i = \frac{1}{(i+v)^{r_2}Q_{n,v,r_2}}$, $i \in T_n$. Then, as in the proof of Theorem 4, we get
$\sum_{i=1}^{n}\psi_i\log\psi_i = \sum_{i=1}^{n}\frac{1}{(i+u)^{r_1}Q_{n,u,r_1}}\log\frac{1}{(i+u)^{r_1}Q_{n,u,r_1}} = -Z(Q, u, r_1),$
$\sum_{i=1}^{n}\psi_i\log\omega_i = \sum_{i=1}^{n}\frac{1}{(i+u)^{r_1}Q_{n,u,r_1}}\log\frac{1}{(i+v)^{r_2}Q_{n,v,r_2}} = -\sum_{i=1}^{n}\frac{\log\left((i+v)^{r_2}Q_{n,v,r_2}\right)}{(i+u)^{r_1}Q_{n,u,r_1}},$
$\sum_{i=1}^{n}\omega_i = \sum_{i=1}^{n}\frac{1}{(i+v)^{r_2}Q_{n,v,r_2}} = 1.$
Therefore, using (6) with $\psi_i = \frac{1}{(i+u)^{r_1}Q_{n,u,r_1}}$ and $\omega_i = \frac{1}{(i+v)^{r_2}Q_{n,v,r_2}}$, $i \in T_n$, we obtain (13). ☐
Corollary 7.
Let $n \in \mathbb{N}$, $u, v \ge 0$, $r_1, r_2 > 0$, then
$-Z(Q, u, r_1) + \sum_{i=1}^{n}\frac{\log\left((i+v)^{r_2}Q_{n,v,r_2}\right)}{(i+u)^{r_1}Q_{n,u,r_1}} \ge \max_{M_1^2, M_2^2}\left\{\left(\sum_{i \in M_1^2}\frac{1}{(i+u)^{r_1}Q_{n,u,r_1}}\right)\log\frac{\sum_{i \in M_1^2}\frac{1}{(i+u)^{r_1}Q_{n,u,r_1}}}{\sum_{i \in M_1^2}\frac{1}{(i+v)^{r_2}Q_{n,v,r_2}}} + \left(\sum_{i \in M_2^2}\frac{1}{(i+u)^{r_1}Q_{n,u,r_1}}\right)\log\frac{\sum_{i \in M_2^2}\frac{1}{(i+u)^{r_1}Q_{n,u,r_1}}}{\sum_{i \in M_2^2}\frac{1}{(i+v)^{r_2}Q_{n,v,r_2}}}\right\} \ge 0. \qquad (14)$
Proof. 
By taking k = 2 in (13), we obtain (14). ☐
Corollary 8.
Let $u, v \ge 0$ and $r_1, r_2 > 0$, then
$-Z(Q, u, r_1) + \sum_{i=1}^{n}\frac{\log\left((i+v)^{r_2}Q_{n,v,r_2}\right)}{(i+u)^{r_1}Q_{n,u,r_1}} \ge \max_{k \in T_n}\left\{\frac{1}{(k+u)^{r_1}Q_{n,u,r_1}}\log\frac{(k+v)^{r_2}Q_{n,v,r_2}}{(k+u)^{r_1}Q_{n,u,r_1}} + \frac{(k+u)^{r_1}Q_{n,u,r_1} - 1}{(k+u)^{r_1}Q_{n,u,r_1}}\log\frac{\left((k+u)^{r_1}Q_{n,u,r_1} - 1\right)/\left((k+u)^{r_1}Q_{n,u,r_1}\right)}{\left((k+v)^{r_2}Q_{n,v,r_2} - 1\right)/\left((k+v)^{r_2}Q_{n,v,r_2}\right)}\right\} \ge 0. \qquad (15)$
Proof. 
Using $M_1^2 = \{k\}$ in (14), we obtain (15). ☐
Now we give applications of the above results in linguistics.
In ref. [26], Gelbukh and Sidorov observed the difference between the coefficients $r_1$ and $r_2$ in the Zipf law for the English and Russian languages. They processed 39 literary texts in each language, chosen randomly from different genres, with the requirement that each text be longer than 10,000 running words. They calculated the coefficient for each of the texts and obtained, on average, $r_1 = 0.973863$ for the English language and $r_2 = 0.892869$ for the Russian language.
In the following result, we give the application of inequality (11) for the English language.
Application 1.
Let $n \in \mathbb{N}$ and $\omega_i > 0$ for $i \in T_n$. Then, we have
$-Z(Q, 0, 0.973863) - \sum_{i=1}^{n}\frac{\log\omega_i}{i^{0.973863}Q_{n,0,0.973863}} \ge \max_{M_1^2, M_2^2}\left\{\left(\sum_{i \in M_1^2}\frac{1}{i^{0.973863}Q_{n,0,0.973863}}\right)\log\frac{\sum_{i \in M_1^2}\frac{1}{i^{0.973863}Q_{n,0,0.973863}}}{\sum_{i \in M_1^2}\omega_i} + \left(\sum_{i \in M_2^2}\frac{1}{i^{0.973863}Q_{n,0,0.973863}}\right)\log\frac{\sum_{i \in M_2^2}\frac{1}{i^{0.973863}Q_{n,0,0.973863}}}{\sum_{i \in M_2^2}\omega_i}\right\} \ge \log\frac{1}{\Omega_{T_n}}.$
Similarly, we can give an application for the Russian language.
Now we give an application of the result relating the two parameters $r_1 = 0.973863$ for the English language and $r_2 = 0.892869$ for the Russian language, which is in fact an application of inequality (14).
Application 2.
Let $n \in \mathbb{N}$. Then, we have
$-Z(Q, 0, 0.973863) + \sum_{i=1}^{n}\frac{\log\left(i^{0.892869}Q_{n,0,0.892869}\right)}{i^{0.973863}Q_{n,0,0.973863}} \ge \max_{M_1^2, M_2^2}\left\{\left(\sum_{i \in M_1^2}\frac{1}{i^{0.973863}Q_{n,0,0.973863}}\right)\log\frac{\sum_{i \in M_1^2}\frac{1}{i^{0.973863}Q_{n,0,0.973863}}}{\sum_{i \in M_1^2}\frac{1}{i^{0.892869}Q_{n,0,0.892869}}} + \left(\sum_{i \in M_2^2}\frac{1}{i^{0.973863}Q_{n,0,0.973863}}\right)\log\frac{\sum_{i \in M_2^2}\frac{1}{i^{0.973863}Q_{n,0,0.973863}}}{\sum_{i \in M_2^2}\frac{1}{i^{0.892869}Q_{n,0,0.892869}}}\right\} \ge 0.$
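Application 2 can be evaluated end to end for a small vocabulary. The following sketch of ours uses the two exponents from ref. [26] with $h = 0$ and an illustrative $n = 6$; all three printed quantities should appear in non-increasing order, with the last equal to 0:

```python
import itertools
import math

r1, r2, n = 0.973863, 0.892869, 6   # Zipf exponents for English and Russian from ref. [26]

def zml_weights(r):
    q = sum(1.0 / i ** r for i in range(1, n + 1))
    return [1.0 / (i ** r * q) for i in range(1, n + 1)]

psi, omega = zml_weights(r1), zml_weights(r2)    # both sum to 1, so Omega_{T_n} = 1
Z = -sum(p * math.log(p) for p in psi)           # Z(Q, 0, 0.973863)
lhs = -Z - sum(p * math.log(w) for p, w in zip(psi, omega))
best = -math.inf
for size in range(1, n):
    for M1 in itertools.combinations(range(n), size):
        M2 = [i for i in range(n) if i not in M1]
        val = sum(
            sum(psi[i] for i in M)
            * math.log(sum(psi[i] for i in M) / sum(omega[i] for i in M))
            for M in (M1, M2)
        )
        best = max(best, val)
print(lhs, ">=", best, ">=", 0.0)   # matches the chain in the display above
```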
Remark 5.
In a similar way, applications of the remaining results from Section 3 can be given.

Author Contributions

All authors contributed equally to the final manuscript.

Funding

The research was supported by the Natural Science Foundation of China (Grants No. 61673169, No. 11601485, No. 11701176) and the Natural Science Foundation of the Department of Education of Zhejiang Province (Grant No. Y201635325).

Acknowledgments

The authors express their sincere thanks to the referees for careful reading of the manuscript and very helpful suggestions that improved the current manuscript substantially.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Shannon, C.E. A mathematical theory of communication. Bell Syst. Tech. J. 1948, 27, 623–656. [Google Scholar] [CrossRef]
  2. Latif, N.; Pečarić, Đ.; Pečarić, J. Majorization, Csiszár divergence and Zipf-Mandelbrot law. J. Inequal. Appl. 2017, 2017, 197. [Google Scholar] [CrossRef] [PubMed]
  3. Quastler, H. (Ed.) Essays on the Use of Information Theory in Biology; University of Illinois: Urbana, IL, USA, 1953. [Google Scholar]
  4. Sherwin, W.B. Entropy and information approaches to genetic diversity and its expression: Genomic geography. Entropy 2010, 12, 1765–1798. [Google Scholar] [CrossRef]
  5. Zhou, R.-X.; Cai, R.; Tong, G.-Q. Applications of entropy in finance: A review. Entropy 2013, 15, 4909–4931. [Google Scholar] [CrossRef]
  6. Guisado, J.L.; Jiménez-Morales, F.; Guerra, J.M. Application of shannon’s entropy to classify emergent behaviors in a simulation of laser dynamics. Math. Comput. Model. 2005, 42, 847–854. [Google Scholar] [CrossRef]
  7. Wellmann, J.F.; Regenauer-Lieb, K. Uncertainties have a meaning: Information entropy as a quality measure for 3-D geological models. Tectonophysics 2012, 526–529, 207–216. [Google Scholar] [CrossRef]
  8. Adil Khan, M.; Pečarić, Đ.; Pečarić, J. Bounds for Shannon and Zipf-Mandelbrot entropies. Math. Methods Appl. Sci. 2017, 40, 7316–7322. [Google Scholar] [CrossRef]
  9. Silagadze, Z.K. Citations and the Zipf-Mandelbrot’s law. Complex Syst. 1997, 11, 487–499. [Google Scholar]
  10. Giesen, K.; Suedekum, J. Zipf’s law for cities in the regions and the country. J. Econ. Geogr. 2010, 11, 667–686. [Google Scholar]
  11. Merriam, D.F.; Drew, L.J.; Schuenemeyer, J.H. Zipf’s law: A viable geological paradigm? Nat. Resour. Res. 2004, 13, 265–271. [Google Scholar] [CrossRef]
  12. Newman, M.E.J. Power laws, Pareto distributions and Zipf’s law. Contemp. Phys. 2005, 46, 323–351. [Google Scholar] [CrossRef]
  13. Parkins, S. Website Traffic and Zipf’s Law. 2015. Available online: http://www.linkedin.com/pulse/website-traffic-zipfs-law-shane-parkins?articleId=6064338455452807169 (accessed on 26 October 2015).
  14. Hefeeda, M.; Saleh, O. Traffic modeling and proportional partial caching for peer-to-peer systems. IEEE/ACM Trans. Netw. 2008, 16, 1447–1460. [Google Scholar] [CrossRef]
  15. Neukum, G.; Ivanov, B.A. Crater size distributions and impact probabilities on earth from lunar, terrestrial-planet, and asteroid cratering data. In Hazards Due to Comets & Asteroids; University Arizona Press: Tucson, AZ, USA, 1994. [Google Scholar]
  16. Montemurro, M.A. Beyond the Zipf-Mandelbrot law in quantitative linguistics. Phys. A Stat. Mech. Its Appl. 2001, 300, 567–578. [Google Scholar] [CrossRef]
  17. Manin, D.Y. Mandelbrot’s model for Zipf’s law: Can mandelbrot’s model explain Zipf’s law for language? J. Quant. Ling. 2009, 16, 274–285. [Google Scholar] [CrossRef]
  18. Mouillot, D.; Lepretre, A. Introduction of relative abundance distribution (RAD) indices, estimated from the rank-frequency diagrams (RFD), to assess changes in community diversity. Environ. Monit. Assess. 2000, 63, 279–295. [Google Scholar] [CrossRef]
  19. Lovričević, N.; Pečarić, Đ.; Pečarić, J. Zipf-Mandelbrot law, f-divergences and the Jensen-type interpolating inequalities. J. Inequal. Appl. 2018, 2018, 36. [Google Scholar]
  20. Adil Khan, M.; Ali Khan, G.; Alia, T.; Kilicman, A. On the refinement of Jensen’s inequality. Appl. Math. Comput. 2015, 262, 128–135. [Google Scholar]
  21. Dragomir, S.S. A refinement of Jensen’s inequality with applications for f-divergence measures. Taiwan. J. Math. 2010, 14, 153–164. [Google Scholar] [CrossRef]
  22. Adil Khan, M.; Pečarić, Đ.; Pečarić, J. On Zipf-Mandelbrot entropy. J. Comput. Appl. Math. 2019, 346, 192–204. [Google Scholar] [CrossRef]
  23. Dragomir, S.S. Bounds for the normalised Jensen functional. Bull. Aust. Math. Soc. 2006, 74, 471–478. [Google Scholar] [CrossRef]
  24. Abbaszadeh, S.; Gordji, M.E.; Pap, E.; Szakal, A. Jensen-type inequalities for Sugeno integral. Inf. Sci. 2017, 376, 148–157. [Google Scholar] [CrossRef]
  25. Dragomir, S.S. A new refinement of Jensen’s inequality in linear spaces with application. Math. Comput. Model. 2010, 52, 1497–1505. [Google Scholar] [CrossRef]
  26. Gelbukh, A.; Sidorov, G. Zipf and Heaps laws’ coefficients depend on language. Lect. Notes Comput. Sci. 2001, 2004, 332–335. [Google Scholar]
