Article

Base Dependence of Benford Random Variables

Benford Applied Math, Salem, OR 97304, USA
Stats 2021, 4(3), 578-594; https://doi.org/10.3390/stats4030034
Submission received: 4 May 2021 / Revised: 5 June 2021 / Accepted: 7 June 2021 / Published: 2 July 2021
(This article belongs to the Special Issue Benford's Law(s) and Applications)

Abstract

A random variable X that is base b Benford will not in general be base c Benford when $c \neq b$. This paper builds on two of my earlier papers and is an attempt to cast some light on the issue of base dependence. Following some introductory material, the “Benford spectrum” of a positive random variable is introduced and known analytic results about Benford spectra are summarized. Some standard machinery for a “Benford analysis” is introduced and combined with my method of “seed functions” to yield tools to analyze the base c Benford properties of a base b Benford random variable. Examples are generated by applying these general methods to several families of Benford random variables. Berger and Hill’s concept of “base-invariant significant digits” is discussed. Some potential extensions are sketched.

1. Introduction

My grandfather, the physicist Frank Benford for whom Benford’s Law is named, considered his “law of anomalous numbers” as evidence of a “real world” phenomenon. He realized that geometric sequences and exponential functions are generally base 10 Benford, and on this basis he wrote [1]:
“If the view is accepted that phenomena fall into geometric series, then it follows that the observed logarithmic relationship is not a result of the particular numerical system, with its base, 10, that we have elected to use. Any other base, such as 8, or 12, or 20, to select some of the numbers that have been suggested at various times, would lead to similar relationships; for the logarithmic scales of the new numerical system would be covered by equally spaced steps by the march of natural events. As has been pointed out before, the theory of anomalous numbers is really the theory of phenomena and events, and the numbers but play the poor part of lifeless symbols for living things.”
This argument seems compelling, and it might seem to apply to Benford random variables as well as to geometric sequences and exponential functions. It is therefore somewhat surprising to observe that a random variable that is base b Benford is not generally base c Benford when $c \neq b$. We’ll see some examples shortly.
This paper builds on two of my earlier papers [2,3] and is an attempt to cast some light on the issue of base dependence. It’s organized as follows. Section 2 introduces the significand function and the fractional part notation and gives several logically equivalent definitions of “Benford random variable.” The base b first digit law is introduced, and several examples of random variables are presented that are Benford relative to one base but not to another. Section 3 introduces the “Benford spectrum” $B_X$ of a positive random variable X and summarizes some of the known analytical results that involve $B_X$. Section 4 is a brief digression listing some facts about Fourier transforms that are needed in subsequent sections. Section 5 introduces some fundamental notation and results that provide a framework for the “Benford analysis” of a positive random variable. Section 6 combines the framework of Section 5 with my method of “seed functions” to develop the theory of the base c Benford properties of random variables X that are known to be Benford relative to base b, and Section 7 gives several examples of such random variables. Section 8 discusses Berger and Hill’s concept of “base-invariant significant digits.” Section 9 is a summary and a look ahead.

2. Benford Random Variables

The best way to define Benford random variables is via the significand function. Let b > 1 be a fixed “base.” Any x > 0 may be written uniquely in the form
$x = s \times b^k \quad \text{where } s \in [1, b) \text{ and } k \in \mathbb{Z},$
and the base b significand of x, written $S_b(x)$, is defined as this s. Hence,
$x = S_b(x) \times b^k \quad \text{where } S_b(x) \in [1, b) \text{ and } k \in \mathbb{Z}.$
(Berger and Hill [4] define the significand of x for all $x \in \mathbb{R}$, but we don’t require this generality.)
Now let X be a positive random variable; that is, $\Pr(X > 0) = 1$. Assume that X is continuous with a probability density function (pdf).
Definition 1.
X is base b Benford (or X is b-Benford) if and only if the distribution function of $S_b(X)$ is given by
$\Pr(S_b(X) \le s) = \log_b s \quad \text{for all } s \in [1, b).$
Nothing written above requires that b be an integer. For this paragraph alone, we assume that b is an integer greater than or equal to 3. Let $D_1(X)$ denote the first (i.e., leftmost or most significant) digit of X in the base b representation of X, so $D_1(X) \in \{1, \ldots, b-1\}$. (Leading zeros, if there are any, are ignored.)
Proposition 1.
If X is b-Benford, then
$\Pr(D_1(X) = d) = \log_b\!\left(\frac{d+1}{d}\right)$
for all $d \in \{1, \ldots, b-1\}$. This is the “base b First Digit Law.” To prove it, it is sufficient to observe that $D_1(X) = d$ if and only if $d \le S_b(X) < d+1$.
It’s useful at this point to introduce some non-standard notation. Let $y \in \mathbb{R}$ and recall that the “floor” of y, written $\lfloor y \rfloor$, is defined as the largest integer that is less than or equal to y. Define $\langle y \rangle$ as:
$\langle y \rangle \equiv y - \lfloor y \rfloor$
and note that $0 \le \langle y \rangle < 1$ for every $y \in \mathbb{R}$. We’ll call $\langle y \rangle$ the fractional part of y, though if y < 0 this description is misleading.
If we take the logarithm base b of Equation (1) we obtain
$\log_b x = \log_b S_b(x) + k.$
On the other hand,
$\log_b x = \lfloor \log_b x \rfloor + \langle \log_b x \rangle.$
As $\lfloor \log_b x \rfloor$ is necessarily an integer and $0 \le \log_b S_b(x) < 1$, comparison of Equations (5) and (6) shows that
$\log_b S_b(x) = \langle \log_b x \rangle \quad \text{and} \quad k = \lfloor \log_b x \rfloor$
for any x > 0 .
Using Equation (7), we may rephrase Definition 1 in several logically equivalent ways.
Proposition 2.
X is b-Benford if and only if any one of the following four conditions is met.
(1) $\Pr(\log_b S_b(X) \le \log_b s) = \log_b s$ for all $s \in [1, b)$,
(2) $\Pr(\langle \log_b X \rangle \le u) = u$ for every $u \in [0, 1)$,
(3) $\langle \log_b X \rangle \sim U[0, 1)$,
(4) $X = b^Y$ where $Y \sim U[0, 1)$,
where the notation “$W \sim U[0, 1)$” means that W is uniformly distributed on the half open interval [0, 1). (More generally, I use the symbol “∼” to mean “is distributed as.” Hence, for example, “$X \sim f$” means that X is distributed with pdf f, and “$X_1 \sim X_2$” means that $X_1$ and $X_2$ have the same distribution.)
For any random variable Y, if $\langle Y \rangle \sim U[0, 1)$ we sometimes say that Y is “uniformly distributed modulo one,” abbreviated “u.d. mod 1.” Hence X is b-Benford if and only if $\log_b X$ is u.d. mod 1.
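The equivalences in Proposition 2 are easy to check numerically. The following sketch (my own illustration, not code from the paper; it assumes NumPy) draws Y ∼ U[0, 1), forms X = b^Y, and compares the empirical distribution function of the base-b significand with log_b(s).

```python
import numpy as np

rng = np.random.default_rng(0)

def significand(x, b):
    """Base-b significand: x = S_b(x) * b**k with S_b(x) in [1, b)."""
    k = np.floor(np.log(x) / np.log(b))
    return x / b**k

b = 10.0
Y = rng.random(1_000_000)          # Y ~ U[0, 1)
X = b**Y                           # X is b-Benford by condition (4) of Proposition 2
S = significand(X, b)

for s in [1.5, 2.0, 3.0, 5.0, 8.0]:
    print(f"s = {s:3.1f}   empirical Pr(S_b(X) <= s) = {(S <= s).mean():.4f}"
          f"   log_b(s) = {np.log(s)/np.log(b):.4f}")
```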
With this background we can now give a couple of examples of random variables that are Benford with respect to one base but not to another. Let $Y \sim U[0, 1)$.
Example 1.
Let $X \equiv 10^Y$, so X is 10-Benford. But it’s not 8-Benford as it fails to satisfy the base 8 First Digit Law. To see this, note that the support of X is [1, 10), and let $D_1(X)$ denote the first digit in the base 8 representation of X. Then
$\Pr(D_1(X) = 1) = \Pr(1 \le X < 2) + \Pr(8 \le X < 10) = \log_{10} 2 + \log_{10}(5/4) \approx 0.3979,$
whereas
$\log_8\!\left(\frac{1+1}{1}\right) = \log_8 2 = \frac{1}{3}.$
Example 2.
Let Y be as above, but now let $X \equiv 8^Y$, so X is 8-Benford. Note that the support of X is [1, 8). Let $D_1(X)$ denote the first digit in the base 10 representation of X. Hence $\Pr(D_1(X) \in \{8, 9\}) = 0$, whereas
$\log_{10}\!\left(\frac{9}{8}\right) + \log_{10}\!\left(\frac{10}{9}\right) \approx 0.09691.$
Hence, X fails to satisfy the base 10 First Digit Law.
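A quick Monte Carlo check of Examples 1 and 2 (a sketch of mine, not from the paper) reproduces the numbers above: the base-8 first digit of X = 10^Y equals 1 with probability ≈0.3979 rather than 1/3, and the base-10 first digit of X = 8^Y never equals 8 or 9.

```python
import numpy as np

rng = np.random.default_rng(1)
Y = rng.random(1_000_000)

def first_digit(x, base):
    """First digit of x > 0 in the given base."""
    s = x / base**np.floor(np.log(x) / np.log(base))     # significand in [1, base)
    return np.floor(s).astype(int)

# Example 1: X = 10**Y is 10-Benford but not 8-Benford.
X10 = 10.0**Y
print("Pr(D1 = 1 in base 8)    :", (first_digit(X10, 8) == 1).mean())            # ~0.3979
print("base-8 First Digit Law  :", np.log(2) / np.log(8))                        # 1/3

# Example 2: X = 8**Y is 8-Benford but not 10-Benford.
X8 = 8.0**Y
print("Pr(D1 in {8,9}, base 10):", np.isin(first_digit(X8, 10), [8, 9]).mean())  # 0
print("base-10 First Digit Law :", np.log10(9/8) + np.log10(10/9))               # ~0.09691
```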

3. The Benford Spectrum

Let X be a positive random variable.
Definition 2.
Following Wójcik [5], the “Benford spectrum” of X, denoted $B_X$, is defined as
$B_X \equiv \{\, b \in (1, \infty) : X \text{ is } b\text{-Benford} \,\}.$
The Benford spectrum of X may be empty. In fact, the Benford spectra of essentially all the standard random variables used in statistics are empty.
This section summarizes some of the known facts about Benford spectra. While proofs are provided for Propositions 4 and 6, I’m just going to provide citations for proofs of the other propositions.
Proposition 3
(Berger and Hill [4], page 44, Proposition 4.3 (iii)). A random variable Y is u.d. mod 1 if and only if $kY + c$ is u.d. mod 1 for every integer $k \neq 0$ and every $c \in \mathbb{R}$.
Proposition 4
(Whittaker [6]). If $b \in B_X$, then $b^{1/m} \in B_X$ for all $m \in \mathbb{N}$. In other words, if X is b-Benford, then X is $b^{1/m}$-Benford for all $m \in \mathbb{N}$.
Proof. 
Suppose that X is b-Benford, so $X = b^Y$ where Y is u.d. mod 1. Hence, for any $m \in \mathbb{N}$,
$X = \left(b^{1/m}\right)^{mY}.$
As $b^{1/m} > 1$ and $mY$ is u.d. mod 1 by Proposition 3, it follows that $b^{1/m} \in B_X$.  □
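Proposition 4 can be illustrated numerically: with X = 10^Y, the fractional part of log_c X is uniform for c = b^(1/2) or c = b^(1/3), but not, for example, for c = b^2. The sketch below is my own illustration under those parameter choices, not code from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
b = 10.0
Y = rng.random(1_000_000)
X = b**Y                                   # b-Benford

def frac_log(x, c):
    z = np.log(x) / np.log(c)
    return z - np.floor(z)                 # fractional part of log_c(x)

grid = np.array([0.1, 0.25, 0.5, 0.75, 0.9])
for c, label in [(b**0.5, "c = b**(1/2)"), (b**(1/3), "c = b**(1/3)"), (b**2, "c = b**2")]:
    u = frac_log(X, c)
    ecdf = np.array([(u <= g).mean() for g in grid])
    print(label, np.round(ecdf, 3), "   uniform would give", grid)
```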
Proposition 5.
If $B_X$ is non-empty, then it is bounded above. In other words, no random variable can be b-Benford for arbitrarily large b. Citations: Refs. [3,5,6].
Proposition 6.
If X is b-Benford and c > 0 , then c X is b-Benford.
Proof. 
As X is b-Benford, $Y \equiv \log_b X$ is u.d. mod 1. As $\log_b(cX) = Y + \log_b c$ is u.d. mod 1 by Proposition 3, it follows that cX is b-Benford.  □
We say of this result that the Benford property of a random variable is “scale-invariant.”
Proposition 7.
Suppose that X and W are independent positive random variables and that X is b-Benford. Then the product X W is also b-Benford. Citations: Refs. [3,4,5].
Proposition 8
(a corollary of Proposition 7). If X and W are independent positive random variables, then
$B_X \cup B_W \subseteq B_{XW}.$
So far, the spectra we’ve seen are at most countably infinite. One may wonder if there exists a random variable with an uncountable spectrum. Whittaker showed by an example that such a random variable exists. Let b > 1 be given. Define $g : \mathbb{R} \to \mathbb{R}$ by
$g(y) \equiv \frac{1 - \cos(2\pi y)}{2\pi^2 y^2}.$
It may be shown that g is a legitimate pdf, and Y is u.d. mod 1 if $Y \sim g$. Hence $X \equiv b^Y$ is b-Benford. (This is what I’ve called Whittaker’s random variable.) For any c > 1, define $Y_c \equiv \log_c X$. It may then be shown that $Y_c$ is u.d. mod 1 (and hence that X is c-Benford) if and only if $c \le b$. In summary, $B_X = (1, b]$. Citations: Refs. [3,5,6].

4. Digression: Fourier Transforms

Before going much further, we need to list some facts about Fourier transforms. Let g denote the pdf of a real valued random variable Y. The Fourier transform of g is the function $\hat{g} : \mathbb{R} \to \mathbb{C}$ defined as
$\hat{g}(\xi) \equiv \int_{-\infty}^{\infty} e^{-2\pi i \xi y}\, g(y)\, dy = E\!\left[e^{-2\pi i \xi Y}\right] = u(\xi) - i\, v(\xi)$
for all $\xi \in \mathbb{R}$, where
$u(\xi) \equiv \int_{-\infty}^{\infty} \cos(2\pi \xi y)\, g(y)\, dy = E[\cos(2\pi \xi Y)] \quad \text{and} \quad v(\xi) \equiv \int_{-\infty}^{\infty} \sin(2\pi \xi y)\, g(y)\, dy = E[\sin(2\pi \xi Y)].$
Note that u is an even function and v is an odd function, and hence that $\hat{g}(-\xi) = \overline{\hat{g}(\xi)}$ where the overbar denotes complex conjugation. Though the Fourier transform $\hat{g}(\xi)$ is generally complex valued, it is real valued if g is an even function, i.e., if Y is symmetrically distributed around the origin. Hence, if g is an even function, then $\hat{g}$ is an even function. Finally, note that $\hat{g}(0) = \int_{-\infty}^{\infty} g(y)\, dy = 1$.
The following fact is very useful.
Proposition 9
(shift and scale with random variables). Suppose that $W = \sigma Y + \mu$ where σ > 0. Suppose that $Y \sim g$ and let h denote the pdf of W. We may obtain h from g and $\hat{h}$ from $\hat{g}$ as follows:
$h(w) = \frac{1}{\sigma}\, g\!\left(\frac{w - \mu}{\sigma}\right)$
(proof left to reader) and
$\hat{h}(\xi) = E\!\left[e^{-2\pi i \xi W}\right] = E\!\left[e^{-2\pi i \xi (\sigma Y + \mu)}\right] = e^{-2\pi i \xi \mu}\, \hat{g}(\sigma \xi).$
If μ = 0, Equation (13) becomes $\hat{h}(\xi) = \hat{g}(\sigma \xi)$.
Appendix A of this paper contains a table of selected Fourier transforms.
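As a concrete check of the convention $\hat{g}(\xi) = E[e^{-2\pi i \xi Y}]$ and of Proposition 9, the following sketch (mine, assuming NumPy and SciPy) computes the transform of the N(0, 1) density by quadrature and verifies the shift-and-scale rule for illustrative values μ = 0.7 and σ = 1.5.

```python
import numpy as np
from scipy.integrate import quad

def fourier_transform(g, xi):
    """g_hat(xi) = integral of exp(-2*pi*i*xi*y) g(y) dy, split into real and imaginary parts."""
    re = quad(lambda y: np.cos(2*np.pi*xi*y) * g(y), -np.inf, np.inf)[0]
    im = quad(lambda y: np.sin(2*np.pi*xi*y) * g(y), -np.inf, np.inf)[0]
    return re - 1j*im

g = lambda y: np.exp(-y**2 / 2) / np.sqrt(2*np.pi)      # N(0, 1) density

xi = 0.3
print(fourier_transform(g, xi))                         # ~ exp(-2*pi**2*xi**2), per Table A1
print(np.exp(-2*np.pi**2 * xi**2))

# Proposition 9: W = sigma*Y + mu has h_hat(xi) = exp(-2*pi*i*xi*mu) * g_hat(sigma*xi).
mu, sigma = 0.7, 1.5
h = lambda w: g((w - mu) / sigma) / sigma
print(fourier_transform(h, xi))
print(np.exp(-2j*np.pi*xi*mu) * fourier_transform(g, sigma*xi))
```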

5. A Framework for Benford Analysis

Suppose that X is a positive random variable and that b > 1. We may wish to know whether X is b-Benford and, if it is not, by how far it differs from “Benfordness.” I call an attempt to answer these and related questions a “Benford analysis.” In this section I establish some notation I’ll use for a Benford analysis, and give some fundamental results that allow us to proceed.
First, define
$Y \equiv \log_b X = \Lambda_b \ln X \quad \text{where } \Lambda_b \equiv \frac{1}{\ln b}.$
Next, let
g denote the pdf of Y, and $\tilde{g}$ denote the pdf of $\langle Y \rangle$.
Given $\tilde{g}$ we may answer the two questions given above. (1) X is b-Benford if and only if $\tilde{g}(u) = 1$ for almost all $u \in [0, 1)$. (2) If X is not b-Benford, we may measure its deviation from Benfordness by any measure of the deviation of $\tilde{g}$ from a uniform distribution. For example, if $\tilde{g}$ is continuous, or if its only discontinuities are “jumps,” we could use the infinity norm:
$\| \tilde{g} - 1 \|_\infty \equiv \sup\{\, |\tilde{g}(u) - 1| : 0 \le u < 1 \,\}.$
We need a way to find $\tilde{g}$ from g. Under a reasonable assumption, it may be shown that
$\tilde{g}(u) = \sum_{k \in \mathbb{Z}} g(k + u)$
for all $u \in [0, 1)$. The “reasonable assumption” is described in [2]. In this paper we’ll just accept Equation (16) as given.
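Equation (16) is easy to implement directly: wrap the pdf by summing integer translates and truncate the sum once the tails are negligible. The sketch below is my own illustration for a normal g with arbitrarily chosen mean and standard deviation; it is not code from the paper.

```python
import numpy as np

def wrap_pdf(g, u, K=50):
    """Approximate g_tilde(u) = sum over k of g(k + u), truncated at |k| <= K."""
    k = np.arange(-K, K + 1).reshape(-1, 1)
    return g(k + u).sum(axis=0)

# Illustrative choice: Y = log_b(X) normal with mean 0.3 and standard deviation 0.4.
mu, s = 0.3, 0.4
g = lambda y: np.exp(-(y - mu)**2 / (2*s**2)) / (s*np.sqrt(2*np.pi))

u = np.linspace(0.0, 1.0, 5, endpoint=False)
print(wrap_pdf(g, u))                  # deviations from 1 measure non-Benfordness

uu = np.linspace(0.0, 1.0, 1000, endpoint=False)
print(wrap_pdf(g, uu).mean())          # ~1: g_tilde integrates to 1 over [0, 1)
```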
Although Equation (16) is fundamental for a Benford analysis of X, it is not very useful for finding the answers to some analytical questions one may ask. Fourier analysis provides the tools needed to continue the analysis. It may be shown [3] that the Fourier series representation of $\tilde{g}(u)$ is
$\tilde{g}(u) = \sum_{n \in \mathbb{Z}} \hat{g}(n)\, e^{2\pi i n u} \quad \text{for all } u \in [0, 1).$
At first sight this expression may not seem very useful; the series of real valued functions in Equation (16) has been replaced by a series of complex valued functions multiplied by complex coefficients. But $\hat{g}(0) = 1$, and Equation (17) may be written as
$\tilde{g}(u) = 1 + \sum_{n \in \mathbb{N}} \left[ \hat{g}(n)\, e^{2\pi i n u} + \hat{g}(-n)\, e^{-2\pi i n u} \right].$
As $\hat{g}(-n)\, e^{-2\pi i n u}$ is the complex conjugate of $\hat{g}(n)\, e^{2\pi i n u}$, it follows that each term in brackets in Equation (18) is real valued. In fact,
$\hat{g}(n)\, e^{2\pi i n u} + \hat{g}(-n)\, e^{-2\pi i n u} = a_n \cos(2\pi n u) + b_n \sin(2\pi n u)$
where
$a_n = \hat{g}(n) + \hat{g}(-n) = 2 \int_{-\infty}^{\infty} \cos(2\pi n y)\, g(y)\, dy, \qquad b_n = i\,\hat{g}(n) - i\,\hat{g}(-n) = 2 \int_{-\infty}^{\infty} \sin(2\pi n y)\, g(y)\, dy.$
Combining Equations (18) and (19) yields
$\tilde{g}(u) = 1 + \sum_{n \in \mathbb{N}} \left[ a_n \cos(2\pi n u) + b_n \sin(2\pi n u) \right].$
In practice, it is often convenient to go one step further and rewrite Equation (21) as
$\tilde{g}(u) = 1 + \sum_{n \in \mathbb{N}} A_n \cos\!\left(2\pi n (u - \theta_n)\right)$
where $A_n$ satisfies
$A_n^2 = a_n^2 + b_n^2$
and $\theta_n$ is any solution to
$\cos(2\pi n \theta_n) = \frac{a_n}{A_n} \quad \text{and} \quad \sin(2\pi n \theta_n) = \frac{b_n}{A_n}.$
The parameters $A_n$ and $\theta_n$ are not uniquely determined by Equations (23) and (24), but in practice natural candidates for $A_n$ and $\theta_n$ often present themselves. I’ll call $A_n$ an “amplitude” (though this term generally refers to $|A_n|$) and $\theta_n$ a “phase.”
Proposition 10.
The pdf $\tilde{g}$ is that of a $U[0, 1)$ random variable if and only if $\hat{g}(n) = 0$ for all $n \in \mathbb{N}$. Equivalently, the pdf $\tilde{g}$ is that of a $U[0, 1)$ random variable if and only if $A_n = 0$ for all $n \in \mathbb{N}$.
Proof. 
The first assertion follows from Equation (18) combined with $\hat{g}(-n) = \overline{\hat{g}(n)}$ for any $n \in \mathbb{N}$. The second assertion follows from Equation (22).  □
Proposition 11.
$A_n = 2\,|\hat{g}(n)|$ for all $n \in \mathbb{N}$.
Proof. 
Solving Equation (20) for $\hat{g}(n)$ and $\hat{g}(-n)$, we find
$\hat{g}(n) = \tfrac{1}{2}\left(a_n - i b_n\right), \qquad \hat{g}(-n) = \tfrac{1}{2}\left(a_n + i b_n\right).$
It follows that
$A_n^2 = a_n^2 + b_n^2 = 4\,\hat{g}(n)\,\hat{g}(-n) = 4\,|\hat{g}(n)|^2 \;\Longrightarrow\; A_n = 2\,|\hat{g}(n)|.$
 □
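The following sketch (my own, reusing the illustrative normal g from the earlier wrapping example) reconstructs g̃ from a truncated Fourier series as in Equations (17) and (18), and checks Proposition 11 numerically for n = 1; the quadrature-based transform assumes SciPy.

```python
import numpy as np
from scipy.integrate import quad

mu, s = 0.3, 0.4
g = lambda y: np.exp(-(y - mu)**2 / (2*s**2)) / (s*np.sqrt(2*np.pi))

def g_hat(xi):
    """g_hat(xi) = E[exp(-2*pi*i*xi*Y)], computed by quadrature."""
    re = quad(lambda y: np.cos(2*np.pi*xi*y) * g(y), -np.inf, np.inf)[0]
    im = quad(lambda y: np.sin(2*np.pi*xi*y) * g(y), -np.inf, np.inf)[0]
    return re - 1j*im

u = np.linspace(0.0, 1.0, 5, endpoint=False)
g_tilde = np.ones(len(u), dtype=complex)
for n in range(1, 11):
    g_tilde += g_hat(n)*np.exp(2j*np.pi*n*u) + g_hat(-n)*np.exp(-2j*np.pi*n*u)
print(g_tilde.real)                    # matches the wrapped pdf computed from Equation (16)

# Proposition 11: A_n = 2*|g_hat(n)| (checked for n = 1).
a1 = (g_hat(1) + g_hat(-1)).real
b1 = (1j*g_hat(1) - 1j*g_hat(-1)).real
print(np.sqrt(a1**2 + b1**2), 2*abs(g_hat(1)))
```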

6. Base Dependence: Theory

Suppose we’re given a u.d. mod 1 random variable Y with pdf g and b > 1. Then $X \equiv b^Y$ is b-Benford. Now let c > 1 be another possible base and define $Y_c \equiv \log_c X$. Let $g_c$ and $\tilde{g}_c$ denote the pdfs of $Y_c$ and $\langle Y_c \rangle$, respectively, and let $\hat{g}_c$ denote the Fourier transform of $g_c$. My aim in this section is to present tools that allow one to study how $\tilde{g}_c$ varies as a function of c.
The first thing to observe is that $Y_c$ is proportional to Y:
$Y_c = \frac{\ln X}{\ln c} = \frac{\ln b}{\ln c} \cdot \frac{\ln X}{\ln b} = \rho\, Y \quad \text{where } \rho \equiv \frac{\ln b}{\ln c}.$
It then follows from Proposition 9 that
$\hat{g}_c(\zeta) = \hat{g}(\rho\, \zeta)$
for any $\zeta \in \mathbb{R}$.
To use Equation (28) we first need to say something about g. I introduced “seed functions” in [2] and showed that every pdf g of a u.d. mod 1 random variable may be written
$g(y) = H(y) - H(y - 1)$
for every $y \in \mathbb{R}$, where H is a seed function. Hence
$\hat{g}(\xi) = \int_{-\infty}^{\infty} e^{-2\pi i \xi y} \left[ H(y) - H(y - 1) \right] dy.$
Under various assumptions about H, we may combine Equations (28) and (30) to compute $\hat{g}_c(n)$ for all $n \in \mathbb{Z}$, and given $\hat{g}_c(n)$ we may compute $A_n$ and $\theta_n$ in the expression
$\hat{g}_c(n)\, e^{2\pi i n u} + \hat{g}_c(-n)\, e^{-2\pi i n u} = A_n \cos\!\left(2\pi n (u - \theta_n)\right)$
for all $n \in \mathbb{N}$, and thereby derive $\tilde{g}_c$. In this section I’ll partially carry out this program for two broad classes of seed functions: (1) H is a step function, and (2) H is increasing and absolutely continuous.
Example 3.
Suppose that H is the following step function:
$H(y) = \begin{cases} 0 & \text{if } y < -\tfrac{1}{2}, \\ 1 & \text{if } y \ge -\tfrac{1}{2}. \end{cases}$
This seed function implies that
$g(y) = \begin{cases} 1 & \text{if } y \in [-\tfrac{1}{2}, \tfrac{1}{2}), \\ 0 & \text{otherwise}. \end{cases}$
Hence, from the table of Fourier transforms in Appendix A,
$\hat{g}(\xi) = \frac{\sin(\pi \xi)}{\pi \xi}$
(where it is understood that $\hat{g}(0) = 1$). Combining this with Equation (28) yields
$\hat{g}_c(n) = \frac{\sin(\pi \rho n)}{\pi \rho n}$
for any $n \neq 0$. From Proposition 10 we know that X will be c-Benford if and only if $\hat{g}_c(n) = 0$ for every $n \in \mathbb{N}$, and from Equation (33) it’s clear that this happens if and only if ρ is an integer. But
$\rho = \frac{\ln b}{\ln c} = m \iff c = b^{1/m}$
for every $m \in \mathbb{N}$. Hence, X is c-Benford if and only if c is an integral root of b. This result agrees with Proposition 4.
Certain features of this result are repeated with every seed function H we consider. In particular, we always find that $\hat{g}_c(n) = 0$ for all $n \in \mathbb{N}$ whenever c is an integral root of b. Also, note that $\hat{g}_c(n)$ depends on c entirely through the parameter ρ.
Equation (33) implies that
$A_n = \frac{2\sin(\pi \rho n)}{\pi \rho n}, \qquad \theta_n = 0$
for this example.
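A few lines of code make the role of ρ in Example 3 concrete. In the sketch below (my illustration, not from the paper), the harmonics $\hat{g}_c(n)$ are computed with NumPy's sinc, which is defined as sin(πx)/(πx); they vanish for every n exactly when ρ is an integer, i.e., when c is an integral root of b.

```python
import numpy as np

def harmonics(rho, N=5):
    """|g_hat_c(n)| for n = 1..N; np.sinc(x) is sin(pi*x)/(pi*x)."""
    n = np.arange(1, N + 1)
    return np.abs(np.sinc(rho * n))

b = 10.0
for c in [b**0.5, b**(1/3), 7.0]:
    rho = np.log(b) / np.log(c)
    print(f"c = {c:7.4f}, rho = {rho:.4f}, |g_hat_c(n)| =", np.round(harmonics(rho), 4))
```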
Example 4.
To generalize Example 3 slightly, suppose that H jumps from 0 to 1 at $\mu - \tfrac{1}{2}$ for some $\mu \in \mathbb{R}$. The pdf g implied by this seed function is just that given by Equation (31) shifted right by μ. From Proposition 9 and Equation (32) we obtain
$\hat{g}(\xi) = e^{-2\pi i \xi \mu}\, \frac{\sin(\pi \xi)}{\pi \xi}$
and hence
$\hat{g}_c(n) = e^{-2\pi i \rho n \mu}\, \frac{\sin(\pi \rho n)}{\pi \rho n}$
for any $n \in \mathbb{N}$. Note that $\hat{g}_c(n) = 0$ for all $n \in \mathbb{N}$ if and only if $c = b^{1/m}$ for some $m \in \mathbb{N}$. Equation (36) implies that
$A_n = \frac{2\sin(\pi \rho n)}{\pi \rho n}, \qquad \theta_n = \rho \mu$
for this example. The only effect of including μ is to change the phase. Note that the phase does not depend on n.
Now assume that H is increasing and absolutely continuous. This assumption makes H mathematically equivalent to the distribution function of an absolutely continuous random variable. Under this assumption H is differentiable almost everywhere and $h(y) \equiv H'(y) \ge 0$. We want to evaluate the integral
$\hat{g}(\xi) = \int_{-\infty}^{\infty} e^{-2\pi i \xi y} \left[ H(y) - H(y - 1) \right] dy = E\!\left[e^{-2\pi i \xi Y}\right].$
It’s clear from the rightmost expression in this equation that $\hat{g}(0) = 1$. When $\xi \neq 0$, an initial integration by parts yields
$\hat{g}(\xi) = \frac{1}{2\pi i \xi} \int_{-\infty}^{\infty} e^{-2\pi i \xi y} \left[ h(y) - h(y - 1) \right] dy.$
Evaluating this integral,
$\hat{g}(\xi) = \frac{1}{2\pi i \xi} \left(1 - e^{-2\pi i \xi}\right) \hat{h}(\xi) = \frac{e^{-i\pi \xi}}{2\pi i \xi} \left(e^{i\pi \xi} - e^{-i\pi \xi}\right) \hat{h}(\xi) = \frac{e^{-i\pi \xi}}{\pi \xi}\, \sin(\pi \xi)\, \hat{h}(\xi).$
Hence,
$\hat{g}_c(n) = \frac{e^{-i\pi \rho n}}{\pi \rho n}\, \sin(\pi \rho n)\, \hat{h}(\rho n)$
for any $n \neq 0$. We see once again that $\hat{g}_c(n) = 0$ for all $n \in \mathbb{N}$ whenever c is an integral root of b. In addition, there’s another possibility: $\hat{g}_c(n) = 0$ for all $n \in \mathbb{N}$ if $\hat{h}(\rho n) = 0$ for all $n \in \mathbb{N}$. This is essentially the possibility that was exploited in the construction of Whittaker’s random variable. We’ll return to this point in a moment.
Example 5.
Still working with the assumption that H is increasing and absolutely continuous, we now make the additional assumption that h is an even function, which implies that $\hat{h}$ is an even function. Under these assumptions, Equation (38) implies that
$A_n = \frac{2\sin(\pi \rho n)\, \hat{h}(\rho n)}{\pi \rho n}, \qquad \theta_n = \tfrac{1}{2}\rho.$
Example 6.
In Example 5 we assume that h is even, so that it’s symmetrical around the point y = 0. Now assume that h is symmetrical around the point y = μ for some $\mu \in \mathbb{R}$. Define $h_0(y) \equiv h(y + \mu)$ so $h_0$ is an even function. It is easy to show that $\hat{h}(\xi) = e^{-2\pi i \xi \mu}\, \hat{h}_0(\xi)$. Combining this fact with Equation (38) yields
$\hat{g}_c(n) = \frac{e^{-i\pi \rho n}}{\pi \rho n}\, \sin(\pi \rho n)\, e^{-2\pi i \rho n \mu}\, \hat{h}_0(\rho n) = \frac{e^{-2\pi i \rho n \left(\frac{1}{2} + \mu\right)}}{\pi \rho n}\, \sin(\pi \rho n)\, \hat{h}_0(\rho n).$
Equation (40) implies that
$A_n = \frac{2\sin(\pi \rho n)\, \hat{h}_0(\rho n)}{\pi \rho n}, \qquad \theta_n = \rho\left(\tfrac{1}{2} + \mu\right)$
for Example 6. We observe that the phase depends on μ and ρ, but not on n.
Note that $A_n$ and $\theta_n$ depend on c only through ρ in all of these examples. It’s useful to keep in mind that ρ depends on c as shown in Figure 1 (where I’ve let b = 16). In words, ρ increases from 1 to ∞ as c decreases from b towards 1.

7. Base Dependence: Examples

Equations (38) and (41) provide the scaffolding for the construction of $\tilde{g}_c$, but require insertion of an actual formula for $\hat{h}$ in Equation (38) or $\hat{h}_0$ in Equation (41) for completion. This section completes this construction using the table of Fourier transforms in Appendix A.
Every distribution function is a legitimate seed function. Hence every Fourier transform given in Appendix A is a legitimate candidate for $\hat{h}$. Moreover, four of the distributions in Appendix A (the normal, Laplace, Cauchy, and logistic) have even density functions, and their Fourier transforms are therefore legitimate candidates for $\hat{h}_0$. All four of these distributions have fixed variances, however, and it is desirable to append a “scale” parameter σ that allows these variances to be adjusted. Proposition 9 justifies the following expanded table of Fourier transforms.
Example 7.
Gauss-Benford random variables. Suppose that H is the distribution function of a $N(\mu, \sigma)$ random variable, i.e., a $N(0, \sigma)$ random variable shifted μ to the right. I’ll call the random variable X implied by this seed function a “Gauss-Benford” random variable. Combining Equation (41) with the appropriate entry from Table 1, we obtain
$A_n = \frac{2\sin(\pi \rho n)}{\pi \rho n}\, \exp\!\left(-2\pi^2 \sigma^2 \rho^2 n^2\right), \qquad \theta_n = \rho\left(\tfrac{1}{2} + \mu\right).$
As $\exp\!\left(-2\pi^2 \sigma^2 \rho^2 n^2\right) > 0$, it follows that $B_X = \{\, b^{1/m} : m \in \mathbb{N} \,\}$. Let
$A_n^* = \frac{2}{\pi \rho n}\, \exp\!\left(-2\pi^2 \sigma^2 \rho^2 n^2\right),$
so $A_n = \sin(\pi \rho n)\, A_n^*$. Viewed as a function of n or ρ, $A_n$ oscillates within an envelope $\left[-A_n^*, A_n^*\right]$, and $|A_n| \le A_n^*$ for all n, σ, and ρ. Asymptotically, letting any of the parameters n, ρ, or σ go to ∞ implies that $A_n^* \to 0$. Equation (43) implies that $A_1^* > A_2^* > \cdots$. The descent of $A_n^*$ towards zero with increases in n, ρ, or σ is extremely rapid, and $A_1^*$ can be small even with low values of ρ and σ. For example, letting ρ = σ = 1 implies that $A_1^* \approx 1.7 \times 10^{-9}$. In this case, the graph of $\tilde{g}_c$ is visually indistinguishable from that of a uniform distribution on [0, 1) and we would have to conclude that X is “effectively” c-Benford for all $c \le b$.
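The size of the Gauss-Benford envelope is easy to confirm numerically. The sketch below (mine, with illustrative parameter choices) reproduces $A_1^* \approx 1.7 \times 10^{-9}$ at ρ = σ = 1 and then reconstructs g̃_c for a non-integral ρ, where it is numerically indistinguishable from the uniform density.

```python
import numpy as np

def envelope(n, rho, sigma):
    """Gauss-Benford envelope A_n* from Equation (43)."""
    return 2.0 / (np.pi*rho*n) * np.exp(-2*np.pi**2 * sigma**2 * rho**2 * n**2)

print(envelope(1, 1.0, 1.0))                   # ~1.7e-9, as stated in the text

# Reconstruct g_tilde_c via Equation (22) for a non-integral rho (illustrative c = 7, b = 10).
b, c, sigma, mu = 10.0, 7.0, 1.0, 0.0
rho = np.log(b) / np.log(c)
u = np.linspace(0.0, 1.0, 5, endpoint=False)
g_tilde = np.ones_like(u)
for n in range(1, 6):
    A_n = np.sin(np.pi*rho*n) * envelope(n, rho, sigma)
    theta_n = rho * (0.5 + mu)
    g_tilde += A_n * np.cos(2*np.pi*n*(u - theta_n))
print(g_tilde)                                 # indistinguishable from the uniform density
```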
Example 8.
Laplace-Benford random variables. Now suppose that H is the distribution function of a Laplace(μ, σ) random variable. I’ll call the random variable X implied by this seed function a “Laplace-Benford” random variable. Combining Equation (41) with the appropriate entry from Table 1, we obtain
$A_n = \frac{2\sin(\pi \rho n)}{\pi \rho n} \cdot \frac{1}{1 + 4\pi^2 \sigma^2 \rho^2 n^2}, \qquad \theta_n = \rho\left(\tfrac{1}{2} + \mu\right).$
As $\left(1 + 4\pi^2 \sigma^2 \rho^2 n^2\right)^{-1} > 0$, it follows that $B_X = \{\, b^{1/m} : m \in \mathbb{N} \,\}$. Let
$A_n^* = \frac{2}{\pi \rho n} \cdot \frac{1}{1 + 4\pi^2 \sigma^2 \rho^2 n^2},$
so $A_n = \sin(\pi \rho n)\, A_n^*$ and $|A_n| \le A_n^*$ for all n, σ, and ρ. Asymptotically, letting any of the parameters n, ρ, or σ go to ∞ implies that $A_n^* \to 0$. Though the asymptotic limits of Equations (43) and (45) are identical, the approach of $A_n^*$ to zero (as n, ρ, or σ increases) is very much slower for Equation (45) than it is for Equation (43).
Example 9.
Cauchy-Benford random variables. Now suppose that H is the distribution function of a Cauchy(μ, σ) random variable. I’ll call the random variable X implied by this seed function a “Cauchy-Benford” random variable. Combining Equation (41) with the appropriate entry from Table 1, we obtain
$A_n = \frac{2\sin(\pi \rho n)}{\pi \rho n}\, e^{-2\pi \sigma \rho n}, \qquad \theta_n = \rho\left(\tfrac{1}{2} + \mu\right).$
As $e^{-2\pi \sigma \rho n} > 0$, it follows that $B_X = \{\, b^{1/m} : m \in \mathbb{N} \,\}$. Let
$A_n^* = \frac{2}{\pi \rho n}\, e^{-2\pi \sigma \rho n}$
so $A_n = \sin(\pi \rho n)\, A_n^*$. The asymptotic behavior for this $A_n^*$ is identical to that of Equations (43) or (45). The rate of descent of $A_n^*$ towards zero is intermediate between that of a Gauss-Benford random variable and that of a Laplace-Benford random variable.
Example 10.
Logistic-Benford random variables. For our final example of a symmetric seed function, let H be the distribution function of a logistic(μ, σ) random variable. I’ll call the random variable X implied by this seed function a “Logistic-Benford” random variable. Combining Equation (41) with the appropriate entry from Table 1, we obtain
$A_n = \frac{2\sin(\pi \rho n)}{\pi \rho n} \cdot \frac{2\pi^2 \sigma \rho n}{\sinh(2\pi^2 \sigma \rho n)} = \sin(\pi \rho n)\, A_n^*, \qquad \theta_n = \rho\left(\tfrac{1}{2} + \mu\right)$
where
$A_n^* = \frac{4\pi \sigma}{\sinh(2\pi^2 \sigma \rho n)} > 0.$
The asymptotic behavior for this A n * is identical to that of the previous three random variables. The rate of convergence of A n * to zero is comparable to that of a Cauchy-Benford random variable.
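To compare the four symmetric families side by side, the following sketch (my own) tabulates the first-harmonic envelopes $A_1^*$ of Examples 7–10 for a fixed ρ and a few values of σ; the Gaussian envelope collapses fastest, the Laplace envelope slowest, with Cauchy and logistic in between. The values ρ = 1.5 and the σ grid are arbitrary choices.

```python
import numpy as np

def A1_star(rho, sigma):
    """First-harmonic envelopes for the Gauss, Laplace, Cauchy, and logistic seeds."""
    gauss    = 2/(np.pi*rho) * np.exp(-2*np.pi**2 * sigma**2 * rho**2)
    laplace  = 2/(np.pi*rho) / (1 + 4*np.pi**2 * sigma**2 * rho**2)
    cauchy   = 2/(np.pi*rho) * np.exp(-2*np.pi*sigma*rho)
    logistic = 4*np.pi*sigma / np.sinh(2*np.pi**2 * sigma*rho)
    return gauss, laplace, cauchy, logistic

rho = 1.5
print("sigma      Gauss       Laplace     Cauchy      Logistic")
for sigma in [0.1, 0.3, 1.0]:
    print(f"{sigma:5.1f}  " + "  ".join(f"{v:10.2e}" for v in A1_star(rho, sigma)))
```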
Example 11.
Gamma-Benford random variables. Suppose that the seed function H is the distribution function of a $\Gamma(\alpha, \beta)$ random variable. I’ll call the random variable X implied by this seed function a “Gamma-Benford” random variable. This seed function is increasing and absolutely continuous, but h is not symmetrically distributed around any point μ, so Equation (41) does not apply. Combining Equation (38) with the appropriate entry from the table of Fourier transforms found in Appendix A, we obtain
$\hat{g}_c(n) = \frac{e^{-i\pi \rho n}}{\pi \rho n}\, \sin(\pi \rho n)\, \left(1 + 2\pi i \beta \rho n\right)^{-\alpha}$
for every integer $n \neq 0$. To make headway, define
$z_n \equiv 1 + 2\pi i \beta \rho n = 1 + i\, y_n \quad \text{where } y_n \equiv 2\pi \beta \rho n,$
and rewrite $z_n$ in polar form, so
$z_n = r_n\, e^{i \phi_n} \quad \text{where } r_n \equiv \sqrt{1 + y_n^2}, \quad \tan \phi_n = y_n.$
Hence,
$\hat{g}_c(n) = \frac{e^{-i\pi \rho n}}{\pi \rho n}\, \sin(\pi \rho n)\, r_n^{-\alpha}\, e^{-i \alpha \phi_n} = \frac{\sin(\pi \rho n)}{\pi \rho n}\, r_n^{-\alpha}\, e^{-2\pi i n \theta_n}$
where
$\theta_n \equiv \tfrac{1}{2}\rho + \frac{\alpha \phi_n}{2\pi n}.$
Hence,
$\hat{g}_c(n)\, e^{2\pi i n u} + \hat{g}_c(-n)\, e^{-2\pi i n u} = \frac{\sin(\pi \rho n)}{\pi \rho n}\, r_n^{-\alpha} \left[ e^{2\pi i n (u - \theta_n)} + e^{-2\pi i n (u - \theta_n)} \right] = \frac{2\sin(\pi \rho n)}{\pi \rho n}\, r_n^{-\alpha} \cos\!\left(2\pi n (u - \theta_n)\right) = A_n \cos\!\left(2\pi n (u - \theta_n)\right)$
where
$A_n \equiv \frac{2\sin(\pi \rho n)}{\pi \rho n}\, r_n^{-\alpha}.$
To “compare and contrast” these results with those with symmetric distributions, we make the following observations. (1) The presence of $\sin(\pi \rho n)$ in the numerator of Equation (56), combined with $r_n^{-\alpha} > 0$, implies that $A_n = 0$ for all $n \in \mathbb{N}$ if and only if ρ is an integer, i.e., if and only if c is an integral root of b. (2) Unlike our earlier results, where the phase $\theta_n$ is given by Equation (41) and does not depend on n, for a Gamma-Benford random variable the phase is given by Equation (54). It’s easy to show that $\phi_n \to \tfrac{1}{2}\pi$ as $n \to \infty$, and hence that $\theta_n \to \tfrac{1}{2}\rho$. (3) It’s easy to show that
$A_n^* \equiv \frac{2}{\pi \rho n}\, r_n^{-\alpha} \to 0 \quad \text{as } \rho n \to \infty.$
Example 12.
Whittaker-Benford random variables: For our final example, we return to Equation (38),
$\hat{g}_c(n) = \frac{e^{-i\pi \rho n}}{\pi \rho n}\, \sin(\pi \rho n)\, \hat{h}(\rho n),$
which holds for all increasing and absolutely continuous seed functions. All of our previous examples have made use of the fact that $\sin(\pi \rho n) = 0$ for all $n \in \mathbb{N}$ whenever ρ is an integer. We now consider another possibility: $\hat{g}_c(n) = 0$ for all $n \in \mathbb{N}$ if $\hat{h}(\rho n) = 0$ for all $n \in \mathbb{N}$. I’ll say that a b-Benford random variable X satisfying this condition is a “Whittaker-Benford” random variable. The key here is to find $\hat{h}$ with bounded support, and the simplest such $\hat{h}$ is triangular:
$\hat{h}(\xi) = \max\!\left(0,\; 1 - \frac{|\xi|}{\gamma}\right),$
where γ > 0. With this $\hat{h}$ it’s clear that $\hat{h}(\rho n) = 0$ for all $n \in \mathbb{N}$ if $\rho \ge \gamma$. Note that $\rho \ge \gamma \iff c \le b^{1/\gamma}$. Therefore, the Benford spectrum $B_X$ of a Whittaker-Benford random variable X with $\hat{h}$ given by Equation (57) has two (overlapping) components: $B_X = B_X^d \cup B_X^c$ where
$B_X^d \equiv \{\, b^{1/m} : m \in \mathbb{N} \,\}, \qquad B_X^c \equiv (1,\, b^{1/\gamma}].$
(The superscript d stands for “discrete,” and the superscript c stands for “continuous.”) If $\gamma \le 1$, then $B_X^d \subseteq B_X^c$. For example, if $\gamma = \tfrac{1}{2}$ then $B_X = B_X^c = (1, b^2]$. On the other hand, if $\gamma > 1$, then $B_X^c = (1, b^{1/\gamma}] \subset (1, b]$, so $B_X$ equals the disjoint union of the discrete set $B_X^d \setminus B_X^c$ and the continuous set $B_X^c$.
The function h that yields the $\hat{h}$ given by Equation (57) is
$h(y) = \frac{1 - \cos(2\pi \gamma y)}{2\gamma \pi^2 y^2}.$
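A numerical check of the Whittaker-Benford construction (a sketch of mine, assuming SciPy): inverting the triangular transform over its finite support recovers the closed-form h given above, and the harmonics $\hat{h}(\rho n)$ all vanish once ρ ≥ γ, i.e., once c ≤ b^(1/γ). The value γ = 1.5 is an arbitrary illustration.

```python
import numpy as np
from scipy.integrate import quad

gamma = 1.5
h_hat = lambda xi: max(0.0, 1.0 - abs(xi)/gamma)                              # Equation (57)
h_closed = lambda y: (1 - np.cos(2*np.pi*gamma*y)) / (2*gamma*np.pi**2*y**2)

def h_from_inversion(y):
    # h_hat is real, even, and supported on [-gamma, gamma], so
    # h(y) = 2 * integral over [0, gamma] of h_hat(xi) * cos(2*pi*xi*y) dxi.
    return 2*quad(lambda xi: h_hat(xi) * np.cos(2*np.pi*xi*y), 0.0, gamma)[0]

for y in [0.3, 0.8, 2.0]:
    print(y, round(h_from_inversion(y), 6), round(float(h_closed(y)), 6))

# Harmonics h_hat(rho*n): all of them vanish once rho >= gamma, i.e. once c <= b**(1/gamma).
for rho in [1.2, 1.4, 1.5, 2.7]:
    print("rho =", rho, [round(h_hat(rho*n), 3) for n in range(1, 5)])
```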

8. On “Base-Invariant Significant Digits”

I wish to acknowledge that I first encountered many of the ideas discussed in this section in Michal Wójcik’s admirable paper [5]. All citations to Berger and Hill in this section are to their text, reference [4].
Proposition 12.
If X is b-Benford, then $X^n$ is b-Benford for any $n \in \mathbb{N}$.
Proof. 
As X is b-Benford, $X = b^Y$ where Y is u.d. mod 1. Hence $X^n = b^{nY}$. But $nY$ is u.d. mod 1 by Proposition 3. Therefore $X^n$ is b-Benford.  □
Corollary 1.
As $X^n = \left(b^n\right)^Y$, it follows that $X^n$ is $b^n$-Benford if X is b-Benford.
Corollary 2.
If X is b-Benford, then $S_b(X) \sim S_b(X^n)$ for any $n \in \mathbb{N}$. This follows from Definition 1.
One may wonder if the converse of Corollary 2, namely
if $S_b(X) \sim S_b(X^n)$ for all $n \in \mathbb{N}$, then X is b-Benford,
is true. The answer is “no.” Here’s a counterexample. If $X \equiv 1$, then $S_b(X) \sim S_b(X^n)$ for all $n \in \mathbb{N}$, but X is not b-Benford. In fact, any X of the form $b^m$ where $m \in \mathbb{Z}$ is a counterexample, as $S_b(X) = 1 = S_b(X^n)$.
Proposition 13.
If $S_b(X) \sim S_b(X^n)$ for all $n \in \mathbb{N}$, then either X is b-Benford, or $S_b(X) = 1$. We’ll provide a proof in a moment.
Definition 3.
Let
$S_b(X) \sim S_b(X^n) \quad \text{for all } n \in \mathbb{N}$
be called Wójcik’s condition.
Here’s another way to state Proposition 13. (This is Wójcik’s Theorem 19.)
Proposition 14.
X satisfies Wójcik’s condition if and only if the distribution function of $S_b(X)$ is given by
$\Pr(S_b(X) \le s) = q + (1 - q)\log_b s$
for some $q \in [0, 1]$ and for all $s \in [1, b)$.
To prove Proposition 13 or 14, we first massage Wójcik’s condition into an alternative form. Let X be a positive random variable and define $Y \equiv \log_b X$. For all $n \in \mathbb{N}$,
$S_b(X) \sim S_b(X^n) \iff \log_b S_b(X) \sim \log_b S_b(X^n) \iff \langle \log_b X \rangle \sim \langle \log_b X^n \rangle = \langle n \log_b X \rangle \iff \langle Y \rangle \sim \langle n Y \rangle = \langle n \langle Y \rangle \rangle$
where the last equality follows from the identity $\langle n y \rangle = \langle n \lfloor y \rfloor + n \langle y \rangle \rangle = \langle n \langle y \rangle \rangle$ for any $y \in \mathbb{R}$.
Berger and Hill ([4], Lemma 5.15, page 77) show the following.
Proposition 15.
For any random variable Y, the relation $\langle Y \rangle \sim \langle n Y \rangle$ for all $n \in \mathbb{N}$ holds if and only if
$\Pr(\langle Y \rangle \le u) = q + (1 - q)\, u \quad \text{for all } u \in [0, 1)$
for some $q \in [0, 1]$.
Propositions 13 and 14 are straightforward corollaries of Proposition 15.
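Proposition 15, and hence Wójcik’s condition, can be illustrated by simulation. The sketch below (my own, with the arbitrary choice q = 0.3) draws ⟨Y⟩ from the mixture q·δ₀ + (1 − q)·U[0, 1) and confirms that ⟨nY⟩ has the same distribution function q + (1 − q)u for several values of n.

```python
import numpy as np

rng = np.random.default_rng(3)
q, size = 0.3, 1_000_000
is_atom = rng.random(size) < q
Y = np.where(is_atom, 0.0, rng.random(size))   # <Y> ~ q*delta_0 + (1 - q)*U[0, 1)

frac = lambda z: z - np.floor(z)
grid = np.array([0.1, 0.25, 0.5, 0.75, 0.9])
ecdf = lambda v: np.array([(v <= g).mean() for g in grid])

print("q + (1 - q)u :", q + (1 - q)*grid)
print("<Y>          :", np.round(ecdf(Y), 3))
for n in [2, 3, 7]:
    print(f"<{n}Y>         :", np.round(ecdf(frac(n*Y)), 3))
```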
I bring these facts to the reader’s attention because Wójcik’s condition is effectively equivalent to Berger and Hill’s notion of “base-invariant significant digits” and sheds some light on this notion. (I say “effectively equivalent” as Berger and Hill’s concept applies to a probability measure P, whereas Wójcik’s condition applies to a random variable X.)
Here’s Berger and Hill’s definition (Definition 5.10, page 75). Let $\mathcal{A} \supseteq \mathcal{S}$ be a σ-algebra on $\mathbb{R}^+$. A probability measure P on $(\mathbb{R}^+, \mathcal{A})$ has base-invariant significant digits if $P(A) = P\!\left(A^{1/n}\right)$ for all $A \in \mathcal{S}$ and all $n \in \mathbb{N}$.
Here’s a guide to the symbols used in this definition. (1) $\mathcal{S}$ is the σ-algebra generated by the significand function $S_b$. (2) $\mathbb{R}^+ \equiv (0, \infty)$, the set of strictly positive real numbers. (3) For any $A \subseteq \mathbb{R}^+$ and $n \in \mathbb{N}$, $A^{1/n} \equiv \{\, x > 0 : x^n \in A \,\}$. Also, it’s useful at this point to introduce one more bit of non-standard notation used by Berger and Hill: for every $x \in \mathbb{R}$ and every set $C \subseteq \mathbb{R}$, let $x C \equiv \{\, x c : c \in C \,\}$.
The following proposition (showing the effective equivalence of Wójcik’s condition and Berger and Hill’s base-invariant significant digits) is the major result of this section.
Proposition 16.
Suppose that $X \sim (\mathbb{R}^+, \mathcal{B}(\mathbb{R}^+), P)$. Then $S_b(X) \sim S_b(X^n)$ for all $n \in \mathbb{N}$ if and only if P has base-invariant significant digits.
Proof. 
We begin by proving that Wójcik’s condition holds whenever P has base-invariant significant digits. Suppose that $A \in \mathcal{S}$. From the definition of $A^{1/n}$, we have $X \in A^{1/n} \iff X^n \in A$. Hence,
$P\!\left(A^{1/n}\right) = \Pr\!\left(X \in A^{1/n}\right) = \Pr\!\left(X^n \in A\right).$
If P has base-invariant significant digits, then
$P\!\left(A^{1/n}\right) = P(A) = \Pr(X \in A).$
Combining Equations (64) and (65), we see that
$\Pr(X \in A) = \Pr\!\left(X^n \in A\right)$
whenever P has base-invariant significant digits. As $A \in \mathcal{S}$ there exists a set $A_0 \in \mathcal{B}[1, b)$ such that
$A = \bigcup_{k \in \mathbb{Z}} b^k A_0.$
In fact, $A_0 = S_b(A) \equiv \{\, S_b(x) : x \in A \,\}$. Hence
$X \in A \iff S_b(X) \in A_0, \qquad X^n \in A \iff S_b(X^n) \in A_0.$
Combining Equations (66) and (68), we conclude that
$\Pr(S_b(X) \in A_0) = \Pr(S_b(X^n) \in A_0).$
As this equation holds for every $A_0 \in \mathcal{B}[1, b)$, we conclude that $S_b(X) \sim S_b(X^n)$ whenever $X \sim (\mathbb{R}^+, \mathcal{B}(\mathbb{R}^+), P)$ and P has base-invariant significant digits.
To prove that Wójcik’s condition implies that P has base-invariant significant digits, we essentially reverse this chain of logic. Wójcik’s condition implies Equation (69) for any $A_0 \in \mathcal{B}[1, b)$, which in turn implies Equation (66) for A given by Equation (67). But $\Pr(X \in A) = P(A)$ and $\Pr(X^n \in A) = P\!\left(A^{1/n}\right)$, so Equation (66) implies that $P(A) = P\!\left(A^{1/n}\right)$. As $A_0$ was an arbitrary element of $\mathcal{B}[1, b)$, the equation $P(A) = P\!\left(A^{1/n}\right)$ holds for A, an arbitrary element of $\mathcal{S}$, and the proof is complete.  □
Berger and Hill state the following theorem (Theorem 5.13, page 76). A probability measure P on $(\mathbb{R}^+, \mathcal{A})$ with $\mathcal{A} \supseteq \mathcal{S}$ has base-invariant significant digits if and only if, for some $q \in [0, 1]$,
$P(A) = q\, \delta_1(A) + (1 - q)\, \mathbb{B}(A) \quad \text{for every } A \in \mathcal{S}.$
(The meaning of the “Dirac measure” $\delta_1$ is given on page 22 of their book, and the meaning of the “Benford measure” $\mathbb{B}$ is given on page 32.)
In the light of Proposition 16, it can be seen that Berger and Hill’s Theorem 5.13 is equivalent to Proposition 14 given above.
I conclude this section with a personal opinion about Berger and Hill’s exposition. I think that the terminology “base-invariant” they chose for their concept is a misnomer. There is only one base (b) in the definition, and their concept of “base-invariant” significant digits tells us nothing about the Benford properties of alternative bases for a b-Benford random variable. The label therefore seems misleading, and I believe the concept really deserves a different name.

9. Conclusions and Prospect

Let Y be a u.d. mod 1 random variable with pdf g, let b > 1, and define $X \equiv b^Y$, so X is b-Benford. Without loss of generality we may assume that
$g(y) = H(y) - H(y - 1) \quad \text{for any } y \in \mathbb{R},$
where H is a seed function. Let c > 1. In principle, the machinery introduced in Section 6 allows one to investigate the dependence of the distribution of $\langle \log_c X \rangle$ on c. In practice, I’ve carried out this investigation only for seed functions of the first two types in the following list of classes of seed functions.
(1) Step functions that jump from 0 to 1 in a single step.
(2) Increasing functions that are absolutely continuous.
(3) Step functions that increase from 0 to 1 at a finite or countably infinite number of “points of jump.”
(4) Convex combinations of seed functions in classes (2) and (3).
(5) “Singular” distribution functions. These functions are increasing and continuous, but not absolutely continuous. The Cantor function is the best known example.
(6) Seed functions satisfying a condition I call “unit interval increasing.” Every increasing function is unit interval increasing, but not conversely. That is, a function H may be unit interval increasing, but not everywhere increasing. Several examples of such seed functions are given in [2].
My intuition suggests that seed functions of types (3) and (4) will offer no additional conceptual difficulties, though they will certainly complicate the algebra. I’ll leave the investigation of seed functions of classes (5) and (6) to the reader.
With X and c defined as above, let $\tilde{g}_c$ denote the pdf of $\langle \log_c X \rangle$. If X is c-Benford, and if $\tilde{g}_c$ is continuous or has only “jump” discontinuities, then
$\| \tilde{g}_c - 1 \|_\infty = 0.$
Hence, c is in the Benford spectrum $B_X$ if and only if Equation (70) is satisfied. For almost all random variables X, the Benford spectrum $B_X$ is empty. We might want to say that X is “effectively” c-Benford if
$\| \tilde{g}_c - 1 \|_\infty < \epsilon$
for some small number ε. If we define the “effective” Benford spectrum of X to be the set
$B_{X, \epsilon} \equiv \{\, c > 1 : \| \tilde{g}_c - 1 \|_\infty < \epsilon \,\},$
then $B_X \subseteq B_{X, \epsilon}$. In general, I suggest, the effective spectrum will be a much larger set than the spectrum.
The machinery described in Section 5 to carry out a “Benford analysis” helps us determine whether or not the criterion of Equation (71) is satisfied. In Section 7 I suggested that a Gauss-Benford random variable should be regarded as effectively c-Benford if the product ρσ is large enough. In [3] I suggested that a lognormal random variable, which is not b-Benford for any b, should be regarded as effectively c-Benford if $\Lambda_c \sigma$ is large enough.
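As a final illustration (my own sketch, not an implementation from the paper), the effective spectrum of a Gauss-Benford random variable can be estimated by bounding $\| \tilde{g}_c - 1 \|_\infty$ by the sum of the envelopes $A_n^*$ of Equation (43) and scanning over c; the parameter choices b = 10, σ = 0.25, and ε = 10⁻³ are arbitrary.

```python
import numpy as np

b, sigma, epsilon = 10.0, 0.25, 1e-3

def sup_deviation_bound(rho, sigma, N=20):
    """Upper bound on ||g_tilde_c - 1||_inf: sum of the Gauss-Benford envelopes A_n*."""
    n = np.arange(1, N + 1)
    A_star = 2.0/(np.pi*rho*n) * np.exp(-2*np.pi**2 * sigma**2 * rho**2 * n**2)
    return A_star.sum()

cs = np.linspace(1.05, 15.0, 400)
rhos = np.log(b) / np.log(cs)
bounds = np.array([sup_deviation_bound(r, sigma) for r in rhos])
effective = cs[bounds < epsilon]      # bases that are "effectively" Benford by this bound
print("largest c in the (estimated) effective spectrum:",
      round(float(effective.max()), 3) if effective.size else "none found")
```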
I leave further investigation of effectively Benford random variables to the reader.

Funding

This research received no external funding.

Acknowledgments

I’d like to thank Kenneth Ross for thoughtful comments on earlier drafts of this paper, and three anonymous referees for useful suggestions. I’d also like to thank William Davis and Don Lemons for their heroic efforts to convert my EXP document into an acceptable LaTeX form.

Conflicts of Interest

The author declares no conflict of interest.

Appendix A. A Small Table of Fourier Transforms

Feller [7] gives a table of characteristic functions of selected probability density functions. I’ve adapted his table to give the Fourier transforms of 8 of his 10 densities, and added a row for an additional pdf (the logistic).
Table A1. Fourier transforms of selected probability density functions.
No. | Name | Density $g(x)$ | Interval | Fourier Transform $\hat{g}(\xi)$
1 | $N(0, 1)$ | $(2\pi)^{-1/2} e^{-x^2/2}$ | $\mathbb{R}$ | $\exp\!\left(-2\pi^2 \xi^2\right)$
2 | $U(-a, a)$ | $1/2a$ | $(-a, a)$ | $\dfrac{\sin(2\pi a \xi)}{2\pi a \xi}$
3 | $U(0, a)$ | $1/a$ | $(0, a)$ | $\dfrac{1 - \exp(-2\pi i a \xi)}{2\pi i a \xi}$
4 | Triangular | $\dfrac{1}{a}\left(1 - \dfrac{|x|}{a}\right)$ | $|x| \le a$ | $\dfrac{1 - \cos(2\pi a \xi)}{2\pi^2 a^2 \xi^2}$
5 | Dual of 4 | $\dfrac{1 - \cos(2\pi a x)}{2 a \pi^2 x^2}$ | $\mathbb{R}$ | $\max\!\left(0,\; 1 - \dfrac{|\xi|}{a}\right)$
6 | $\Gamma(\alpha, \beta)$ | $\dfrac{1}{\Gamma(\alpha)\, \beta^\alpha}\, x^{\alpha - 1} e^{-x/\beta}$ | $x > 0$ | $\left(1 + 2\pi i \beta \xi\right)^{-\alpha}$
7 | Laplace(0, 1) | $\tfrac{1}{2}\, e^{-|x|}$ | $\mathbb{R}$ | $\dfrac{1}{1 + 4\pi^2 \xi^2}$
8 | Cauchy(0, 1) | $\dfrac{1}{\pi}\, \dfrac{1}{1 + x^2}$ | $\mathbb{R}$ | $e^{-2\pi |\xi|}$
9 | Logistic(0, 1) | $\left(e^{x/2} + e^{-x/2}\right)^{-2}$ | $\mathbb{R}$ | $\dfrac{2\pi^2 \xi}{\sinh\!\left(2\pi^2 \xi\right)}$

References

  1. Benford, F. The Law of Anomalous Numbers. Proc. Am. Philos. Soc. 1938, 78, 551–572.
  2. Benford, F.A. Construction of Benford Random Variables: Generators and Seed Functions. arXiv 2020, arXiv:1609.04852.
  3. Benford, F.A. Fourier Analysis and Benford Random Variables. arXiv 2020, arXiv:2006.07136.
  4. Berger, A.; Hill, T.P. An Introduction to Benford’s Law; Princeton University Press: Princeton, NJ, USA, 2015.
  5. Wójcik, M. Notes on Scale-Invariance and Base-Invariance for Benford’s Law. arXiv 2013, arXiv:1307.3620.
  6. Whittaker, J. On Scale-Invariant Distributions. SIAM J. Appl. Math. 1983, 43, 257–267.
  7. Feller, W. An Introduction to Probability Theory and Its Applications, 2nd ed.; John Wiley & Sons: New York, NY, USA, 1971; Volume II.
Figure 1. ρ as a function of c.
Table 1. Fourier transforms of selected even density functions with a scale parameter.
Name | $h_0(y)$ | $\hat{h}_0(\xi)$
$N(0, \sigma)$ | $\left(2\pi \sigma^2\right)^{-1/2} e^{-y^2 / 2\sigma^2}$ | $\exp\!\left(-2\pi^2 \sigma^2 \xi^2\right)$
Laplace(0, σ) | $\dfrac{1}{2\sigma}\, e^{-|y|/\sigma}$ | $\dfrac{1}{1 + 4\pi^2 \sigma^2 \xi^2}$
Cauchy(0, σ) | $\dfrac{1}{\pi \sigma}\left(1 + \dfrac{y^2}{\sigma^2}\right)^{-1}$ | $e^{-2\pi \sigma |\xi|}$
Logistic(0, σ) | $\dfrac{1}{\sigma}\left(e^{y/2\sigma} + e^{-y/2\sigma}\right)^{-2}$ | $\dfrac{2\pi^2 \sigma \xi}{\sinh\!\left(2\pi^2 \sigma \xi\right)}$
Note: Among these four distributions, σ is the standard deviation of the rescaled random variable only for the normal distribution $N(0, \sigma)$.