Benford’s law is nowadays extremely popular (see e.g. http://en.wikipedia.org/…). It is usually claimed that, for a given data set, changing units does not affect the distribution of the first digit. Thus, it should be related to scale-invariant distributions. Heuristically, scale (or unit) invariance means that the density of the rescaled variable $kX$ should be proportional to the density $f$ of $X$. But because densities integrate to 1, the proportionality coefficient has to be 1, and therefore $f$ should satisfy the functional equation $f(kx)=\frac{1}{k}f(x)$, for all $k>0$ and $x>0$. The solution of this functional equation is $f(x)=\frac{c}{x}$; I guess this can be proved easily by solving an ordinary differential equation.
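Indeed, differentiating the functional equation with respect to $k$ gives
$$\frac{\partial}{\partial k}f(kx)=x\,f'(kx)=-\frac{f(x)}{k^2},$$
and setting $k=1$ yields the ordinary differential equation $x\,f'(x)=-f(x)$, i.e. $(\log f(x))'=-1/x$, whose solutions are indeed of the form $f(x)=c/x$.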
Now, if $D$ denotes the first digit of $X$, in base 10 (restricting $X$ to $[1,10)$, i.e. looking at its mantissa), then
$$\mathbb{P}(D=d)=\frac{\int_d^{d+1}c\,dx/x}{\int_1^{10}c\,dx/x}=\frac{\log(d+1)-\log(d)}{\log(10)}=\log_{10}\left(1+\frac{1}{d}\right),\qquad d=1,\ldots,9,$$
which is the so-called Benford’s law. This distribution looks like this,
> (benford=log(1+1/(1:9))/log(10))
[1] 0.30103000 0.17609126 0.12493874 0.09691001 0.07918125
[6] 0.06694679 0.05799195 0.05115252 0.04575749
> names(benford)=1:9
> sum(benford)
[1] 1
> barplot(benford,col="white",ylim=c(-.045,.3))
> abline(h=0)
To compute the empirical distribution from a sample, use the following function
> firstdigit=function(x){
+ # if x >= 1, the first digit is simply the first character of x
+ if(x>=1){x=as.numeric(substr(as.character(x),1,1)); zero=FALSE}
+ if(x<1){zero=TRUE}
+ # if x < 1, multiply by 10 until a nonzero digit shows up
+ while(zero==TRUE){
+ x=x*10; zero=FALSE
+ if(trunc(x)==0){zero=TRUE}
+ }
+ return(trunc(x))
+ }
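For instance, on a couple of values (one above 1, one below),

> firstdigit(1234)
[1] 1
> firstdigit(0.025)
[1] 2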
and then, given a sample X of size 1,000,

> Xd=sapply(X,firstdigit)
> table(Xd)/1000
In Benford’s Law: An Empirical Investigation and a Novel Explanation, we can read that samples from lognormal distributions with a large enough variance should conform to Benford’s law.
It is not a mathematical article, so do not expect any formal proof in this paper. At least, we can run Monte Carlo simulations, and see what’s going on if we generate samples from a lognormal $LN(0,\sigma^2)$ distribution. For instance, with a unit variance ($\sigma=1$),
> set.seed(1)
> s=1
> X=rlnorm(n=1000,0,s)
> Xd=sapply(X,firstdigit)
> table(Xd)/1000
Xd
    1     2     3     4     5     6     7     8     9
0.288 0.172 0.121 0.086 0.075 0.072 0.073 0.053 0.060
> T=rbind(benford,-table(Xd)/1000)
> barplot(T,col=c("red","white"),ylim=c(-.045,.3))
> abline(h=0)
Clearly, it is not far from Benford’s law (on the graph, Benford’s frequencies are plotted upwards, in red, and the empirical ones downwards, in white). Perhaps a more formal test can be considered, for instance Pearson’s $\chi^2$ goodness-of-fit test.
> T=table(Xd)
> chisq.test(T,p=benford)

	Chi-squared test for given probabilities

data:  T
X-squared = 10.9976, df = 8, p-value = 0.2018
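Just to check, the test statistic can be recomputed by hand from the observed counts and the theoretical frequencies,

> sum((table(Xd)-1000*benford)^2/(1000*benford))
[1] 10.99755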
So yes, Benford’s law is admissible! Now, if we consider the case where $\sigma$ is smaller (say $\sigma=0.9$), it is a rather different story,

compared with the case where $\sigma$ is larger (say $\sigma=1.1$).
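Those two cases can be explored by simply rerunning the previous code with a different standard deviation, e.g.

> s=.9
> X=rlnorm(n=1000,0,s)
> Xd=sapply(X,firstdigit)
> T=rbind(benford,-table(Xd)/1000)
> barplot(T,col=c("red","white"),ylim=c(-.045,.3))
> abline(h=0)

and similarly with s=1.1.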
It is possible to generate several samples (always of the same size, here 1,000 observations), changing the variance parameter $\sigma$, and to compute the $p$-value of the test each time. There is one tricky part: when generating samples from lognormal distributions with a small variance, it is possible that some digits do not appear at all, and in that case, the test cannot be run directly. So we simply use here
> T=table(Xd)
> T=T[as.character(1:9)]
> T[is.na(T)]=0
> PVAL[i]=chisq.test(T,p=benford)$p.value
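Put differently, the whole experiment can be sketched with a double loop, storing the $p$-values in a matrix with one column per value of $\sigma$ (the grid for $\sigma$ below is a guess, with 5,000 replications, as mentioned below),

> sigma=seq(.5,1.5,by=.05)
> ns=5000
> PVAL=matrix(NA,ns,length(sigma))
> for(s in 1:length(sigma)){
+ for(i in 1:ns){
+ X=rlnorm(n=1000,0,sigma[s])
+ Xd=sapply(X,firstdigit)
+ T=table(Xd)
+ T=T[as.character(1:9)]
+ T[is.na(T)]=0
+ PVAL[i,s]=chisq.test(T,p=benford)$p.value
+ }
+ }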
Boxplots of the $p$-values of the test, for each value of $\sigma$, are the following,
When $\sigma$ is too small, it is clearly not Benford’s distribution: for half (or more) of our samples, the $p$-value is lower than 5%. On the other hand, when $\sigma$ is large (enough), Benford’s distribution is a good fit for the first digit of lognormal samples, since 95% of our samples have $p$-values higher than 5% (and the distribution of the $p$-values is almost uniform on the unit interval). Here is the proportion of samples where the $p$-value was lower than 5% (out of 5,000 generations for each value of $\sigma$),
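Given the PVAL matrix above, that proportion is simply the rejection rate at the 5% level, computed column by column,

> apply(PVAL<.05,2,mean)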
Note that it is also possible to compute the $p$-value of a Kolmogorov-Smirnov test, testing whether the $p$-values (for a given value of $\sigma$) have a uniform distribution,
> ks.test(PVAL[,s], "punif")$p.value
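which can also be done for all values of $\sigma$ at once,

> sapply(1:ncol(PVAL),function(s) ks.test(PVAL[,s],"punif")$p.value)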
Indeed, if $\sigma$ is larger than 1.15 (or around that value), it looks like Benford’s law is a suitable distribution for the first digit.