
Monte Carlo techniques to create counterfactuals

In the STT5100 course, last week, we saw how to use Monte Carlo simulations. The idea is that, in statistics, we observe a sample \{y_1,\cdots,y_n\}, and more generally, in econometrics, \{(y_1,\mathbf{x}_1),\cdots,(y_n,\mathbf{x}_n)\}. But let us get back to statistics (without covariates) to illustrate. We assume that the observations y_i are realizations of underlying random variables Y_i, and that the Y_i are i.i.d. random variables with (unknown) distribution F_{\theta}. Consider here some estimator \widehat{\theta} – which is just a function of our sample, \widehat{\theta}=h(y_1,\cdots,y_n). So \widehat{\theta} is a real-valued number. Then, in mathematical statistics, in order to derive properties of the estimator \widehat{\theta}, like a confidence interval, we must define \widehat{\theta}=h(Y_1,\cdots,Y_n), so that now \widehat{\theta} is a real-valued random variable. What is puzzling for students is that we use the same notation for both, and I have to agree, that is not very clever. So now, \widehat{\theta} is a random variable.

There are two strategies here. In classical statistics, we use probability theory to derive properties of \widehat{\theta} (the random variable): at least the first two moments, but if possible the distribution. An alternative is to go for computational statistics. We have only one sample, \{y_1,\cdots,y_n\}, and that's a pity. But maybe we can create another one, \{y_1^{(1)},\cdots,y_n^{(1)}\}, as realizations of F_{\theta}, and another one, \{y_1^{(2)},\cdots,y_n^{(2)}\}, another one, \{y_1^{(3)},\cdots,y_n^{(3)}\}, etc. From those counterfactuals, we can now get a collection of estimators, \widehat{\theta}^{(1)},\widehat{\theta}^{(2)}, \widehat{\theta}^{(3)}, etc. Instead of using mathematical tricks to calculate \mathbb{E}(\widehat{\theta}), we can simply compute \frac{1}{k}\sum_{s=1}^k\widehat{\theta}^{(s)}. That's what we saw last Friday.
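To make this concrete, here is a minimal sketch in R, under assumptions of my own (an exponential model with parameter \theta=2, samples of size n=100, and \widehat{\theta}=1/\overline{y} as estimator): generate k counterfactual samples from F_{\theta}, compute the estimator on each of them, and average,

set.seed(1)
n=100                      # sample size (assumed, for illustration)
theta=2                    # true parameter of an exponential model (assumed)
h=function(y) 1/mean(y)    # estimator of theta (maximum likelihood here)
k=1e4                      # number of counterfactual samples
theta_s=rep(NA,k)
for(s in 1:k){
  ys=rexp(n,rate=theta)    # one counterfactual sample from F_theta
  theta_s[s]=h(ys)         # the corresponding estimator
}
mean(theta_s)              # Monte Carlo approximation of E(theta hat)
var(theta_s)               # and of its variance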

I also mentioned briefly that looking at densities is lovely, but not very useful to assess goodness of fit, to test for normality for instance. In this post, I just wanted to illustrate this point. And actually, creating counterfactuals can be a good way to see it. Consider here the height of male students,

Davis=read.table(
  "http://socserv.socsci.mcmaster.ca/jfox/Books/Applied-Regression-2E/datasets/Davis.txt")
Davis[12,c(2,3)]=Davis[12,c(3,2)]   # observation 12 has height and weight swapped
X=Davis$height[Davis$sex=="M"]      # heights of the male students

We can visualize its distribution (density and cumulative distribution)

u=seq(155,205,by=.5)
par(mfrow=c(1,2))
hist(X,probability=TRUE,col=rgb(0,0,1,.3))   # histogram, on the density scale
lines(density(X),col="blue",lwd=2)           # kernel density estimator
lines(u,dnorm(u,178,6.5),col="black")        # N(178,6.5^2) density
Xs=sort(X)
n=length(X)
p=(1:n)/(n+1)
plot(Xs,p,type="s",col="blue")               # empirical cumulative distribution function
lines(u,pnorm(u,178,6.5),col="black")        # N(178,6.5^2) cdf

Since it looks like a normal distribution, we can add the density of a Gaussian distribution on the left, and its cdf on the right. Why not test it properly? To be a little bit more specific, I do not want to test whether it is a Gaussian distribution, but whether it is a \mathcal{N}(178,6.5^2). In order to see if this distribution is relevant, one can use Monte Carlo simulations to create counterfactuals,

hist(X,probability=TRUE,col=rgb(0,0,1,.3))
lines(density(X),col="blue",lwd=2)
Y=rnorm(n,178,6.5)                           # one counterfactual sample from N(178,6.5^2)
hist(Y,probability=TRUE,col=rgb(1,0,0,.3),add=TRUE)
lines(density(Y),col="red",lwd=2)
Ys=sort(Y)
plot(Xs,p,type="s",col="white",lwd=2,axes=FALSE,xlab="",ylab="",xlim=c(155,205))
polygon(c(Xs,rev(Ys)),c(p,rev(p)),col="yellow",border=NA)   # area between the two empirical cdfs
lines(Xs,p,type="s",col="blue",lwd=2)        # observed sample
lines(Ys,p,type="s",col="red",lwd=2)         # counterfactual sample

We can see on the left that it is hard to assess normality from the density (histogram, and also kernel based density estimator). One can hardly think of a valid distance between two densities. But if we look at the graph on the right, we can compare the empirical cumulative distribution function \widehat{F} obtained from \{y_1,\cdots,y_n\} (the blue curve) and some counterfactual, \widehat{F}^{(s)}, obtained from \{y_1^{(s)},\cdots,y_n^{(s)}\} generated from F_{\theta_0} – where \theta_0 is the value we want to test. As suggested above, we can compute the yellow area, as in the Cramér-von Mises test, or the Kolmogorov-Smirnov distance.

d=rep(NA,1e5)
for(s in 1:1e5){
  d[s]=ks.test(rnorm(n,178,6.5),"pnorm",178,6.5)$statistic   # KS distance for one counterfactual sample
}
ds=density(d)
plot(ds,xlab="",ylab="")
dks=ks.test(X,"pnorm",178,6.5)$statistic                     # KS distance for the observed sample
id=which(ds$x>dks)
polygon(c(ds$x[id],rev(ds$x[id])),c(ds$y[id],rep(0,length(id))),col=rgb(1,0,0,.4),border=NA)
abline(v=dks,col="red")

If we draw 100,000 counterfactual samples, we can visualize the distribution (here the density) of the distance used as a test statistic, \widehat{d}^{(1)}, \widehat{d}^{(2)}, etc., and compare it with the one observed on our sample, \widehat{d}. The proportion of samples for which the test statistic exceeds the one observed,

mean(d>dks)
[1] 0.78248

is the computational version of the p-value

ks.test(X,"pnorm",178,6.5)
 
	One-sample Kolmogorov-Smirnov test
 
data:  X
D = 0.068182, p-value = 0.8079
alternative hypothesis: two-sided
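The yellow area mentioned above corresponds to the Cramér-von Mises approach. Here is a minimal sketch of the same simulation idea with that statistic (the cvm function below is mine, using the classical W^2=\frac{1}{12n}+\sum_{i=1}^n\left(\frac{2i-1}{2n}-F_{\theta_0}(y_{i:n})\right)^2 form, and only 10,000 replications):

cvm=function(x,cdf=function(z) pnorm(z,178,6.5)){
  x=sort(x); n=length(x)
  1/(12*n)+sum(((2*(1:n)-1)/(2*n)-cdf(x))^2)   # Cramér-von Mises statistic W^2
}
dc=rep(NA,1e4)
for(s in 1:1e4) dc[s]=cvm(rnorm(n,178,6.5))    # distribution of W^2 under H0, by simulation
mean(dc>cvm(X))                                # Monte Carlo p-value for the observed sample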

I thought about all that a couple of days ago, since I got invited to a panel discussion on “coding”, and on why “coding” helped me as a professor. And this is precisely why I like coding: in statistics, you either manipulate abstract objects, like random variables, or you actually use a few lines of code to create counterfactuals, and generate fake samples, to quantify uncertainty. The latter is interesting, because it helps to visualize complex quantities. I do not claim that maths is useless, but coding is really nice, as a starting point, to understand what we are talking about (which can be very useful when there is a lot of confusion about notation).

Non-parametric tests and simulations

In the last statistics class, we presented goodness-of-fit tests. We illustrated these tests with the heights of individuals (already used to present distribution fitting and density estimation), corresponding to \boldsymbol{x}=\{x_1,\cdots,x_n\}.

> Davis=read.table(
+ "http://socserv.socsci.mcmaster.ca/jfox/Books/Applied-Regression-2E/datasets/Davis.txt")
> Davis[12,c(2,3)]=Davis[12,c(3,2)]
> X=Davis$height
> n=length(X)

We will denote the order statistics by (x_{i:n}), in the sense that

x_{1:n}\leq x_{2:n}\leq\cdots\leq x_{n-1:n}\leq x_{n:n}

Among the graphical tools, we saw the PP plot (probability-probability plot) and the QQ plot (quantile-quantile plot). The code to create a PP plot can be the following

> PP=function(Y,F=pnorm){
+   n=length(Y)
+   x=F(sort(Y))
+   y=seq(1/n/2,1-1/n/2,by=1/n)
+   return(data.frame(x=x,y=y))
+ }

which plots (up to a minor detail) the points

\left\{F_0(x_{i:n}),\frac{i}{n}\right\}

and the one for the QQ plot would be

> QQ=function(Y,Q=qnorm){
+   n=length(Y)
+   x=Q(seq(1/n/2,1-1/n/2,by=1/n))
+   y=sort(Y)
+   return(data.frame(x=x,y=y))
+ }

which plots (again, up to a minor detail) the points

\left\{F_0^{-1}\left(\frac{i}{n}\right),x_{i:n}\right\}

where F_0 is the distribution we want to test, in the sense that we test H_0:F=F_0 against the alternative hypothesis H_1:F\neq F_0.
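As a quick illustration of how the two functions can be used, here is a minimal sketch against a Gaussian distribution whose parameters are simply estimated by moments from the sample (an assumption made only for this sketch):

F0=function(x) pnorm(x,mean(X),sd(X))    # fitted cdf, used as the null distribution
Q0=function(p) qnorm(p,mean(X),sd(X))    # fitted quantile function
par(mfrow=c(1,2))
pp=PP(X,F0)
plot(pp$x,pp$y,xlab="theoretical probabilities",ylab="empirical probabilities")
abline(0,1,col="red")                    # points should lie close to the diagonal under H0
qq=QQ(X,Q0)
plot(qq$x,qq$y,xlab="theoretical quantiles",ylab="empirical quantiles")
abline(0,1,col="red")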


Maximum Likelihood versus Goodness of Fit

Thursday, I got an interesting question from a colleague of mine (JP). I mean, the way I understood the question turned out to be a nice puzzle (but I have to confess I might have misunderstood). The question is the following: consider an i.i.d. sample \{X_1,\cdots,X_n\} of continuous variables. We would like to choose between two (parametric) families for the distribution, \mathcal{F}=\{F_{\boldsymbol \alpha};\boldsymbol \alpha\in\mathcal{A}\} and \mathcal{G}=\{G_{\boldsymbol \beta};\boldsymbol \beta\in\mathcal{B}\}. If we use maximum likelihood techniques, we get two estimators, one for each family, \widehat{\boldsymbol \alpha} and \widehat{\boldsymbol \beta}. Clearly, F_{\widehat{\boldsymbol \alpha}}(\cdot) is a much better fit than G_{\widehat{\boldsymbol \beta}}(\cdot), in the sense of a standard goodness-of-fit test (e.g. Kolmogorov-Smirnov, since the sample is assumed to be obtained from a continuous variable). Does that mean that family \mathcal{F} is (somehow) better than family \mathcal{G}?

This is my interpretation of the question, and I found it amusing. So I will try to show (using simulated samples) that some odd situations can easily be obtained.

Consider a sample from a mixture of log-normal distributions,

>  set.seed(228)
>  X=exp(c(rnorm(50,1,1),rnorm(50,2,1.2)))

Consider two standard families for positive random variables: a Gamma distribution and a lognormal distribution.

> library(MASS)
> ab=fitdistr(X,"gamma")
> ms=fitdistr(X,"lognormal")

If we want to visualize those two distributions, let us use

> u=seq(0,max(X),by=.1)    # plotting grid
> vab=pgamma(u,ab$estimate[1],ab$estimate[2])
> vms=plnorm(u,ms$estimate[1],ms$estimate[2])
> plot(ecdf(X))
> lines(u,vab,col="red")   # fitted Gamma cdf
> lines(u,vms,col="blue")  # fitted lognormal cdf

Here, we get the empirical cdf, with the fitted Gamma distribution (in red) and the fitted lognormal distribution (in blue).

What else can we say? Actually, we can also compute the Kolmogorov-Smirnov statistic,

D_n=\sup_x |\widehat{F}_n(x)-F_\star(x)|

where

\widehat{F}_n(x)=\frac{1}{n}\sum_{i=1}^n \boldsymbol{1}_{X_i\leq x}
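As a side note, the statistic can be computed by hand from this definition; here is a minimal sketch, reusing the fitted lognormal parameters stored in ms above (the supremum is attained just before or at one of the observations, hence the two terms):

Xs=sort(X)
n=length(X)
F0=plnorm(Xs,ms$estimate[1],ms$estimate[2])   # fitted cdf at the sorted observations
max(pmax((1:n)/n-F0,F0-(0:(n-1))/n))          # D_n, the sup distance between ecdf and fitted cdf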

This can be done using

> ks.test(X,"plnorm",ms$estimate[1],ms$estimate[2])

One-sample Kolmogorov-Smirnov test

data:  X
D = 0.0693, p-value = 0.7231
alternative hypothesis: two-sided

> ks.test(X,"pgamma",ab$estimate[1],ab$estimate[2])

One-sample Kolmogorov-Smirnov test

data:  X
D = 0.148, p-value = 0.02507
alternative hypothesis: two-sided

From a theoretical point of view, we should not look at the p-values, since the null distribution is based on a fixed distribution, not a fitted one (see the Lilliefors test for normal samples). But still. The Gamma distribution seems to be very far away from the true distribution: the statistic is twice the one we get with the lognormal distribution, and one p-value is 72%, while the other one is 2.5%. Here, we should prefer the lognormal distribution to the Gamma one. But we did consider only one distribution in each family. Does that mean that we cannot find one Gamma distribution that will be better than all possible lognormal distributions? Better, for instance, according to the Kolmogorov-Smirnov statistic…

Well, it is possible to use another strategy to find appropriate parameters: we can actually minimize this statistic. Define

> ks1=function(ms) {m=ms[1];s=ms[2];ks.test(X,"plnorm",m,s)$statistic}
> ks2=function(ab) {a=ab[1];b=ab[2];ks.test(X,"pgamma",a,b)$statistic}

and compute

> n1=nlm(ks1,c(ms$estimate[1],ms$estimate[2]))
> n1
$minimum
[1] 0.05252692

$estimate
[1] 1.547437 1.121864
> n2=nlm(ks2,c(ab$estimate[1],ab$estimate[2]))
> n2
$minimum
[1] 0.04737725

$estimate
[1] 1.1449041 0.167041

So here, it is possible to find a distribution much closer to the empirical sample, within the Gamma family actually.

>  vab=pgamma(u,n2$estimate[1],n2$estimate[2])
>  vms=plnorm(u,n1$estimate[1],n1$estimate[2])
>  lines(u,vab,col="red",lwd=2)
>  lines(u,vms,col="blue",lwd=2)

What would be the point here? Maybe just the idea that the maximum likelihood estimator is only one estimator among many. And while it has interesting asymptotic properties, on small samples it might not be the best estimator to consider…

And to be completely honest, I have been cheating here… I mean, not really cheating (not more than any researcher using a statistical test to validate the findings), but I did fix the seed of the random number generator. Actually, such an example does not occur that frequently: out of 1,000 samples, I got this odd conclusion almost 15 times. And the smaller the sample, the more likely we are to observe that story, where the maximum likelihood estimator can be far away from the best fit. Here is the proportion of opposite conclusions, as a function of the sample size,

> SIM=function(ns=1000,n=100){
+ t=0
+ for(s in 1:ns){
+  set.seed(s)
+  X=exp(c(rnorm(n/2,1,1),rnorm(n/2,2,1.2)))
+  ks1=function(ms) {m=ms[1];s=ms[2];ks.test(X,"plnorm",m,s)$statistic}
+  ks2=function(ab) {a=ab[1];b=ab[2];ks.test(X,"pgamma",a,b)$statistic}
+  library(MASS)
+  ab=fitdistr(X,"gamma")
+  ms=fitdistr(X,"lognormal")
+  n1=nlm(ks1,c(ms$estimate[1],ms$estimate[2]))
+  n2=nlm(ks2,c(ab$estimate[1],ab$estimate[2]))
+  # the product is non-positive when ML and minimum-KS rank the two families differently
+  if( (ks.test(X,"plnorm",ms$estimate[1],ms$estimate[2])$statistic-
+  ks.test(X,"pgamma",ab$estimate[1],ab$estimate[2])$statistic)
+ *(n1$minimum-n2$minimum)<=0 ) t=t+1
+ }
+ return(t/ns)}

> VM=rep(NA,20)
> VS=seq(10,200,by=10)
> for(i in 1:20){VM[i]=SIM(n=VS[i],ns=1000)}
> plot(VS,VM,type="p")

So to provide a more complete answer to JP's question: with a very large sample, I guess that your intuition should be valid… but clearly not on a small sample.