
On Wigner’s law (and the semi-circle)

There is something that I love about mathematics: sometimes, you discover, by chance, a law. It has always been there, it may be well known by people specialized in some given field, but you did not know it. And then you discover it, and you start wondering how come you never heard about it before… I experienced that feeling this evening, while working on the syllabus for my course on copulas and extreme values. I discovered the so-called Wigner's Semicircle Law (see e.g. Fan Zhang's notes, or Fraydoun Rezakhanlou's notes on that topic). Consider some n×n random matrix M, with n large (say n=100), whose entries are centered random variables, for instance taking values +1 and −1 with equal probability. Then the eigenvalues can be visualized as follows

n=100
M=matrix(sample(c(-1,1),size=n*n,replace=TRUE),n,n)  # n x n matrix with +1/-1 entries
E=eigen(M)$values                                    # (complex) eigenvalues
plot(E,xlim=c(-11,11),ylim=c(-11,11))                # real part vs imaginary part

Consider the symmetric matrix obtained from that matrix, namely (M+Mᵀ)/2, and more precisely, let us look at its eigenvalues,

E=eigen(.5*(M+t(M)))$values

Then the distribution of those eigenvalues, rescaled by √(2n), is the so-called semicircle distribution

CL=colorRampPalette(c("lightyellow","darkblue"))(6)  # CL was not defined in the post; any palette of 6+ colours will do
hist(E/sqrt(2*n),probability=TRUE,col=CL[4],xlab="",ylab="",
main="",border="white",xlim=c(-1.2,1.2),ylim=c(0,.65))
u=seq(-1,1,by=.01)
v=sqrt(1-u^2)*2/pi
lines(u,v,col=CL[6],lwd=2)
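For the record (this remark is mine, not in the original post), the curve overlaid on the histogram is the semicircle density. With the scaling used here, where the off-diagonal entries of (M+Mᵀ)/2 have variance 1/2 and the eigenvalues are divided by √(2n), the limiting density is

\[
f(x)=\frac{2}{\pi}\sqrt{1-x^2},\qquad -1\le x\le 1,
\]

which is exactly the line v=sqrt(1-u^2)*2/pi drawn above.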

Now, if we consider a Gaussian N(0,1) distribution for the entries, instead of the ±1 one, we get exactly the same

M=matrix(rnorm(n*n),n,n)
E=eigen(M)$values
plot(E,xlim=c(-11,11),ylim=c(-11,11))
E=eigen(.5*(M+t(M)))$values
hist(E/sqrt(2*n),probability=TRUE,col=CL[4],xlab="",ylab="",
main="",border="white",xlim=c(-1.2,1.2),ylim=c(0,.65))
u=seq(-1,1,by=.01)
v=sqrt(1-u^2)*2/pi
lines(u,v,col=CL[6],lwd=2)
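Since this is an asymptotic statement, the fit should get better as n grows. Here is a quick sketch of my own, redoing the Gaussian experiment with a larger matrix (the choice n2=1000, the names n2, M2, E2 and the plotting options are arbitrary):

n2=1000
M2=matrix(rnorm(n2*n2),n2,n2)
E2=eigen(.5*(M2+t(M2)),symmetric=TRUE,only.values=TRUE)$values  # eigenvalues of the symmetric part
hist(E2/sqrt(2*n2),probability=TRUE,breaks=50,col="grey",border="white",
xlab="",ylab="",main="",xlim=c(-1.2,1.2),ylim=c(0,.65))
u=seq(-1,1,by=.01)
lines(u,sqrt(1-u^2)*2/pi,lwd=2)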

Actually, it is a very general result; see, for instance, the second chapter of An Introduction to Random Matrices by Greg Anderson, Alice Guionnet and Ofer Zeitouni. If the entries of the random matrix are independent, centered, with a symmetric distribution and finite higher-order moments, then this property holds. That's awesome, isn't it? But if the distribution has tails that are too heavy, the property is no longer valid. For instance, if we consider a random matrix whose entries have a Student t distribution, we get something different…

M=matrix(rt(n*n,df=2.1),n,n)
M=M/sd(M)
E=eigen(M)$values
plot(E,xlim=c(-11,11),ylim=c(-11,11))
E=eigen(.5*(M+t(M)))$values
hist(E/sqrt(2*n),probability=TRUE,col=CL[4],xlab="",ylab="",
main="",border="white",xlim=c(-1.2,1.2),ylim=c(0,.65))
u=seq(-1,1,by=.01)
v=sqrt(1-u^2)*2/pi
lines(u,v,col=CL[6],lwd=2)

(here, I normalize by the standard deviation to get something comparable with the previous graphs, where the entries were centered with unit variance)
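For reference (a small addition of mine), the Student t distribution with 2.1 degrees of freedom does have a finite standard deviation, namely sqrt(df/(df-2)); the empirical sd(M) used above is a noisy estimate of it (noisy because the fourth moment is infinite):

sqrt(2.1/(2.1-2))   # theoretical standard deviation of a Student t with df=2.1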

and if we consider a distribution with infinite variance, say a Student t distribution with 1.75 degrees of freedom,

M=matrix(rt(n*n,df=1.75),n,n)
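To complete the recipe, here is a sketch of my own mirroring the blocks above (whether the original graph used the same rescaling by sd(M) is an assumption on my part):

M=M/sd(M)                 # assumption: same rescaling as in the df=2.1 case
E=eigen(.5*(M+t(M)))$values
hist(E/sqrt(2*n),probability=TRUE,breaks=50,col="grey",border="white",
xlab="",ylab="",main="")
u=seq(-1,1,by=.01)
lines(u,sqrt(1-u^2)*2/pi,lwd=2)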

this time, we are outside the assumptions of the theorem, and the semicircle law no longer holds.

I guess I will get back to that property in my course!