A Few Days in Rennes

I left Montréal last night, just after class…

… and I will be passing through Rennes for a few days, for the whole week. On the agenda: four days of work with co-authors, place Hoche, to finish off papers that have been dragging on for (far) too long. I promise I will write about them again as soon as the documents are online! In the meantime, I get to rediscover the city for a few days, and I intend to make the most of it!

Statistical Interests in Large Cities

I always thought that there were some kinds of schools in statistics, areas (not to say universities or laboratories) where people share common interests in terms of statistical methodology. Like people with a strong interest in extreme values, or in Lévy processes. I wanted to check this point, so I extracted information about articles published in about 35 journals in statistics, probability and econometrics. I got all the information from files extracted from http://scopus.com/

> setwd("/home/arthur/Documents/scopus/")
> L=list.files()
> z=NULL
> for(i in 1:length(L)){
+ B=read.csv(L[i])
+ z=c(z,as.character(B$Source.title))
+ }

Here is the list of the publications I have used

> Z=sort(table(z),decreasing=TRUE)
> Z[1:34]
                                 Computational Statistics and Data Analysis 
                                                                       4000 
                                           Journal of Multivariate Analysis 
                                                                       4000 
                                                         Econometric Theory 
                                                                       2631 
                                              Annals of Applied Probability 
                                                                       2051 
                                                             Bioinformatics 
                                                                       2000 
                                                                 Biometrika 
                                                                       2000 
                                                    Journal of Econometrics 
                                                                       2000 
                              Journal of Statistical Planning and Inference 
                                                                       2000 
                            Journal of the American Statistical Association 
                                                                       2000 
                                                        Operations Research 
                                                                       2000 
                                                        Pattern Recognition 
                                                                       2000 
                                      Probability Theory and Related Fields 
                                                                       2000 
                                                          Signal Processing 
                                                                       2000 
                                             Journal of Applied Probability 
                                                                       1999 
                                Stochastic Processes and their Applications 
                                                                       1999 
                         Annals of the Institute of Statistical Mathematics 
                                                                       1985 
                                                       Annals of Statistics 
                                                                       1797 
                                                              Technometrics 
                                                                       1446 
                                       Journal of Machine Learning Research 
                                                                       1441 
                                                              Biostatistics 
                                                                       1120 
                                         Statistics and Probability Letters 
                                                                       1062 
                                                      Annals of Probability 
                                                                       1054 
                                                   Statistics and Computing 
                                                                        927 
                                            Advances in Applied Probability 
                                                                        895 
                                        Journal of Nonparametric Statistics 
                                                                        836 
                                                   Computational Statistics 
                                                                        813 
                                            Journal of Time Series Analysis 
                                                                        811 
                          Journal of Computational and Graphical Statistics 
                                                                        802 
     Journal of the Royal Statistical Society. Series C: Applied Statistics 
                                                                        794 
Journal of the Royal Statistical Society. Series B: Statistical Methodology 
                                                                        793 
                                                                 Biometrics 
                                                                        784 
                                                           Machine Learning 
                                                                        559 
                                                  SIAM Journal on Computing 
                                                                        433 
                                     International Journal of Biostatistics 
                                                                        368

The first problem is that it is difficult to extract the universities and locations of contributors. Here is what we have in the dataset:

> B$Authors.with.affiliations[1]
[1] Mischler, S., CEREMADE, UMR CNRS 7534, Université Paris-Dauphine, Place du 
Maréchal de Lattre de Tassigny, Paris Cedex 16, 75775, France; Mouhot, C., DPMMS,
Centre for Mathematical Sciences, University of Cambridge, Wilberforce Road, Cambridge, 
CB3 0WA, United Kingdom; Wennberg, B., Department of Mathematical Sciences, Chalmers 
University of Technology, Göteborg, Sweden, Department of Mathematical Sciences, 
University of Gothenburg, Göteborg, 41296, Sweden

The first step was to split that whole string on the commas,

> setwd("/home/arthur/Documents/scopus/")
> L=list.files()
> v=NULL
> for(i in 1:length(L)){
+ B=read.csv(L[i])
+ A=B$Authors.with.affiliations
+ for(j in 1:length(A)){
+ x1=as.character(A[j])
+ x2=strsplit(x1,",")
+ v=c(v,x2[[1]])}
+ }

I now have a very long vector, which contains a lot of things!

> V=sort(table(v),decreasing=TRUE)
> names(V)[1:40]
 [1] " United States"                           
 [2] " Department of Statistics"                
 [3] " Department of Mathematics"               
 [4] " M."                                      
 [5] " J."                                      
 [6] " A."                                      
 [7] " S."                                      
 [8] " United Kingdom"                          
 [9] " France"                                  
[10] " D."                                      
[11] " P."                                      
[12] " Y."                                      
[13] " R."                                      
[14] " China"                                   
[15] " H."                                      
[16] " Germany"                                 
[17] " Department of Economics"                 
[18] " C."                                      
[19] " G."                                      
[20] " L."                                      
[21] " Canada"                                  
[22] " T."                                      
[23] " University of California"                
[24] " Department of Biostatistics"             
[25] " F."                                      
[26] " B."                                      
[27] " Department of Mathematics and Statistics"
[28] " E."                                      
[29] " K."                                      
[30] " N."                                      
[31] " Department of Computer Science"          
[32] " Japan"                                   
[33] " Australia"                               
[34] " X."                                      
[35] " Hong Kong"                               
[36] " Italy"                                   
[37] " W."                                      
[38] " Spain"

 

A lot of useless information, for sure, but also some more valuable entries, like university names,

> names(V)[c(23,50,58,59,61,66,67,72,84,87,89)]
 [1] " University of California"     " Stanford University"         
 [3] " Chapel Hill"                  " University of Washington"    
 [5] " Stanford"                     " University of Michigan"      
 [7] " Carnegie Mellon University"   " Columbia University"         
 [9] " Cornell University"           " University of North Carolina"
[11] " Duke University"

or cities,

> names(V)[c(35,40,41,44,45,47,51,53,54,55,56,62,64,65,
+ 70,71,82,92,97)]
 [1] " Hong Kong"    " New York"     " Berkeley"     " Cambridge"   
 [5] " Boston"       " Seattle"      " London"       " Pittsburgh"  
 [9] " Los Angeles"  " Singapore"    " Beijing"      " Philadelphia"
[13] " Ann Arbor"    " Atlanta"      " Toronto"      " Baltimore"   
[17] " Chicago"      " San Diego"    " Tokyo"

I decided to focus on 90 locations. Each time a string matches the name of one of my 90 cities, I keep it. So if there were a Prof. Ann Arbor, that person would be counted as a city. Here is the graph of all locations, with the number of “articles”. Or contributors, rather: if four people in San Francisco published an article together, the article appears four times in my dataset. I did spend some time on Cambridge, and I decided to move Cambridge, MA to Boston, MA, just for convenience.
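
The selection of the city strings themselves is not shown below (the index vector city appears in the next block without being defined). A minimal sketch of how it could be built, assuming one simply matches the extracted strings against the city names in maps::world.cities:

> library(maps)
> data(world.cities)
> # indices of the entries of V whose (trimmed) name matches a known city name
> city=which(trimws(names(V))%in%world.cities$name)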

> require("geosphere")
> require("maps")
> data(world.cities)
> data(us.cities)
> data(canada.cities)
> # findLatLon() is an external geocoding helper (not defined in this post),
> # returning a list whose $latlon component gives the coordinates of a place name
> LOCALIZE=Vectorize(function(v){z=findLatLon(v)$latlon;if(is.na(z)){z=c(NA,NA)};return(z)})
> CITIES=names(V)[city]
> NCITIES=substr(CITIES,2,nchar(CITIES))
> NCITIES[substr(NCITIES,1,5)=="Paris"]="Paris"
> NCITIES=unique(NCITIES)
> LC=matrix(unlist(LOCALIZE(NCITIES)),nrow=2)
> BASELOC=data.frame(CITY=NCITIES,LAT=LC[2,],LON=LC[1,])

I did spend some time on some cities, such as Paris or London, where a zip code was sometimes attached to the city name. I also had to fix a few problems… But after a few minutes, I was able to locate those cities.

Then, I wanted to extract information about all the publications. Keywords would be interesting, but over 266,567 “publications”, they are hard to use (sometimes the field is not filled in, sometimes it is extremely general, or extremely specialized). So I decided to extract words from the titles of the contributions.

> VCITY=NULL
> VKW=NULL
> VY=NULL
> VJ=NULL
> VA=NULL
> VW=NULL
> art=0
> for(i in 1:length(L)){
+ B=read.csv(L[i])
+ A=B$Authors.with.affiliations
+ for(j in 1:length(A)){
+ art=art+1
+ x1=as.character(A[j])
+ x2=strsplit(x1,",")
+ listu=which(x2[[1]]%in%CITIES)
+ if(length(listu)>0){
+ C=tolower(paste(" ",as.character(B[j,"Title"]),sep=""))
+ x3=strsplit(C," ")[[1]]
+ kx3=which(!x3%in%c("a","the","of","an","in","",
+ "for","and","with","on","to","using","from","under"))
+ x3=x3[kx3]
+ J=as.character(B[j,"Source.title"])
+ Y=B[j,"Year"]
+ n1=length(listu)
+ n2=length(x3)
+ VCITY=c(VCITY,rep(x2[[1]][listu],each=n2))
+ VKW=c(VKW,rep(x3,n1))
+ VY=c(VY,rep(Y,n1*n2))
+ VJ=c(VJ,rep(J,n1*n2))
+ VA=c(VA,rep(art,n1*n2))
+ VW=c(VW,rep(1/n2,n1*n2))
+ }}}
> BASEUNIV=data.frame(CITY=VCITY,KEYW=VKW,YEAR=VY,JOURNAL=VJ,INDICE=VA,W=VW)

Here, I get a huge dataset: one line corresponds to one city and one “word”. Now, let us select one word, and plot how important that word is in each city,

> Figure=function(keyword="bayesian"){
+ SBASEUNIV=BASEUNIV[BASEUNIV$KEYW==keyword,]
+ SB2=tapply(SBASEUNIV$W,SBASEUNIV$CITY,sum)
+ D=data.frame(CITY=names(SB2),CT=as.vector(SB2))
+ BASE=merge(BASELOC,D)
+ library(maps)
+ library(RColorBrewer)
+ CL=brewer.pal(6, "RdBu")
+ # SB (not defined in this post) is presumably the total weight per city,
+ # over all keywords, e.g. SB=tapply(BASEUNIV$W,BASEUNIV$CITY,sum)
+ Y=SB2/SB*sum(SB,na.rm=TRUE)/sum(SB2,na.rm=TRUE)
+ X=cut(Y,breaks=c(0,.5,.75,1,1.333,2,10000))
+ levels(X)=1:6
+ map("world")
+ points(BASE$LON,BASE$LAT,pch=1,col=CL[as.numeric(X)],
+ cex=sqrt(Y*20),lwd=4)
+ }
In the code above, we compare with the independent case (as if cities and keywords were independent), since we normalize using

SB2/SB*sum(SB,na.rm=TRUE)/sum(SB2,na.rm=TRUE)
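
In other words (this is just a restatement of the line above, with hypothetical notation: n_{c,w} for the weighted number of publications in city c whose title contains the word w), the quantity plotted is

\[ Y_c = \frac{n_{c,w}/n_{c,\cdot}}{n_{\cdot,w}/n_{\cdot,\cdot}} \]

which equals 1 when the city and the keyword are independent, is above 1 when the word is over-represented in that city, and below 1 when it is under-represented.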

For Bayesian statistics (publications with the word bayesian in the title),

For nonparametric statistics (publication with the word nonparametric in the title)

For stochastic processes (publication with the word processes in the title)

(the problem here is that we cannot visualize the red circles: if, in a city, no one published on a given topic, the circle would be strongly red, but tiny, or even of size zero… so we won’t see it). I decided to keep the top 250 words that appeared in titles, and I removed standard common words, such as it, the, of, etc.

> listewords=names(sort(table(BASEUNIV$KEYW),decreasing=TRUE)[1:250])
> listewords=listewords[-c(1,2,3,4,7,15,24,42,129)]
> idx=which(BASEUNIV$KEYW%in%listewords)
> T=table(as.character(BASEUNIV$KEYW[idx]),BASEUNIV$CITY[idx])
> MATRICE=as.matrix(T)

I had a nice contingency table, with 90 cities, versus 200 words.

> library("FactoMineR")
> res.pca = PCA(t(MATRICE), scale.unit=TRUE, ncp=5, 
+ graph=FALSE)
> plot.PCA(res.pca, axes=c(1, 2), choix="ind")

Principal component analysis was disappointing,

So I decided to extract, per city, the largest contributions to the chi-square distance

> K2=chisq.test(MATRICE)
> M2=K2$expected
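
The post does not show how those largest contributions were extracted; a possible sketch, using the Pearson residuals returned by chisq.test (the city "San Francisco" is just for illustration, matched by name since the exact column labels depend on the data):

> contrib=K2$residuals                  # (observed-expected)/sqrt(expected)
> sf=grep("San Francisco",colnames(contrib))
> head(sort(contrib[,sf]),10)           # words used much less than expected there
> tail(sort(contrib[,sf]),10)           # words used much more than expected there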

On the graph below, the green level is the theoretical count of each word, under some independence assumption. The dark line is the observed one. For instance, in San Francisco, on top, we have words that were not used much (e.g. processes: given the total number of publications, it would make sense to have 6 or 7 publications with the word processes in the title, but there were actually 0), and below, words that were used intensively compared with the other cities (such as method and structure; the latter was expected two or three times, but appeared in 25 publications),

In Boston, MA,  we got

In New York City, NY

In Paris (France),

But to be honest, I was disappointed. I mean, yes, I can see on the previous graph, for instance, that there are a lot of people working on stochastic processes, with the words Brownian and Markov. But in most cases, I can hardly come up with an interpretation…

I tried a final graph, on interconnections between authors. The first point is that it is common to have joint publications with colleagues in the same city. The larger the point, the more joint papers,

But we can also add cross publications: the thinner the line, the fewer joint publications between two places,

We can see that I missed, in the first part, the Cambridge-Boston distinction, since Cambridge should now stand for Cambridge, UK. But the line is clearly too thick to be explained only by collaborations between Cambridge, UK, and Boston, MA. Still, a lot of the lines can be explained, such as Hong Kong and Shanghai, or Mexico and Guanajuato.
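
No code was given for this last map; here is a rough sketch of how such a figure could be drawn, assuming a (hypothetical) symmetric matrix CROSS of joint-publication counts between pairs of cities, with coordinates taken from BASELOC and great-circle arcs from the geosphere package:

> # CROSS is a hypothetical matrix of joint-publication counts between cities,
> # with rows/columns in the same order as BASELOC
> library(maps)
> library(geosphere)
> map("world")
> for(i in 1:(nrow(BASELOC)-1)){
+ for(j in (i+1):nrow(BASELOC)){
+ if(!is.na(CROSS[i,j]) && CROSS[i,j]>0){
+ arc=gcIntermediate(c(BASELOC$LON[i],BASELOC$LAT[i]),
+ c(BASELOC$LON[j],BASELOC$LAT[j]),n=50,addStartEnd=TRUE)
+ lines(arc,lwd=.5+3*CROSS[i,j]/max(CROSS,na.rm=TRUE),col="red")
+ }}}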

If someone has better ideas to properly import the locations (or the affiliations, it might be fun to focus on universities) and perhaps the abstracts (rather than just the titles), I’d be glad to try the same study on economics journals…

Bias and MLE

Before leaving the office this evening, JP decided to knock on my door to ask me a “quick and very basic question” (as he put it). This is JP’s strategy, and he knows it works. His question was – more or less – what do we know about the bias of maximum likelihood estimation when we have a small sample from a Gamma distribution? He was surprised by some results he got. If I wanted to be naughty, too, I would say that he was surprised to see how long his student spent coding that in SAS. So he wanted to challenge me, and see how fast I could give him a valuable answer. Given the fact that I had to leave early because my elder son had a fencing competition, I tried to write a simple code to “visualize” the bias of the (first) parameter of a Gamma distribution, with MLE.

Before showing the graph, I wanted to add that I hate one thing about mathematical statistics courses: we learn nothing interesting there. I mean, we see nice mathematical concepts, but after such a class, you can hardly say anything when you see your first dataset. Like with real data. For instance, such courses usually emphasize asymptotic results, using limit theorems. When you take the course, you learn a lot of things about maximum likelihood, for instance. You can compute the asymptotic variance and derive asymptotic confidence intervals. But are those results relevant when you have 50 observations? Is it possible, with 50 observations, to have a bias of the same order of magnitude as the parameter?

As usual, one possible answer is “if you don’t have a lot of observations, be Bayesian!“. Maybe. Someday. What I tried, here, is to run simulations to see how MLE estimators behave. Given an i.i.d. sample \(X_1,\dots,X_n\) from a \(\Gamma(\alpha,\beta)\) distribution, let \(\hat\alpha_n\) and \(\hat\beta_n\) denote the maximum likelihood estimators of the two parameters.

library(fitdistrplus)
# ML estimates of the two Gamma parameters (shape, rate) for a sample x
maxl=function(x) fitdist(x,"gamma",method="mle")$estimate
# sample sizes, log-spaced between 5 and 200
VK=floor(exp(seq(log(5),log(200),length=25)))
V=NULL
for(k in 1:length(VK)){
n=VK[k]
N=5000
m=matrix(rgamma(n*N,1,2),n,N)
ss=apply(m,2,maxl)
V=rbind(V,ss)}
y=as.vector(V[seq(1,length(VK)*2,by=2),])
x=rep(c(VK),ncol(V))
boxplot(y~x,
xlab="Nb. observations (log scale)",ylim=c(0,6))
abline(h=1,lty=2,col="blue")

Here, in our simulations, the shape parameter was 1. On the graph, we have boxplots of \(\hat\alpha_n\) obtained in the various scenarios. We clearly see the positive bias of the MLE, and the bias decreases with \(n\) (as expected, since the MLE is asymptotically unbiased). We can also visualize the distribution of \(\hat\alpha_n\) (instead of boxplots)

It is also possible to derive analytical results. David Cox and Joyce Snell did the maths in 1968, and actually obtained analytical expressions for the biases. More recently, David Giles (a.k.a. @deagiles on Twitter) and Hui Feng looked at the behavior of bias-adjusted estimators. For instance, one can get that

\[ \mathbb{E}[\hat\alpha_n]-\alpha \approx \frac{\alpha\,\psi'(\alpha)-\alpha^2\,\psi''(\alpha)-2}{2n\,[\alpha\,\psi'(\alpha)-1]^2} \]

where

\[ \psi(\alpha)=\frac{\Gamma'(\alpha)}{\Gamma(\alpha)} \]

is the so-called digamma function, and where \(\psi'\) and \(\psi''\) are its first and second order derivatives, see e.g. Bowman and Shenton (1982) – yes, there is a whole book on the topic of estimating the parameters of the Gamma distribution…

Observe that the bias of \(\hat\alpha_n\) does not depend on \(\beta\), while the bias of \(\hat\beta_n\) will depend on \(\alpha\).

d1digamma=function(x,h=1e-7)
return((digamma(x+h)-digamma(x-h))/(2*h))
d2digamma=function(x,h=1e-7)
return((d1digamma(x+h)-d1digamma(x-h))/(2*h))
biasalpha=function(a,n){
return((a*d1digamma(a)-a^2*d2digamma(a)
-2)/(2*n*(a*d1digamma(a)-1)^2))
}
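
Note that base R actually provides these derivatives directly, through trigamma() and psigamma(); a (presumably more accurate) alternative to the numerical derivatives above could be:

# same first-order bias, using the built-in derivatives of the digamma function
biasalpha2=function(a,n){
(a*trigamma(a)-a^2*psigamma(a,deriv=2)-2)/(2*n*(a*trigamma(a)-1)^2)
}
biasalpha2(a=1,n=50)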

The way I compute it is probably not optimal, so if you want to improve it, please go ahead! If we compare the average bias obtained in our simulations with the one obtained from this first-order approximation, we get

m=apply(V,1,mean)
plot(VK,m[seq(1,length(VK)*2,by=2)],type="b",col="red",xlab="Nb. observations (log scale)",log="x")
abline(h=1,lty=2,col="blue")
B=Vectorize(function(n) biasalpha(a=1,n))(1:200)
lines(1:200,B+1,col="orange")

Observe here that neglecting the higher-order terms of the expansion yields an underestimation of the real bias… Fun, isn’t it?

Wasting Time (and Givin’ Up)

There was an interesting post, published a few days ago, entitled This Blog is a Waste of My Time. The funny thing is that I had exactly the same experience at the same time. Once 2013 ended, I wanted to update my resume. And I observed that I had zero publications over the past two years. Zero. Nada. Nothing published in 2012 and nothing published in 2013. Of course, it is mainly a timing issue, since several papers are still in the loop, and I might end up with a few papers published in 2014 (at least one was accepted the first week of 2014). But still… When I decided to turn off my laptop yesterday evening at 2 a.m. (this morning, actually), I started wondering whether blogging wasn’t also a waste of time. Or whether it was something else.

  • My Research is a Waste of My Time (and not only mine)

This will sound like a cliché, but academics do waste a lot of time when doing (or pretending to do) research,

  1. wasting time applying for grants: do I have to be more specific here? By wasting time I mean working for (almost) a month to fill in forms, and then getting a “your proposal is extremely interesting, you got positive feedback from the reviewers, unfortunately, there’s no funding from the government…“. We have all had this experience. We’re wasting our time here… and the reviewers’ time, too.
  2. wasting time in committees: as mentioned above, I have to spend time in research committees, reading grant applications, but also in faculty committees, discussing office allocation for instance. We have more postdocs, visitors and interns than available seats… and for some reason, I am on the bargaining committee, trying to argue with my colleagues that my postdoc staying for 6 weeks should be ahead of their visitor staying for 2 weeks on the priority list. I do have to do this, but you have to admit that, somehow, we waste the time of four tenured professors (plus me, and I am not tenured) on some bullshit here… I am also on the PhD committee, where we receive applications. In December, we spent a lot of time on the application of one potentially interesting candidate (like many hours, discussing and arguing), and we are not even sure that, if we agree to enroll him in the PhD program, the candidate will join. If he does not come, that will have been a waste of time.
  3. wasting time in the review process: it looks like I spend more time reading and writing reports on others’ papers than writing my own! OK, that might be an actual quote I got from one of the referees of a recent paper… I keep saying that I should start saying “no, I am too busy” when an editor asks me for a review. But I also keep saying that a lot of bullshit manages to get published. So I cannot stand aside and wait. I mean, I could: I’m French and we’re usually good at this kind of thing. But I’d rather be involved in the process, advise the editor if the paper is not worth it, and help to improve the paper if it might be interesting. But again, I spend a lot of time in this process. I know what others are doing, but I keep delaying my own work. You cannot find my name if you look for articles published in 2012 or 2013, but I am somewhere, as one of those anonymous referees thanked at the end of an article (who sometimes spent more time on the paper than the PhD supervisor who barely knows what the paper is about, but still has his – or her – name on it). There was an interesting post by Rob Hyndman entitled How to get your paper rejected quickly a few weeks ago. I still don’t know if I agree with everything, but I agree that “reviewers spend a great deal of time providing comments, and it is disrespectful to ignore them” (I would say “might spend“, but I do not want to argue on that point today). A lot of time is wasted in the publication process.
  4. wasting time trying to get data: before Julie started her internship in September, I tried to get datasets to work on demographic problems. I started discussing (and filling in) forms to get French datasets, and managed to get a smaller one in Québec. The agenda was simple: we work on the small dataset, write the code, and then, once we get the big dataset, we just use the code we tested on the small one. After 6 months, I still wonder whether my form has been accepted, and whether, someday, I will be able to get access to this dataset. I know that the dataset exists. I mean, I know that two datasets exist, and I am just asking for a merge… but it looks like there might be ethical considerations, so it takes time.

I do waste a lot of time in the process of doing research, and I am not even mentioning procrastination here. Actually, I believe that procrastination is extremely important, and is not a waste of time… But I will get back to that point someday, in another post.

  • My Teaching Related Duties are a Waste of My Time

I will not claim that teaching is a waste of time. I am still extremely pretentious, and I believe that, by the end of my courses, my students could actually have learned something… But the problem lies more with the associated duties. One might think of writing the exams (and sketches of solutions) or grading (since I do not have T.A.s to help). It takes time. A lot of time, actually. But I won’t consider it wasted. Two short stories to explain what I mean (both occurred during the Autumn session):

  1. in September, I was giving a course with 4 scheduled tests. A few hours before the first one, I got an email from a student, asking me to reschedule it because he could not be there. He asked me to postpone his exam by a few days. I said no, essentially because we had signed an agreement on the first day, and the student already knew at that time that he would not be able to be there for the exam. And he never told me before. I decided to stand on this principle. The thing is, the student invoked religious grounds, and I understood that things would soon turn ugly. But I had principles. I got some moral support from my colleagues, and from my Dean, but everyone was telling me that I was in charge in this battle (“we support you, but you’re on your own“), since we have our academic independence. I did ask for legal backup from the Professors’ Union (three times) and got no feedback. Then, I heard that a letter had been sent to the rector by a lawyer, and within 10 minutes, I gave the student everything he asked for. If he had asked to take the test on a Sunday, I would have said yes… Just because lawyers’ basic rule is to waste other people’s time, or money. So I gave up. I did not want to waste my time on that battle, on my own. The funny (?) side of this story is that so did the student: I agreed to postpone the test to the end of the session, he came to the second test (but never showed up in my class) and got a little over 30%. I got no further news from him, and he did not take the other tests. But I did waste quite some time, and had some bad nights and insomnia, too.
  2. in December, I was grading some homework I had given to my students (practical, on databases) and I saw on two forums that a pair of students was asking for help. Actually, it was not help but “could you please do this for me?” They did mention the number of their database (each group had a different database, and the person who posted the question on those forums was located in Montréal). This was fraud, or at least attempted fraud. So I gave them 0% – on that specific homework. The students confessed that they did ask for help on the forum (but they never asked me anything)… and I gave up. I mean, I decided to grade their work, and I filled in a fraud report, sent to the faculty, so that it would be someone else’s problem. I did not want to spend time arguing that those students clearly should not pass the course (one of them got only 20% on the final written exam – the other one 36%): if they wanted to learn something, taking the course again in the Winter session was clearly an opportunity to do so… But they didn’t get it, and I gave up.

I clearly waste time on a lot of things. But when I look back at the past four or five years, I might feel ashamed not to have more prestigious (somehow) publications, or lecture notes without typos everywhere; but at least the blog is something I am still proud of, sort of. And when I end up working, tired, around 2 a.m., I have the feeling that something is wrong, and that a lot of time has been wasted. And I have to confess that I think I should give up on something… But I don’t think it will be my blogging activity.

Sequences defined using a Linear Recurrence

In the introduction to the time series course (MAT8181) this morning, we spent some time on the expression of (deterministic) sequences defined by a linear recurrence (we will need that later on, so I wanted to make sure that those results were familiar to everyone).

  • First order recurrence

The simplest case is the first-order recurrence, https://latex.codecogs.com/gif.latex?u_n=a+b%20u_{n-1} where https://latex.codecogs.com/gif.latex?b\neq%201 (for convenience). Observe that we can remove the constant, using a simple translation https://latex.codecogs.com/gif.latex?\underbrace{[u_n-m]}_{v_n}%20=%20b%20\underbrace{[u_{n-1}-m]}_{v_{n-1}} if https://latex.codecogs.com/gif.latex?%20m=a/(1-b). So, from now on, we will always remove the constant in the recurrence equation. Thus, https://latex.codecogs.com/gif.latex?{v_n}%20=%20b{v_{n-1}}. From this equation, observe that https://latex.codecogs.com/gif.latex?{v_n}%20=%20b^n{v_{0}}, which is the general expression of https://latex.codecogs.com/gif.latex?{v_n}.
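
As a quick numerical sanity check (my own sketch, not part of the course notes), we can iterate the recurrence and compare with the closed form u_n = m + b^n (u_0 - m), where m = a/(1-b):

> a=2; b=.8; u0=1
> u=u0
> for(i in 1:20) u=c(u,a+b*u[length(u)])
> m=a/(1-b)
> max(abs(u-(m+b^(0:20)*(u0-m))))  # essentially zero, up to floating-point error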

  • Second order recurrence

Consider now a second order recurrence, https://latex.codecogs.com/gif.latex?{v_n}%20=%20a{v_{n-1}}+b{v_{n-2}}. In order to find the general expression of https://latex.codecogs.com/gif.latex?{v_n}, define https://latex.codecogs.com/gif.latex?\boldsymbol{V}_n%20=(v_{n}},{v_{n-1}})^{\sffamily%20T}. Then https://latex.codecogs.com/gif.latex?%20\underbrace{\begin{bmatrix}v_n\\v_{n-1}%20\end{bmatrix}%20}_{\boldsymbol{V}_n%20}=%20\underbrace{\begin{bmatrix}a&%20b%20\\%201%20&%200\end{bmatrix}}_B\underbrace{\begin{bmatrix}v_{n-1}%20\\v_{n-2}%20\end{bmatrix}%20}_{\boldsymbol{V}_{n-1}%20} This time, we have a vectorial linear recurrence equation. But what we’ve done previously still holds. For instance, https://latex.codecogs.com/gif.latex?%20{\boldsymbol{V}_n%20}=B{\boldsymbol{V}_{n-1}%20}=\cdots=B^n\boldsymbol{V}_{0} What can we say about https://latex.codecogs.com/gif.latex?%20B^n ? If https://latex.codecogs.com/gif.latex?B can be diagonalized, then https://latex.codecogs.com/gif.latex?%20B=P\Delta%20P^{-1} and https://latex.codecogs.com/gif.latex?%20B^n=P\Delta^n%20P^{-1}. Thus, https://latex.codecogs.com/gif.latex?%20\underbrace{\begin{bmatrix}v_n\\v_{n-1}%20\end{bmatrix}%20}_{\boldsymbol{V}_n%20}=%20B^n%20\underbrace{\begin{bmatrix}v_{0}%20\\v_{-1}%20\end{bmatrix}%20}_{\boldsymbol{V}_{0}%20}=%20P\underbrace{\begin{bmatrix}\lambda_1^n&%200%20\\%200%20&%20\lambda_2^n\end{bmatrix}}_{\Delta^n}%20P^{-1}\underbrace{\begin{bmatrix}v_{0}%20\\v_{-1}%20\end{bmatrix}%20}_{\boldsymbol{V}_{0}%20} so what we get here is something like https://latex.codecogs.com/gif.latex?v_n%20=%20\alpha%20\lambda_1^n%20+\beta\lambda_2^n for some constants https://latex.codecogs.com/gif.latex?%20\alpha and https://latex.codecogs.com/gif.latex?%20\beta. Recall that https://latex.codecogs.com/gif.latex?\lambda_1 and https://latex.codecogs.com/gif.latex?\lambda_2 are the eigenvalues of the matrix https://latex.codecogs.com/gif.latex?B, and they are also the roots of the characteristic polynomial https://latex.codecogs.com/gif.latex?%20P(x)=x^2%20-%20ax%20-%20b. Since https://latex.codecogs.com/gif.latex?%20a and https://latex.codecogs.com/gif.latex?%20b are real-valued, the polynomial has two roots, possibly identical, possibly complex (but then conjugate). An interesting case is obtained when the roots are https://latex.codecogs.com/gif.latex?%20re^{\pm%20i\theta}. In that case https://latex.codecogs.com/gif.latex?%20v_n%20=r^n(\alpha\cos(n\theta)%20+%20\beta\sin(n\theta)) To visualize this general term, consider the following code. A first strategy is to define the sequence, given the two parameters and two starting values. E.g.

> a=.5
> b=-.9
> u1=1; u0=1

Then, we iterate to generate the sequence,

> v=c(u1,u0)
> while(length(v)<100) v=c(a*v[1]+b*v[2],v)
> plot(0:99,rev(v))

It is also possible to use the generic expression we’ve just seen. Here, the roots of the characteristic polynomial are

> r=polyroot(c(-b, -a, 1))
> r
[1] 0.25+0.9151503i 0.25-0.9151503i
> plot(r,xlim=c(-1.1,1.1),ylim=c(-1.1,1.1),pch=19,col="red")
> u=seq(-1,1,by=.01)
> lines(u,sqrt(1-u^2),lty=2)
> lines(u,-sqrt(1-u^2),lty=2)

http://freakonometrics.hypotheses.org/files/2014/01/Selection_546.png Since, https://latex.codecogs.com/gif.latex?v_n%20=%20\alpha%20\lambda_1^n%20+\beta\lambda_2^n, then https://latex.codecogs.com/gif.latex?%20\begin{cases}%20\alpha%20+%20\beta%20=%20v_0%20\\%20\alpha%20r_1%20+%20\beta%20r_2%20=%20v_1%20\end{cases} it is possible to derive numerical expressions for the two parameters. If https://latex.codecogs.com/gif.latex?%20v_n%20=r^n(A\cos(n\theta)%20+%20B\sin(n\theta)), then https://latex.codecogs.com/gif.latex?A=\lambda+\mu while https://latex.codecogs.com/gif.latex?B=i(\lambda-\mu). Thus,

> A=sum(solve(matrix(c(1,r[1],1,r[2]),2,2),c(u0,u1)))
> B=diff(solve(matrix(c(1,r[1],1,r[2]),2,2),c(u0,u1)))* complex(real=0,imaginary=1)

We can plot the sequence of points

> plot(0:99,rev(v))

and then we can add the sine wave, too

> t=seq(0,100,by=.1)
> bv=function(t) Mod(r)[1]^t
> fv=function(t) Mod(r)[1]^t*(A*cos(t*Arg(r)[1])+B*sin(t*Arg(r)[1]))
> lines(t,Vectorize(bv)(t-1),col="red",lty=2)
> lines(t,-Vectorize(bv)(t-1),col="red",lty=2)
> lines(t,Vectorize(fv)(t-1),col="blue")

We will see a lot of graphs like this one in the course, when looking at autocorrelation functions.

  • Higher order recurrence

More generally, we can write https://latex.codecogs.com/gif.latex?%20\underbrace{\begin{bmatrix}v_n\\v_{n-1}\\v_{n-2}\\%20\vdots%20\\%20v_{n-p+1}%20\end{bmatrix}%20}_{\boldsymbol{V}_n%20}=%20\underbrace{\begin{bmatrix}b_{1}%20&%20b_{2}%20&b_3&%20\cdots%20&%20b_{p}%20\\%201%20&%200%20&%200&%20\cdots%20&0\\%200%20&%201%20&%200&%20\cdots%20&0\\%20\vdots%20&%20\vdots%20&%20\vdots%20&%20\ddots%20&%20\vdots%20\\%200%20&%200%20&%200&%20\cdots%20&%200\end{bmatrix}}_B\underbrace{\begin{bmatrix}v_{n-1}%20\\v_{n-2}\\v_{n-3}%20\\%20\vdots%20\\%20v_{n-p}%20\end{bmatrix}%20}_{\boldsymbol{V}_{n-1}%20} The matrix is a so-called companion matrix. Similar results can be obtained for the expression of the general term of the sequence. If all this is not familiar, I strongly recommend carefully reading a textbook on sequences and linear recurrences.
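
As a small check (my own sketch), the eigenvalues of the companion matrix coincide with the roots of the characteristic polynomial; for the order-2 example used above (a = .5, b = -.9):

> B=matrix(c(.5,-.9,1,0),2,2,byrow=TRUE)   # companion matrix of x^2 - .5x + .9
> eigen(B)$values                          # same roots as polyroot(c(-b,-a,1)) above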

Central Limit Theorem

This week, in the MAT8595 course, before proving the Fisher-Tippett theorem, we will go back to the proof of the Central Limit Theorem, and the class of stable distributions (in Lévy’s sense). In order to illustrate the impact of heavy tails on the behavior of the mean, consider a sequence of i.i.d. Gaussian random variables https://latex.codecogs.com/gif.latex?X_i‘s. On top, we visualize the sequence, and below, we visualize the associated random walk

https://latex.codecogs.com/gif.latex?S_n=\sum_{i=1}^n%20X_i

(the central limit theorem will give a limiting distribution for https://latex.codecogs.com/gif.latex?n^{-1}S_n in the case where the variance of the https://latex.codecogs.com/gif.latex?X_i‘s is finite)

If we consider a sequence of i.i.d. random variables https://latex.codecogs.com/gif.latex?X_i‘s with heavier tails (possibly with infinite variance), we can still define https://latex.codecogs.com/gif.latex?S_n, but as we can see below, https://latex.codecogs.com/gif.latex?S_n can be quite erratic.
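
The original figures are not reproduced here, but a minimal sketch of the kind of pictures described above (using Cauchy increments as an example of heavy tails) could be:

> set.seed(1)
> n=1000
> par(mfrow=c(2,1))
> plot(cumsum(rnorm(n)),type="l",ylab="S_n",main="Gaussian increments")
> plot(cumsum(rcauchy(n)),type="l",ylab="S_n",main="Cauchy increments")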

As we will see this Thursday, the key to deriving stable distributions for the central limit theorem, or possible limiting distributions for the maximum, is Cauchy’s functional equation. I strongly recommend looking at the proof.

Copulas and Extreme Values, Syllabus

The syllabus for the MAT8595 course, Copules et Valeurs Extremes (Copulas and Extreme Values), is online. The evaluation agreement will be signed during the first class, this Monday at 9:00 (room SH-2140). More posts will go online in the coming days, with a few exercises, and the articles that will serve as a basis for the projects, at http://freakonometrics.hypotheses.org/courses/copulas-and-extremes.

Jimmy, Mile End, and Québec

Following my post on Paul et le Québec, I had started a discussion (by email) with Antoine, in France, who wanted more references on comic books from Québec. As it happens, over the holidays I treated myself to Non-Aventures, by Jimmy Beaulieu. Jimmy Beaulieu has produced several, let’s say, sensual books lately, in addition to working on Magasin Général, by Régis Loisel and Jean-Louis Tripp (I will not talk about that series for now because, while I take great pleasure in reading it, I am wary of Régis Loisel, who clearly does not know how to wrap up his stories… I loved the first cycle of La Quête de l’oiseau du temps, and his superb adaptation of Peter Pan, but I did not like the ‘sequels’. But we may come back to that some other day).

In Non-Aventures, we find several (small) stories, disconnected from one another, whose only common point is being (as the book’s subtitle puts it) “planches à la première personne”, pages written in the first person. There are a few reprints of old, out-of-print (or nearly so) publications, such as Le moral des troupes or Quelques pelures, but also (it seems to me) many new drawings.

Why am I mentioning it here? Perhaps to finish the series on comics from Québec started with Guy Delisle and Michel Rabagliati. With Jimmy Beaulieu, we stay within autobiographical Québec comics. But instead of the self-mockery of Guy Delisle, or the nostalgia of Michel Rabagliati, we get here a staging of everyday life. In Montréal (most of the time). And we live it with him… We recognize ourselves in several scenes. I admit I really liked his description of the balconies of Montréal (which strikes me as very accurate). Jimmy Beaulieu’s book is very beautiful, and often very sensual… Otherwise, to discover life in Montréal, I can only recommend a book published two years ago, Mile End, by Michel Hellman, stuffed with little anecdotes. If you like wandering around Mile End, you will be more touched by some of the stories, but living in a neighbourhood that has, precisely, a neighbourhood life is enough to enjoy the book. There is, for instance, a delightful reflection on the history of the electricity poles in the streets (which can be read on http://editionspowpow.com/bandes-dessinees/mile-end/…).

As for the style… it feels like reading Trondheim, and that is always a pleasure! Two lovely comic books to discover Québec. And I promise, next time I will talk about economics, or science… more serious things!

Multivariate Archimax copulas

Our paper, written jointly also with Anne-Laure Fougères, Christian Genest and Johanna Nešlehová, entitled Multivariate Archimax Copulas, should appear some day in the Journal of Multivariate Analysis.

A multivariate extension of the bivariate class of Archimax copulas was recently proposed by Mesiar & Jagr (2013), who asked under which conditions it holds. This paper answers their question and provides a stochastic representation of multivariate Archimax copulas. A few basic properties of these copulas are explored, including their minimum and maximum domains of attraction. Several non-trivial examples of multivariate Archimax copulas are also provided.

In this paper, we extend the class of Archimax copulas, introduced in dimension 2 in Bivariate Distributions with Given Extreme Value Attractor, by Philippe Capéraà, Anne-Laure Fougères and Christian Genest, inspired by some ideas mentioned in a paper published in Kybernetika a few years ago. I will try to post additional material, soon…

Computer Science (without a Computer), Part 1

During the holidays, my son and I had fun doing the first activities proposed in the handbook Informatique Sans Ordinateur (Computer Science Unplugged), available at http://csunplugged.org/ (where several additional activities can be downloaded, but in English only). For personal reasons, my son increasingly has to use a computer at school. And I must admit that it bothers me to see him handling a tool he does not really master (see also a post by Dr Goulu, who said the same thing at the beginning of the year). Let me be clear: I do not claim to master computer science either! But as I have already told on this blog, at his age I was coding my first games (often by copying BASIC code), and I feel that handling computers for almost 30 years has given me some perspective, in any case more than he has. And I must admit that the lack of computer-science culture in the generation between mine and my son’s surprises me. In short, as I said in a previous post, it bothers me that a generation bound to use computers so much knows so little about them. So, when I discovered this little set of activities last month, I wanted to give it a try.

At first, I thought of offering the activities to both my son and my daughter (who is 8), but she was deep into her drawings when we started, and I could not get her to break away. For those who want to try the experiment, there is a logic to the activities, and I think it would be a shame to miss the first ones. So I did not ask my daughter again to join us. But we will see during the next holidays…

The handbook proposes a dozen activities (more if you browse http://csunplugged.org/activities). The first part (which I will discuss here) deals with the representation of information.

  • section 1: the binary system

The first activity is really well done. You learn base 2, writing numbers with 0s and 1s, ASCII codes, and the notions of 32 and 64 bits. To illustrate, let me code a little (on a computer, this time) to explain how it works.

> base2=function(x,n=8){
+ Base.b=rep(0,n)
+ ndigits=(floor(logb(x,base=2))+1)
+ for(i in 1:ndigits){
+ Base.b[n-i+1]=(x%%2)
+ x=(x %/% 2)}
+ plot(0:1,0:1,xlab="",ylab="",
+ axes=FALSE, xlim=c(0,n),ylim=c(0,1),col="white")
+ for(i in 1:n){
+ polygon(i-1+c(.1,.1,.9,.9),c(.1,.9,.9,.1),lwd=2,
+ col=c("white","red")[1+(Base.b[i]==1)])}
+ return(Base.b)}

We cut out small pieces of cardboard, and place them side by side, to write numbers.

Take the number 17, for instance. Red for a 1, white for a 0.

> base2(17)
[1] 0 0 0 1 0 0 0 1

The first game is to write a few numbers, to get familiar with the system. Then, we see that it is even easy to perform basic operations. For instance, multiplying by 2 is easy: shift everything to the left, and add a white square on the far right.

> base2(17*2)
[1] 0 0 1 0 0 0 1 0

Fun, isn’t it? We can also do additions. For instance, 12 is written

> base2(12)
[1] 0 0 0 0 1 1 0 0

and if we add 12 and 17, we get

> base2(12+17)
[1] 0 0 0 1 1 1 0 1

where we reason just as in base 10. Actually, in this example, we find that 0+1=1+0=1 and that 0+0=0 (yes, there is no carry in this example). Then, we use the position of letters in the alphabet to encode letters (A-1, B-2, C-3, etc.), and quickly move on to ASCII codes. It is very playful! I must admit we took the opportunity for a digression on secret codes, but I will come back to that some other time. In short, this first activity won us over!
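
For an example where carries do appear (this is my own addition, still using the base2 function defined above), one could take 3 + 1, where the two red squares on the right turn into a single red square one position further left:

> base2(3)
[1] 0 0 0 0 0 0 1 1
> base2(1)
[1] 0 0 0 0 0 0 0 1
> base2(3+1)
[1] 0 0 0 0 0 1 0 0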

  • section 2: pixelating and drawing.

A logical continuation of the previous step: we then discussed pixelation. This activity spoke to me a lot, because it is precisely what I was doing when I was my son’s age (or almost). On the family MO5, there were essentially two games, on cassettes: a car game, and Super Tennis. In the tennis game, the characters were fairly simple, with 4 possible positions: waiting, serving, forehand, and backhand. The figure was then simply translated (we had already discussed that point a few months ago).

But as much as all this speaks to me, I wondered what it could mean today, since images are now incredibly smooth… We took an image on my computer, and we zoomed, and zoomed…

… in vain. You can no longer really see the pixels in the pictures. Too bad! Fortunately, I eventually found a few examples to illustrate this (ultimately rather theoretical) concept of pixelation. I did not feel very comfortable talking about smoothing with my son. These are things I occasionally explain to my students, and it bothers me that my 11-year-old son would know more about them than my students. Actually, this dilemma nagged at me throughout our activities, to be honest… On the one hand, these are exercises for primary-school children; on the other hand, these are things that some university students in mathematics ought to master.

To come back to the activity: it was fun, and it foreshadowed the principle of compression (which would come with the next activity). But finding a cup, or a picture of Saturn, by colouring squares quickly bored my son. We ended up giving up without really finishing the activity. That said, I discovered afterwards that there are fun activities online, at http://csunplugged.org/activities, in particular a discussion on lines and circles (in English only).

  • section 3: compressing and zipping

This activity was… surprising. We played with compression algorithms, of the LZ77 type used to zip files. The principle is quite fun… For instance, in the sentence

we find repeated blocks of letters. The idea (hard to follow if you stick to the given instructions) is that whenever 2 or more letters repeat, you can use a pointer.

We then replace the second block with a piece of information explaining where to go and fetch it,

So the sentence becomes

In other words, we tell the reader to go back a certain number of characters, and to reuse a block of characters of a certain length. Fun, isn’t it? On top of that, the writing can be recursive, pointing to letters that have not been defined yet. Like the following nice example

We go back 2 letters, and take a block of 3 letters. Fun, isn’t it? It is easy to do, and it is easy to read a compressed file. On the other hand, compressing a text yourself is more complicated! We wondered for a long time how to proceed, because the algorithm is poorly explained. Apparently, you start from the left, and if you find a block of 2 characters (or more) that has already been read, you point to it. Apparently, according to what is written there, you can go from 2,500 characters down to 500. It is a pity that the compression procedure in question is not described more precisely. Still, you can easily imagine that the longer the text, the more already-seen blocks you will find. And the idea of illustrating this with a poem is brilliant!
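
To make the back-reference idea concrete (this is my own little sketch, not the csunplugged material), here is a toy decoder where each token is either a literal character or a pair c(offset, length), meaning “go back offset characters and copy length characters”:

> decode=function(tokens){
+ out=NULL
+ for(tk in tokens){
+ if(is.character(tk)){out=c(out,tk)} else {
+ for(i in 1:tk[2]) out=c(out,out[length(out)-tk[1]+1])}}
+ paste(out,collapse="")
+ }
> decode(list("b","a","n",c(2,3)))
[1] "banana"

which is exactly the “go back 2 letters, take a block of 3 letters” example above.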

We had a lot of fun with this activity, and we made a few attempts, but we do not know whether what we did was “optimal“. That said, it made me want to read more on the subject, especially since several articles touch on this point (and the notions of information and entropy hide behind it, but we will see them a few sections further on).

  • section 4: handling errors, and the example of barcodes

Then comes a really fun activity, on error handling. It starts with a little magic trick, and binary writing… Consider the following grid (with the cards used in the first activity: one red side, one white side)

The colours were placed at random… we built a 5×5 grid by laying the cards down at random (these are the cards on the blue mat). As for the cards around it… that is the whole trick behind the magic. Now, I let my son flip one card at random, on the blue mat (actually, I think he can flip whichever one he wants)

While he was flipping it, I was not looking, and now I have to find out which card was flipped… The trick to finding it is that the cards on the right and at the bottom were placed so that the number of red cards in each row and in each column is even!
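
A small sketch of the trick (my own illustration, not taken from the activity sheet): lay a random 5×5 grid, add a parity row and a parity column so that every row and column contains an even number of red cards (coded as 1), flip one card, and look for the row and column whose parity has become odd:

> set.seed(1)
> G=matrix(rbinom(25,1,.5),5,5)
> G=rbind(G,colSums(G)%%2)   # parity row at the bottom
> G=cbind(G,rowSums(G)%%2)   # parity column on the right
> G[3,2]=1-G[3,2]            # someone flips one card
> which(rowSums(G)%%2==1)    # row of the flipped card
[1] 3
> which(colSums(G)%%2==1)    # column of the flipped card
[1] 2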

Even better, if two cards had been flipped, I could also have found them. Fun, isn’t it? The idea is really brilliant… Then, we learn how to check bank identification numbers, or the ISBN codes of books. Yes, the code printed on the back of books.

Well, the problem is that there is clearly something wrong with this barcode, since the code it generates is not the right one. Actually, the ISBN code we are going to use is the following

> isbn=1466592591
> checkcode=function(n){
+ l=as.character(n)
+ while(nchar(l)<10) l=paste("0",l,sep="")
+ a=substr(l,1,9)
+ y=as.numeric(substr(l,10,10))
+ x=as.numeric(unlist(strsplit(a,"")))
+ s=sum(x*(10:2))
+ z=11-(s%%11)
+ return(z==y)
+ }

In our 10-digit code, the last digit is used to check whether the first nine are correct or not. Everything is explained in the definition of the International Standard Book Number on Wikipedia. It is almost simple, provided you are comfortable with remainders in the division by 11 (as the preceding code shows).

> checkcode(isbn)
[1] TRUE

We took a few books from the bookshelf, and I asked him to dictate the codes to me… It was quite a fun example… But my son wanted to try a random number… and he found a valid ISBN code on the first try… Well, he had roughly a one-in-twelve chance.

> mean(Vectorize(checkcode)(trunc(1e10*runif(1000000))))
[1] 0.08327

  • section 5: the “guess the number I am thinking of” game, and the binary system

The last section was by far the most interesting! It touches on information (in Shannon’s sense) and the (base 2) logarithm… I had discussed similar ideas in an old post, precisely following a game with the kids. In particular, we see the tree construction behind the bisection method… For instance, if I think of a (whole) number between 0 and 7, the most efficient way to find it is the following:

What is amusing is that we recover the binary decomposition of the numbers,

with first 2², then 2¹, and finally 2⁰. Fun, isn’t it? We then ran a test (and my daughter came to join us). We started with “pick a number between 1 and 100”, then “pick a number between 1 and 1000”, taking turns. We saw that it took about 7 guesses with 100 possible numbers, and about ten with 1000. Without talking about logarithms, we saw that you have to look at the powers of 2 needed to reach 100, or 1000.
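
A little sketch of that counting argument (my own, in R): the “greater than …?” bisection strategy needs at most ceiling(log2(n)) questions to pin down a number between 1 and n:

> guesses=function(secret,n){
+ lo=1; hi=n; k=0
+ while(lo<hi){
+ k=k+1
+ mid=floor((lo+hi)/2)
+ if(secret>mid) lo=mid+1 else hi=mid}
+ k}
> max(sapply(1:100,guesses,n=100))
[1] 7
> max(sapply(1:1000,guesses,n=1000))
[1] 10
> ceiling(log2(c(100,1000)))
[1]  7 10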

But the most interesting part came when I started, to show that you split the interval in half each time. I began with “greater than 50?”, then “greater than 25?”, then “greater than 12?”, etc. My son opted for a stranger strategy, starting by cutting in the middle with “greater than 50?”, and then (directly) “greater than 5?”. Actually, playing several rounds, I realized that his strategy could be (in some sense) optimal, because his sister clearly does not pick numbers uniformly between the two bounds: she tends to pick either small or very large numbers. In one round, he was thus faster than my supposedly optimal strategy, winning in 5 guesses. We then had a long discussion about what the optimal strategy could be. It was a really interesting discussion… which continued with the next activity, on ranking and sorting. But more on that soon!

When you see the resources available online, I do not understand why computer science is not a compulsory subject in primary school: it is incredibly fun! Anyway, we really had a good time with these activities, and we learned a lot of things. Well, maybe the fact that it has been around -30°C these past few days did not really leave room for alternatives…