The conference that we were organizing in July in Montréal is officially cancelled.
IME 2020, next July, in Montréal
Next July, we will organize IME 2020 at UQAM, in Montréal… The website is now online, http://ime2020.uqam.ca/, and registration will open soon… We will also organize some short courses, over the two days following the three-day congress… More to come soon…
Talk at the statistics seminar
Tomorrow, Thursday, I will give a talk at the statistics seminar at UQAM, on “Using Transformations of Variables to Improve Inference“. It is based on old results, with a new perspective, related to recent work with Emmanuel Flachaire (not mentioned in the slides, but that I will discuss tomorrow).
On my way to Québec
This morning, I will fly to Montréal. I will go to Québec City tomorrow morning for a seminar, and then I will be back in Montréal for almost two weeks.
On my way to Montréal
Before heading down to the United States, I will spend a few days in Montréal. It will be a chance to catch up with a lot of people… I will be back in July. Oh, and all next week I will be taking over the @EnDirectDuLabo account on Twitter, and I am rather proud of it… It will be a chance to talk a bit about the daily life of a professor-researcher (more researcher than teacher at this time of the year…). To be followed on Twitter only…
Radial Graphs for Time Series
On How to: Weather Radials, there was a nice visualisation of temperatures. Since I am too old-fashioned for ggplot2, I wanted to reproduce a similar graph with the old plot style.
Assume that daily temperatures are stored in a vector X (e.g. temperatures in Montréal, QC, in 2009). To get a radial plot, use
> n=length(X)
> theta=seq(0,1-1/n,length=n)*2*pi
> r=30+X
> plot(r*cos(pi/2-theta),r*sin(pi/2-theta),type="l",
+ xlab="",ylab="",axes=FALSE)
> for(t in 1:n){
+ if(X[t]>0) CL=rgb(0,0,1,.4)
+ if(X[t]<0) CL=rgb(1,0,0,.4)
+ if(X[t]==0) CL="white"
+ segments((30+X[t])*cos(pi/2-theta[t]),(30+X[t])*sin(pi/2-theta[t]),
+ 30*cos(pi/2-theta[t]),30*sin(pi/2-theta[t]),col=CL)
+ }
> for(r in 10*seq(0,6)) lines(r*cos(pi/2-theta),r*sin(pi/2-theta),
+ type="l",col="light blue")
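If you do not have temperature data at hand, a synthetic X is enough to test the code above; a minimal sketch (the seasonal pattern and noise level are purely illustrative assumptions),

# synthetic daily temperatures: seasonal cycle plus noise (illustrative only)
set.seed(1)
days=1:365
X=5-20*cos(2*pi*days/365)+rnorm(365,sd=3)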
Back in Montréal
I will be in Montréal for the next two weeks. The blog will be off for a while.
Crowded Cities, Paris, Hong Kong and Montréal
Over the past years, I have been living in different cities, each of them completely different from the others. I have been living in Paris, which is a big city in Europe, with a large suburban area, too (la banlieue).
Then, I lived in Hong Kong, which is a larger city, in Asia.
It was crowded. I mean, that was the feeling I had while I was living there. And more recently, I have been living in Montréal, in North America. Montréal is a large city. Or, to be more specific, an island,
The three cities are quite different. Paris: 2.211 million inhabitants and 105.4 km² (density 21,057 inhabitants per km²). Montréal: 1.621 million inhabitants, and more than three times larger, 365.1 km² (density 4,441 inhabitants per km²). Hong Kong: 7.234 million inhabitants, and again three times larger, 1,104 km² (density 6,553 inhabitants per km²). In Hong Kong, there are several hills where it is not possible to build anything: on a large part of the island, the density is zero.
Time for a break
This year has been long, so I will try to take a short break before the new session. Two graduate courses, a forthcoming book, and several research articles that should be finalized… but I will still try to post material frequently on the blog! Enjoy your holidays… and Happy New Year!
A random walk? What else?
Consider the following time series,
What does it look like? I know, this is a stupid game, but I keep using it in my time series courses. It does look like a random walk, doesn’t it? If we use the Phillips-Perron test, yes, it does,
> PP.test(x)

	Phillips-Perron Unit Root Test

data:  x
Dickey-Fuller = -2.2421, Truncation lag parameter = 6, p-value = 0.4758
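As a sanity check, one can run the same test on a genuine (simulated) random walk of the same length; a quick sketch, with an arbitrary seed,

set.seed(123)
x_rw=cumsum(rnorm(759))  # a true random walk, same length as x
PP.test(x_rw)            # the unit root should, similarly, not be rejected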
If we look at the autocorrelation function, we do observe some persistence,
> acf(x,100)
Perhaps this persistence can be related to long range dependence, or to some fractional random walk. A natural idea could be to estimate the Hurst parameter, using for instance Beran (1992)’s estimator – based on Whittle (1956) – where we assume that the autocorrelation function satisfies

$$\rho(h)\sim C\,h^{2H-2}\quad\text{as } h\to\infty,$$

for some $H\in(1/2,1)$ (the so-called Hurst index). But here, we start to observe unexpected outputs,
> library(longmemo)
> (d <- WhittleEst(x))
'WhittleEst' Whittle estimator for fractional Gaussian noise ('fGn'); call:
WhittleEst(x = x)
time series of length n = 759.
H = 0.9899335
coefficients 'eta' =
    Estimate  Std. Error z value   Pr(>|z|)
H 0.98993350  0.02468323 40.1055 < 2.22e-16
 <==> d := H - 1/2 = 0.49 (0.025)
$ vcov       : num [1, 1] 0.000609
 ..- attr(*, "dimnames")=List of 2
 .. ..$ : chr "H"
 .. ..$ : chr "H"
$ periodogr.x: num [1:379] 1479.3 1077.3 371.7 287.2 51.2 ...
$ spec       : num [1:379] 62.5 31.7 21.3 16.1 12.9 ...
or more precisely some unexpected values for the Hurst parameter, which should be in $(0,1)$,
> confint(d)
      2.5 %   97.5 %
H 0.9415553 1.038312
Oops, perhaps we did miss something, because it looks like there is extremely strong persistence in our time series,
> plot(d)
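To get a sense of how strong that persistence is, one can simulate fractional Gaussian noise with H close to the estimate above, using simFGN0() from the longmemo package; a small sketch (seed and parameter values are arbitrary),

library(longmemo)
set.seed(1)
z=simFGN0(759,H=0.99)  # fractional Gaussian noise, H close to 1
plot(z,type="l")       # compare with the observed series
acf(z,100)             # autocorrelations decay very slowly, as above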
It is probably time to ask where I found that series… To be honest, I borrowed it from a great Canadian website, http://climate.weatheroffice.gc.ca/climateData/. For instance, if you want the temperature we experienced a few days ago, you can use
> Y=2013
> M=1
> D=25
> url=paste("http://climate.weatheroffice.gc.ca/climateData/hourlydata_e.html?",
+ "timeframe=1&Prov=QC&StationID=5415&hlyRange=1953-01-01|2013-02-01",
+ "&Year=",Y,"&Month=",M,"&Day=",D,sep="")
> page=scan(url,what="character")
Yes, that series is the temperature we experienced in Montréal last month (an hourly time series). On the graph below, you can actually compare it with the temperatures experienced in Januaries over the past 60 years,
So it is not that surprising to see long range dependence models appearing (I wrote a paper on precisely that topic a few years ago). What I found puzzling is that the persistence is large, extremely large. And the problem is that I do not see how we can explain the ‘jumps’ that we do observe in that series. For instance, the behavior of the series while I was in Europe, before January 20th: within 3 days, the temperature went down from 0°C to -20°C, up from -20°C to 0°C, and then down again from 0°C to -20°C (a nice И if we use Cyrillic letters). Or how can we explain the oscillating behavior observed the week after, where the temperature went up from -25°C to (almost) +10°C in a few days? Within 10 days, we also observed two ‘jumps’ (or ‘crashes‘ if we want to use the terminology of financial time series) with a decrease of 25 degrees in less than 24 hours! Obviously, we need to find other classes of models to replicate the kind of behavior we observe on temperatures…
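Incidentally, those crashes are easy to spot numerically; a quick sketch, computing temperature changes over 24-hour windows on the hourly series x,

drops=-diff(x,lag=24)  # temperature decrease over 24 hours
max(drops)             # close to the 25-degree crashes mentioned above
which.max(drops)       # the hour at which the largest crash starts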
Econometric Modeling in Finance and Insurance with the R language
On February 15th, IFM2, the Institute of Financial Mathematics in Montréal, will organize a one-day Executive Workshop on Econometric Modeling in Finance and Insurance with the R language. The event is not yet mentioned in the calendar, but the syllabus can be downloaded here. Additional details (slides and R code) will be available soon, on this blog. The morning will be an introduction to the R language, and in the afternoon we will focus on applications,
- Principal components analysis and application to yield curves
- Regression tree, logistic regression and application to credit scoring
- Poisson regression and applications to claims reserving (IBNR) and projected mortality tables (LifeMetrics)
Comparing grades across countries
And indeed, in an old report from the 90s (in mathematics and computer science), I came across the following graph
which gives the following boxplot
i.e. for the linear algebra and calculus courses, averages are indeed around 60. Now take, for comparison, some math courses at a French university, in this case Rennes: for (first-year) analysis, arithmetic and linear algebra, the histograms are much flatter,
For comparison, I also had a look at the Economics department, with macroeconomics, microeconomics and applied mathematics, where we have
i.e. we recover nice Gaussian curves for the economics courses, but a rather different distribution for the math course,
Grade distributions in the first year of university are indeed quite different, generally, between Canada and France. This explains why, when students want to move from one country to the other, professors no longer really look at the grades, but rather at the student’s rank within their cohort. Incidentally, if we draw a quantile-quantile plot, with the math courses in Montréal versus the math courses in Rennes, we get the following graph
in other words, a grade of 50 in Montréal corresponds to 33 in France (i.e. 6.5 out of 20), while 50 in France (10 out of 20) corresponds to 59 in Montréal. Here again, we see that a median grade corresponds to 60 here… that is, a C. I will have to adapt next year…
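For the record, a quantile-quantile plot like the one above can be obtained with something like the following sketch, where notes_mtl and notes_rennes are hypothetical vectors of grades (out of 100) for the two universities,

# notes_mtl, notes_rennes: grades out of 100 (hypothetical placeholders)
qqplot(notes_rennes,notes_mtl,xlab="Grades, Rennes",ylab="Grades, Montréal")
abline(a=0,b=1,lty=2)  # reference line, identical distributions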
Playing with robots
My son would be extremely proud if I told him I can spend hours building robots. Well, my robots are not as fancy as Dr Tenma’s, but they usually do what I ask them to do. For instance, it is extremely simple to build a robot with R, to extract data from websites. I have mentioned it here (on tennis matches), but it failed there (on the NY Marathon). To illustrate the use of robots, assume that one wants to build a dataset to study the prices of airline tickets. First, we have to choose a departure city (e.g. Paris) and an arrival city (e.g. Montreal). Then, one wants to look at all possible dates from April 1st (I ran it last month) till the end of December (so we create vectors with all departure dates, namely one vector for the day, one for the month, and one for the year). Then, we choose a return date (say 3 days later).
DEP="Paris"
ARR="Montreal"
# departure dates, from April 1st 2011 to February 29th 2012
DATE1D=rep(c(1:30,1:31,1:30,1:31,1:31,1:30,1:31,1:30,1:31,1:31,1:29),3)
DATE1M=rep(c(rep(4,30),rep(5,31),rep(6,30),rep(7,31),rep(8,31),rep(9,30),
rep(10,31),rep(11,30),rep(12,31),rep(1,31),rep(2,29)),3)
DATE1Y=rep(c(rep(2011,30+31+30+31+31+30+31+30+31),rep(2012,31+29)),3)
# return dates, k=3 days later
k=3
DATE3D=c((1+k):30,1:31,1:30,1:31,1:31,1:30,1:31,1:30,1:31,1:31,1:29,1:k)
DATE3M=c(rep(4,30-k),rep(5,31),rep(6,30),rep(7,31),rep(8,31),rep(9,30),
rep(10,31),rep(11,30),rep(12,31),rep(1,31),rep(2,29),rep(3,k))
DATE3Y=c(rep(2011,30+31+30+31+31+30+31+30+31-k),rep(2012,31+29+k))
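As a side note, the same vectors can be built more compactly with Date arithmetic; a minimal sketch (here without the triplication of the vectors above),

# departure dates from April 1st 2011 to February 29th 2012, returns 3 days later
dep=seq(as.Date("2011-04-01"),as.Date("2012-02-29"),by="day")
ret=dep+3
DATE1D=as.numeric(format(dep,"%d")); DATE1M=as.numeric(format(dep,"%m")); DATE1Y=as.numeric(format(dep,"%Y"))
DATE3D=as.numeric(format(ret,"%d")); DATE3M=as.numeric(format(ret,"%m")); DATE3Y=as.numeric(format(ret,"%Y"))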
It is also possible (for a nice robot) to skip all dates prior to today,
skip=max(as.numeric(Sys.Date()-as.Date("2011-04-01")),1)
Then, we need a website where requests can be written nicely (with cities and dates appearing explicitly). Here, I cannot mention the website that I used, since it is stated there that it is strictly forbidden to run automatic requests… Anyway, consider a loop creating a url address (actually, I chose the value of the date randomly, since I had been told that those websites have memory: if you ask too many times for the same thing during a short period of time, prices go up),
URL=paste("http://www.♦♦♦♦/dest.dll?qscr=fx&flag=q&city1=",DEP,
"&citd1=",ARR,
"&date1=",DATE1D[s],"/",DATE1M[s],"/",DATE1Y[s],
"&date2=",DATE3D[s],"/",DATE3M[s],"/",DATE3Y[s],
"&cADULT=1",sep="")
Then, we just have to scan the webpage, looking for ticket prices (searching for some specific names),
page=as.character(scan(URL,what="character"))
I=which(page%in%c("Price0","Price1","Price2"))
if(length(I)>0){
PRIX=substr(page[I+1],2,nchar(page[I+1]))
# prices above 1,000 come back split in two tokens: glue them back together
if(PRIX[1]=="1"){PRIX=paste(PRIX,page[I+2],sep="")}
if(PRIX[1]=="2"){PRIX=paste(PRIX,page[I+2],sep="")}
}
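Since those extracted prices are strings, one extra step for the statistical study is to convert them to numbers; a small illustrative helper (assuming stray currency symbols or separators may remain),

prix=as.numeric(gsub("[^0-9.]","",PRIX))  # keep digits and decimal point only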
Here, we have to be a bit cautious if prices exceed 1,000 (hence the two tests above). Then, it is possible to start a statistical study. For instance, if we compare two destinations (from Paris), e.g. Montréal and New York, we obtain the following patterns (with high prices during holidays),
It is also possible to run the code twice (here it was run last month, and a couple of days ago), for the same destination (from Paris to Montréal),
Of course, it would be great if I could run that code, say, every week, to build up a nice dataset, and to study the dynamics of prices…
The problem is that it is forbidden to do this. In fact, on the website, it is mentioned that if we want to extract data (for an academic purpose), it is possible to ask for an extraction. But if we do tell them that we study specific prices, the data might be biased. So the good idea would be to use several servers, to make several requests, randomly, and to collect them (changing dates and destinations). But here, my computing skills – unfortunately – reach a limit…
911, day after day
After two posts (here and then there) on the intraday cycles of 911 calls, one may wonder how crimes evolve over the course of the week.
For all calls made to 911, we have the following distribution
i.e. a peak on Friday and Saturday evenings, and a trough on Sunday. If we look at calls for burglaries, we have the following distribution
with peaks in the morning, on Friday afternoons, and on weekends. We can also track disturbances of the peace,
which admittedly occur around midnight, but mostly on weekends. This contrasts quite a bit with hold-ups,
Clearly, there are some rather strong patterns. The next step will be to look at seasons or, better, the impact of the weather…
to be continued…
You find it cold in Montréal? Trust me, it is even worse
As people say in Montréal, “aujourd’hui, il fait frette” (today, it is freezing cold). And I was surprised recently when some people told me that we would reach -35°C on Sunday evening… I checked around, and I found -25°C on all weather forecast websites, but -35°C nowhere. I asked some friends, and they told me that those people were not really looking at the air temperature (as observed on the thermometer), but at the wind chill, also called “felt air temperature on exposed skin due to the wind” (température ressentie).
And indeed, such a quantity does exist, and can be found on the climate.weatheroffice.gc.ca website. There is also a physical background for that quantity. Hence, the windchill is defined as

$$W = 13.12 + 0.6215\,T - 11.37\,V^{0.16} + 0.3965\,T\,V^{0.16}$$

where $T$ is the air temperature (in °C), and $V$ the wind speed (in km/h). Please don’t ask me how to interpret this power 0.16 (I already find it difficult to explain a square root in an econometric equation). If we look at the past few days, we observe the following,
where the points on top are the air temperature, while below we have the felt temperature. So, basically, winters are even colder than you might think…
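For completeness, the windchill defined above is straightforward to compute; a small R function implementing the formula (the example values are mine),

# windchill: T = air temperature (°C), V = wind speed (km/h)
windchill=function(T,V) 13.12+0.6215*T-11.37*V^0.16+0.3965*T*V^0.16
windchill(-25,15)  # about -35, matching the announced felt temperature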
And the story is not over yet. The same thing holds for summer: if you take humidity into account, summers are even hotter than what you think… There is the humidex, defined here as

$$H = T + 0.5555\left(6.11\, e^{\,5417.7530\left(\frac{1}{273.16}-\frac{1}{T_{\text{dew}}}\right)} - 10\right)$$

where $T$ is the air temperature (in °C) and $T_{\text{dew}}$ denotes the dewpoint (in kelvins; see here for more details).
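Again, that index is easy to compute; a sketch in R, with the dewpoint given in °C and converted to kelvins internally,

# humidex: T = air temperature (°C), Td = dewpoint (°C)
humidex=function(T,Td){
e=6.11*exp(5417.7530*(1/273.16-1/(273.16+Td)))  # vapour pressure (hPa)
T+0.5555*(e-10)
}
humidex(30,20)  # about 38, a muggy summer day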
That index appeared in the 70s, with a work by Masterson and Richardson entitled A Method of Quantifying Human Discomfort Due to Excessive Heat and Humidity (published in 1979). At that time, in Canada, 22 people died per year, on average, because of excessive heat and humidity. For those interested in the origin of that index, you can have a look here.
Recently, @Annmaria (here) told me that one might expect variance to increase, i.e. maxima should be increasing faster than minima. I just wonder if this intuition can be related to the fact that more and more people (including some media) now talk more about felt temperatures than measured temperatures. And if we compare past temperatures to the felt temperatures we have today, it looks like the difference between extremes is increasing…