Somewhere else, part 63

Quote of the week

  • “economical numbers” (according to Richard Feynman): “There are 10^11 stars in the galaxy. That used to be a huge number. But it’s only a hundred billion. It’s less than the national deficit! We used to call them astronomical numbers. Now we should call them economical numbers.”

and a very interesting post on science, publication, and funding,

And still, a lot of writings worth reading,

and a good number of interesting posts in French this week, several of them about academia,

Did I miss something?

Somewhere else, part 62

A nice article discovered this week, and a series of writings worth reading,

We study the research productivity of new graduates of top Ph.D. programs in economics.  We find that class rank is as important as departmental rank as predictors of future research productivity.  For instance the best graduate from UIUC or Toronto in a given year will have roughly the same number of American Economic Review (AER) equivalent publications at year six after graduation as the number three graduate from Berkeley, U. Penn, or Yale.  We also find that research productivity of top graduates drops off very quickly with class rank at all departments.  For example, even at Harvard, the median graduate has only 0.04 AER papers at year six [to be continued…]

Do academics publish more because they have better abilities (gender, age, type of position, or any other individual characteristic possibly unobserved) and a more rewarding publication strategy (research field, number and location of co-authors) or because they are located in departments that provide a better local environment with stronger externalities? Contrary to some recent results (Kim et al. 2009, Waldinger 2012), we find that location is an important determinant of the individual quantity and quality of publications. It represents at least half of the explanatory power of individual characteristics, which implies that local externalities in publication do matter. [to be continued]


A seemingly unintentionally ironic paper has just been published in Science titled “A HUMAN RIGHT TO SCIENCE“. I presume it’s an important paper because the title is in BLOCK CAPITALS. That’s about all I can tell you about it though, because it’s behind a paywall.  [to be continued…]

and, as always, a few posts and articles in French,

Did I miss something?

Somewhere else, part 61

Long time no see, busy week. A very interesting (research) article discovered last week,

and as always, several writings worth reading

Even if the awards do inspire young people, Stilgoe argues that they send out the wrong message. “The prizes reinforce the mythology of science in which lone geniuses come up with brilliant ideas on their own,” he says.

And some say that the prize money would be better used to drive research directly. In 2011, for example, Venter joined forces with the X Prize Foundation and health-care firm Medco Health Solutions, based in Northampton, UK, to offer a US$10-million prize to the first team to sequence accurately the genomes of 100 centenarians. “I’m always more excited by awards that push or drive innovation, rather than ones that just recognize past achievements,” Venter says.

Many researchers favour the idea of targeting awards at promising scientists early in their careers. “A small award at this stage is a fantastic idea,” says Cooper. At this point, scientists are in a vulnerable position, struggling to win grants and often supporting young families. “It will just free scientists up to do more research — it’s about getting the biggest bang for your buck,” Cooper says.

and several articles and posts in French,

First phenomenon: the dismal performance of the French media on the subject. It has to be said that virtually no newspaper worthy of the name has been able to provide a simple, concise overview of gender studies (let us say, at least of Butler). At the very least, there should have been a sustained barrage of criticism explaining that “gender theory” does not exist and that people should stop using that expression. But no, editorialists were left to dip their pens and hold forth on the subject, taking positions on “gender theory”. France, that country where hunting the dahu is a national sport.

Second phenomenon: the ineffectiveness of academia, that is, the university’s lack of influence on society. The outreach effort is, overall, very weak, the field of gender studies is far too esoteric, and the time spent defending its legitimacy has, on the whole, been poorly spent.

Did I miss something?

Chicago, Baseball and Paul Erdös

Thursday afternoon, before the 2013 CAE Faculty Conference, Stuart Klugman is supposed to take us to watch the Cubs play, in Chicago. That should be fun. My first baseball game, ever. I will be back in Montréal (and on the blog) next week!

That will be an opportunity to discuss with mathematicians and baseball fans. Actually, a colleague told me there is a nice anecdote about baseball and mathematics. Hank Aaron, “considered to be one of the greatest baseball players of all time”, is supposed to have an Erdös number of 1 (see e.g. http://boolesrings.org/mpawliuk/….). Some claim it is only because Hank Aaron signed the same baseball as Paul Erdös (thus, they co-signed something, giving him an Erdös number of 1) when both of them were invited to a ceremony to receive honorary degrees… The funny part is that, even though he was not a mathematician (but has an Erdös number of 1), some numbers have also been named after him, the so-called Ruth–Aaron pairs. The story is nice, actually. In April 1974, Hank Aaron became famous for hitting his 715th home run. The previous record was held by Babe Ruth, with (have a wise guess…) 714 home runs. Three mathematicians in Georgia (including Carl Pomerance) noticed that 714 and 715 were not a common pair of consecutive numbers: they are two consecutive integers for which the sums of the prime factors of each integer are equal, since

714 = 2 × 3 × 7 × 17

715 = 5 × 11 × 13

and

2 + 3 + 7 + 17 = 5 + 11 + 13    (= 29)

Such pairs are called Ruth–Aaron pairs, see e.g. http://mathworld.wolfram.com/… or Pomerance (1999). Note that Carl Pomerance published more than 40 papers with Paul Erdös. End of the loop.
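
As a quick check, here is a small R function (a sketch added for illustration, not from the original post) that sums the prime factors of an integer by trial division, and verifies the property for 714 and 715,

sum_prime_factors <- function(n){
  # sum of the prime factors of n (with multiplicity), by trial division
  s <- 0
  d <- 2
  while(n > 1){
    while(n %% d == 0){
      s <- s + d
      n <- n / d
    }
    d <- d + 1
  }
  s
}
sum_prime_factors(714)   # 2 + 3 + 7 + 17 = 29
sum_prime_factors(715)   # 5 + 11 + 13 = 29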

Somewhere else, part 60

Several writings worth reading, this week,

  • “Why Theoretical Computer Scientists Aren’t Worried About Privacy” http://jeremykun.com/… via msgbi
  • [lego] “population growth and the gaps in wealth and carbon footprint” via flowingdata

and a few posts in French,

and a few thoughts following the (by-)election for French citizens living in North America,

> el2012=read.table(
+ "http://komodo.regardscitoyens.org/public/legislatives2012/resultats/LEG_2012_T2_resultats_BE_circonscriptions.csv",
+ header=TRUE,sep=";",dec=",")
  • Frédéric Lefebvre was elected with 8,010 votes, the lowest score in the Assembly (excluding three-way runoffs)…
> "%notin%" <- Negate("%in%")    # helper operator, not in base R
> T=table(el2012$code)           # number of candidates per constituency
> trian=(T==3)                   # assuming one row per candidate, 3 rows = three-way runoff
> triangulaire=names(T)[trian]
> el2012b=el2012[el2012$code%notin%triangulaire,]
> elu=el2012b$non                # elected indicator (values "oui"/"non")
> N=el2012b$Voix
> Xelu=N[elu=="oui"]             # votes of elected candidates
> lefebvre=8010
> Xelu=c(lefebvre,Xelu)
> boxplot(Xelu,horizontal=TRUE,col="light blue")
> abline(v=lefebvre,col="red")
  • …in a constituency with more than 150,000 registered voters. But the important thing is to win, right?
> N=el2012b$Voix
> insc=el2012b$Inscrits
> Xelu=N[elu=="oui"]/insc[elu=="oui"]
> Xelu=c(lefebvre/156645,Xelu)
> boxplot(Xelu,horizontal=TRUE,col="light blue")
> abline(v=lefebvre/156645,col="red")

Did I miss something?

Visualizing densities of spatial processes

We recently uploaded on http://hal.archives-ouvertes.fr/hal-00725090 a revised version of our work, with Ewen Gallic (a.k.a. @3wen) on Visualizing spatial processes using Ripley’s correction: an application to bodily-injury car accident location

In this paper, we investigate (and extend) Ripley’s circumference method to correct the bias of density estimation near the edges (or frontiers) of regions. The original idea of the method was theoretical and difficult to implement. We provide a simple technique – based on properties of Gaussian kernels – to compute efficiently the weights that correct the border bias on the frontiers of the region of interest, with an automatic selection of an optimal radius for the method. An illustration on the location of bodily-injury car accidents (and hot spots) in the western part of France is discussed, where a lot of accidents occur close to large cities, next to the sea.
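
To give a rough idea of what such a border correction does, here is a minimal sketch of the general principle (this is not the implementation of the paper; the function below and its parameters are made up for illustration): each observation is weighted by the inverse of the kernel mass that falls inside the region of interest, approximated here by Monte Carlo,

library(sp)   # for point.in.polygon()
border_weight <- function(x, y, pol, h, nsim = 10000){
  # simulate from a bivariate Gaussian kernel centred at (x, y), with bandwidth h
  xs <- rnorm(nsim, x, h)
  ys <- rnorm(nsim, y, h)
  # proportion of the kernel mass falling inside the polygon pol (two-column matrix)
  inside <- point.in.polygon(xs, ys, pol[, 1], pol[, 2]) > 0
  1 / mean(inside)   # weight exceeds 1 when part of the kernel mass lies outside
}

Observations far from the border get a weight close to 1, while observations close to the coastline get a larger weight, compensating for the kernel mass that spills into the sea.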

Sketches of the R code can be found in the paper, to produce the maps and to describe the impact of our boundary correction. For instance, in Finistère, the distribution of car accidents is the following (with a standard kernel on the left, and with the correction on the right), with 186 claims (involving bodily injury)

and in Morbihan, with 180 claims observed in a specific year (2008, as far as I remember),

The code is the same as the one mentioned last year, except perhaps the plotting functions. First, one needs to define a color scale and associated breaks

breaks <- seq(min(result$ZNA, na.rm=TRUE)*0.95, max(result$ZNA, na.rm=TRUE)*1.05, length=21)
col <- rev(heat.colors(20))

to finally plot the estimation

library(fields)   # for image.plot()
image.plot(result$X, result$Y, result$ZNA, xlim=range(pol[,1]), ylim=range(pol[,2]),
  breaks=breaks, col=col, xlab="", ylab="", xaxt="n", yaxt="n", bty="n",
  zlim=range(breaks), horizontal=TRUE)

It is possible to add a contour, the observations, and the border of the polygon

contour(result$X, result$Y, result$ZNA, add=TRUE, col="grey")
points(X[,1], X[,2], pch=19, cex=0.2, col="dodgerblue")
polygon(pol, lwd=2)

Now, if one wants to improve the aesthetics of the map by adding a Google Maps base map, the first thing to do – after loading the ggmap package – is to get the base map

theMap <- get_map(location=c(left=min(pol[,1]), bottom=min(pol[,2]),
  right=max(pol[,1]), top=max(pol[,2])), source="google", messaging=FALSE, color="bw")

Of course, data need to be put in the right format

getMelt <- function(smoothed){
  res <- melt(smoothed$ZNA)   # melt() is in the reshape2 package
  res[,1] <- smoothed$X[res[,1]]
  res[,2] <- smoothed$Y[res[,2]]
  names(res) <- list("X","Y","ZNA")
  return(res)
}
smCont <- getMelt(result)

Breaks and labels should be prepared

theLabels <- round(breaks, 2)
indLabels <- floor(seq(1, length(theLabels), length.out=5))
indLabels[length(indLabels)] <- length(theLabels)
theLabels <- as.character(theLabels[indLabels])
theLabels[theLabels == "0"] <- "0.00"

Now, the map can be built

P <- ggmap(theMap)
P <- P + geom_point(aes(x=X, y=Y, col=ZNA), alpha=.3,
  data=smCont[!is.na(smCont$ZNA),], na.rm=TRUE)

It is possible to add a contour

P <- P + geom_contour(data=smCont[!is.na(smCont$ZNA),], aes(x=X, y=Y, z=ZNA),
  alpha=0.5, colour="white")

and colors need to be updated

P <- P + scale_colour_gradient(name="", low="yellow", high="red",
  breaks=breaks[indLabels], limits=range(breaks), labels=theLabels)

To remove the axis legends and labels, the theme should be updated

P <- P + theme(panel.grid.major=element_line(colour=NA),
  panel.grid.minor=element_line(colour=NA),
  panel.background=element_rect(fill=NA, colour=NA),
  axis.text.x=element_blank(), axis.text.y=element_blank(),
  axis.ticks.x=element_blank(), axis.ticks.y=element_blank(),
  axis.title=element_blank(), rect=element_blank())

The final step, in order to draw the border of the polygon

polDF <- data.frame(pol)
colnames(polDF) <- list("lon","lat")
(P <- P + geom_polygon(data=polDF, mapping=aes(x=lon, y=lat), colour="black", fill=NA))

Then, we applied that methodology to estimate the road network density in those two regions, in order to understand whether a high intensity means that an area is dangerous, or whether it is simply because there is a lot of traffic (more traffic, more accidents),

We have been using the dataset obtained from the Geofabrik website, which provides OpenStreetMap data. Each observation is a section of a road, and contains a few points, identified by their geographical coordinates, that allow one to draw lines. We have used those points to estimate a proxy of road intensity, with weights going from 10 (highways) to 1 (service roads).

# excerpt only (see Ewen's blog for the full version): split road sections
# into points, weighted by road type
splitroad <- function(listroad, h = 0.0025){
  pts <- NULL
  # types.weights maps each road type to a weight (see below)
  weights <- types.weights[match(unique(listroad$type), types.weights$type), "weight"]
  for(i in 1:(length(listroad) - 1)){
    # distances between consecutive points of the i-th road section
    d <- diag(as.matrix(dist(listroad[[i]]))[, 2:nrow(listroad[[i]])])
    # ... (the rest of the loop is omitted in this excerpt)
  }
  return(pts)
}
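
The excerpt above relies on a types.weights lookup table that is not shown in the post; a hypothetical example of what it could look like (the categories are OpenStreetMap road types, and the weights are only indicative) would be

types.weights <- data.frame(
  type   = c("motorway", "trunk", "primary", "secondary",
             "tertiary", "residential", "service"),
  weight = c(10, 8, 6, 4, 3, 2, 1),
  stringsAsFactors = FALSE
)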

See Ewen’s blog for more details on the code, http://editerna.free.fr/blog/…. Note that Ewen published a poster about the paper (in French) for the http://r2013-lyon.sciencesconf.org/ conference, which will be held in Lyon on June 27th-28th.

All comments are welcome…

Somewhere else, part 59

Several writings worth reading, this week, outside my blog,

and a single document in French this weekend,

Did I miss something?

Eat a beaver, save a tree

Wednesday, just before leaving the office, I remembered I wanted to buy Andreas Kyprianou’s book on Lévy processes. A second edition is coming soon, but I just need a simple introduction to Lévy processes, so I thought this first edition should be complicated enough for me. And when a second edition is about to appear, you can get a discounted version of the (almost) old one. So I went on Springer’s website to purchase the book. I paid for the book and finished packing, in order to go back home. I received my order confirmation, which is standard, and I opened it,

Wait! “eBook”, “Download PDF”? What does that mean? I thought I was buying a book, like those we can hold in our hands… Indeed, on Springer’s website the default version seems to be the eBook, and you have to look further down to get a Softcover, which should mean a hardcopy with a soft cover. I have to admit I started to freak out. I never buy eBooks! I am extremely old-fashioned! Those who know me know that I even print pages from the internet to read them! Anyway, I sent emails to all possible contacts I could find on Springer’s website, trying to get in touch with someone who could cancel my purchase. I mean, I did not download the PDF, so, from a technical point of view, each customer does have a dedicated link, and they should know I did not download it! So, until I download the PDF by clicking on the link, somehow, I do not have it, right? So I asked them to delete the link, refund me, and then I would get back on their website to purchase the book (after two days of discussion by email, they keep telling me I did buy a book, so I have to admit that I do not know which word I should use to describe that antique object made of paper). Actually, when I said that I had made a mistake, that I just wanted to return a product I had not consumed (I do not know how to return a link, actually), and that I wanted in the first place to get a paper copy of the book, I got that legendary answer,

Dear Arthur Charpentier,

Thank you for your email and interest in our products.

This is to inform you that it is irrelevant for us to proceed with your request, because it has already been entered into our database/system.

However, when you have downloaded the PDF copy of the E-book. You can print manually through your printer.

If you want a paper copy of a book, “you can print manually through your printer.” At first, I thought it was some kind of misunderstanding. Or a joke, maybe. But no. You cannot cancel a purchase when you order eBooks. And to make sure that I got the book, they sent the full pdf to my mailbox. What am I supposed to do with that file? This is not what I wanted! I wanted a book! A book with paper, that you can hold in your hands! With paper, made from trees that died so that I can learn stuff!

Anyway, I gave up… I will ask colleagues if I can borrow their copies. Now, I have to fight with Dell, since I ordered a laptop (yes, the Ubuntu version), and it arrived at the office in a wet box. It looks like the computer (or at least the box) had been sitting in water for a very long time! I don’t know if people still believe that researchers actually do research when they have time… trust me, they don’t! They talk to customer services… and it can take a while!

Bayes, credit scoring and terrorism

Once again, my neighbor Corey published a very interesting post on his blog http://bayesianbiologist.com/… on how likely the NSA program is to catch a terrorist (a real one). I was working on something similar these last weeks, with Stéphane Tufféry, for our chapter entitled Statistical Learning in Actuarial Science. The idea was to present credit scoring techniques, from logistic regression to classification trees, random forests, etc. Of course, it is more boring, since we talk about loans and not terrorism. In credit scoring, we consider loan applications, and we have to predict if someone is more likely to be a bad guy or a good guy. The idea is the same: based on some covariates, we need to build a score function that can be related to the probability of being bad. The higher the score, the more likely the person is to be a bad guy. Then, of course, we have to discuss errors, namely false positives (good guys that we think are bad) and false negatives (bad guys that we think are good). From the company’s point of view, you do not want to have bad guys in your portfolio, and from everyone else’s point of view (since everyone believes he is with the good ones, this is a classical optimistic bias), we do not want to be confused with those bad guys. Then we can spend hours on classification curves, and on criteria to assess whether our classifier is good or not, etc. While I was writing the introduction of the chapter, I remember that I found it hard to find proper words (to describe that 0/1 problem). But I did use (like everyone else) the terms good and bad. Like in terrorism. Except that to use this terminology (bad and good), we have to be more specific. In credit scoring, a bad guy is someone who did not pay back, at least once, for instance. But in terrorism, I think it is more difficult to say what a terrorist is.
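
To illustrate the base-rate problem behind Corey’s post, here is a minimal sketch in R, with made-up numbers (the prevalence, sensitivity and specificity below are purely hypothetical),

prevalence  <- 1e-6    # proportion of (real) bad guys in the population
sensitivity <- 0.99    # P(flagged | bad guy)
specificity <- 0.999   # P(not flagged | good guy)
# probability of being flagged, good guys included
p_flagged <- sensitivity*prevalence + (1-specificity)*(1-prevalence)
# Bayes' rule: probability of actually being a bad guy, given a flag
sensitivity*prevalence / p_flagged   # about 0.001: almost all flags are false positives

Even with a very accurate classifier, the rarity of the event means that most of the people flagged are false positives, which is precisely why the good/bad terminology has to be handled with care.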

I mean, in France, we have experienced terrorism too, a few years ago. In December 1996, I was in an RER train, going South, and we had just reached Cité Universitaire when a bomb exploded at Port Royal. The train following mine, I guess. I remember that a couple of days later, I was traveling across Paris, by bus, carrying with me a nice plant of… a plant that you’re not supposed to grow. Say I was carrying sandwiches, as Ted Mosby would put it. So in order to avoid trouble (since I was not supposed to have that kind of plant species), I put it in a large box. I remember that people were staring at me, and some actually asked me what was in the box. So, for some reason, people try to build their own terrorist classifier, based on what they think might be covariates. And dirty trousers, an unshaven face, long hair (yes, I used to have long hair) and a box on the bus were obviously some of them. Note that I don’t blame them, I do the same! After reading Corey’s post this morning, I took the bus. And I saw someone with a ninja sword.

At first, my terrorist classifier put her (yes, I try to have a gender-free terrorist model) in the bad guy class. Then I understood it was an umbrella. So I put her in the super cool geeky category (that only a few can reach).

When I started to teach non-life insurance in Paris, the last part of the course was dedicated to large risks, natural catastrophes, and a hot topic: terrorism. I was giving this course (probably my best experience, ever) in tandem with François Bucchini, who was working at that time for AXA France. The two of us were giving the course together, interacting: I was the boring guy doing the maths, and François was sharing his experience. And at that time, he was involved in the creation of GAREAT, a market structure launched in France in 2002 to propose reinsurance against terrorism (for French companies). And one of the first claims was from the CAV (which is a pun, CAV standing for Comité d’Action Viticole), considered a terrorist group. So, as he told us, be careful of prejudices when you think about terrorism. Cool wine drinkers can be dangerous terrorists…

Actually, I would love to see the covariates used by the NSA to predict whether you’re a bad guy, or a potentially dangerous terrorist. Let us have a guess… You have asked for a visa for Pakistan? Or Afghanistan? Or Libya (no, not Libya, not yet, bad guys still have good friends there)? You have an NRA membership? You bought some heavy metal on iTunes? You still have a stop-ACTA sticker on your blog? You have a blog? You wrote a post including the word terrorist in it?

Note: I am supposed to be in Chicago next week. So if I cannot enter the U.S., we’ll probably know more about potential covariates.

Somewhere else, part 58

A series of interesting writings, worth reading, discovered during the first part of the week,

as well as a few pieces written in French,

Did I miss something?

How old is the oldest person you know?

Last week, we had a discussion with some colleagues about the fact that – in order to prepare for the SOA exams – we do not have time (so far) to mention results on extreme values in our actuarial program. I did give an introduction in my non-life actuarial models class, but it was only an introduction, in three hours, in order to illustrate reinsurance pricing. And I told my students that if they wanted to know more about extreme values, they should start a master’s program in actuarial science and finance, since I will give a course on extremes (and copulas) next winter.

But actually, extreme values are everywhere! For instance, there is a Prudential TV commercial in which people place large, round stickers on a number line to represent the age of the oldest person they know. This forms some kind of histogram. The message is that Prudential prepares you to have adequate money for all these years. And actually, anyone can add his or her own sticker at the Prudential website.

Patrick Honner, on his blog (http://mrhonner.com/…), mentioned this interesting representation. But the idea is not new, as mentioned in a post published three years ago. In 1932, Emil Gumbel gave a talk in France on the “âge limite”. And as he wrote, “one may therefore assume that the distribution of the limiting age – that is, the probability that this age takes a given value – is Gaussian”. In 1932 (not being aware of Fisher and Tippett’s work), he thought that the limiting distribution for a maximum would be Gaussian. But a few years later, he read about Fisher’s work, and observed that “the distribution of an extreme value can be represented, for a sufficiently large number of observations, by the doubly exponential formula, provided that the initial distribution behaves asymptotically like an exponential. The formula becomes exact if the initial distribution is exponential”, as he wrote in 1935. And in 1937, he wrote a paper on “les centenaires” that can also be related to the work of Bortkiewicz on rare events. One should also mention one of the most important papers in extreme value theory, published in 1974 by Balkema and de Haan, on Residual Life Time at Great Age.

Because in this experiment the question is “How Old is the Oldest Person You Know?”, it is the distribution of a maximum. And from the Fisher–Tippett theorem, if we assume that the age is bounded (and that there exists some finite upper limit), then the limiting distribution for the maxima (or, to be more rigorous, for an affine transformation of the maxima) should be a Weibull distribution. And this is what it looks like

> x=seq(0,10,by=.01)   # grid of (positive) values
> plot(-x,dweibull(x,2.25,4),type="l",lwd=2)

As an actuary, the only thing I know about demography is the distribution of the age at death. For instance, consider the following French life table

> alive <- read.table(
+ "https://perso.univ-rennes1.fr/arthur.charpentier/TV8890.csv",
+ sep=";",header=TRUE)$Lx
> nb= -diff(alive)
> ages=0:110
> plot(ages,nb,type="h")

This is the distribution of the age at death in a given population. Which is not the same as the distribution mentioned above! What we look for is the following: given that someone is alive, what could be the distribution of his or her age? Actually, if we assume that the yearly number of births is constant over time (as well as the death probabilities), then we can easily compute the number of people of age x: we take everyone born (exactly) x years ago, and remove all those who already died, at age x, x-1, etc. So the function should be

> probadeath=nb/sum(nb)
> nbx=function(x) 1-sum(probadeath[1:(x+1)])
> surv=Vectorize(nbx)(ages)
> distrage=surv/sum(surv)

which looks like

But this assumption of a constant number of births is not that relevant. And actually, what we need is the distribution of the ages within a population… This is a population pyramid, actually. The French one can be downloaded from http://www.insee.fr/fr/ppp/bases-de-donnees/….

> population <- read.table("popinsee2007.csv",sep=";",header=TRUE)$POPTOT07
> ages=0:107
> plot(ages,population/sum(population),type="h")

(the red line being the one obtained previously, using some natality assumptions). Now, let us use this population to generate acquaintances.

> agemax=function(nsim=1000,size=20){
+ agemax=rep(NA,nsim)
+ for(i in 1:nsim){
+ X=sample(ages,prob=population/sum(population),size=size,replace=TRUE)
+ agemax[i]=max(X)}
+ return(agemax)}

Here, we assume that everyone knows 20 other people, randomly chosen in the entire population, and we return the age of the oldest. And we do that for a large number of people (10,000 simulations below). Here is the distribution we obtain

> XS=agemax(10000,20)
> plot(table(XS)/length(XS),type="h",xlim=c(0,108))

where the red line is a Weibull distribution (a transformed one, actually, since in extreme value theory, the distance to the upper bound of the distribution has a Weibull density),

> library(MASS)
> fit=fitdistr(108-XS,dweibull,list(shape=1,scale=1))
> lines(ages,dweibull(108-ages,fit$estimate[1],fit$estimate[2]),col="red")

Which is quite close to the distribution obtained in the commercial, don’t you think? But still, it should be possible to be more accurate, since people should also think of their parents, or grandparents. So I guess it could be possible to build a more accurate algorithm, to get something closer to the distribution obtained on the Prudential website. But first, let us wait to have more stickers, more observations… and then I’ll be back to play with it!