With Russell Davidson, I will give a short course on Foundations of Data Science at Lancaster University (actually, online), as part of the Data Science and Econometrics program.
Pareto models for top incomes and wealth
Our article, Pareto models for top incomes and wealth, written with Emmanuel Flachaire, has just been published in the Special Issue of The Journal of Economic Inequality on “Finding the Upper Tail”.
Top incomes are often related to the Pareto distribution. To date, economists have mostly used the Pareto Type I distribution to model the upper tail of income and wealth distributions. It is a parametric distribution with interesting properties that can easily be linked to economic theory. In this paper, we first show that modeling top incomes with the Pareto Type I distribution can lead to biased estimation of inequality, even with millions of observations. Then, we show that the Generalized Pareto distribution and, even more, the Extended Pareto distribution, are much less sensitive to the choice of the threshold, and can therefore provide more reliable results. We discuss different types of bias that could be encountered in empirical studies and provide some guidance for practice. To illustrate, two applications are investigated, on the distribution of income in South Africa in 2012 and on the distribution of wealth in the United States in 2013.
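To give a flavour of the threshold-sensitivity issue discussed in the paper, here is a minimal, hypothetical sketch (not taken from the paper): data are simulated from a distribution that is only asymptotically Pareto (the absolute value of a Student t with 2 degrees of freedom, so the true tail index is 2), and the Pareto Type I tail index is estimated by maximum likelihood above several thresholds,

set.seed(123)
x = abs(rt(1e6, df = 2))                       # heavy-tailed, tail index 2, but not exactly Pareto I
u = quantile(x, c(.90, .95, .99, .999))        # a few candidate thresholds
alpha_hat = sapply(u, function(u0) 1/mean(log(x[x > u0]/u0)))   # Pareto I MLE above u0
round(alpha_hat, 2)                            # the estimate drifts with the threshold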
“The more egalitarian and prosperous a country, the fewer women you find in science”
I came across this sentence this afternoon, in an article in La Presse,
“The more egalitarian and prosperous a country, the fewer women you find in science. It is difficult to understand”
and I admit I was surprised. Surprised that this relationship would be so strong, and so significant, that it can be stated as a law. I do not know how equality and prosperity are measured, but I tried life expectancy at birth, from http://data.uis.unesco.org/, and “researchers FTE” (the number of researchers in full-time equivalents), women divided by the total, to get the proportion of women in “Science, technology and innovation”. I have a hard time seeing the decreasing relationship…
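For the record, here is a minimal sketch of the kind of computation involved, assuming the two UNESCO indicators have been downloaded as csv files (the file and column names below are hypothetical; the actual extracts from http://data.uis.unesco.org/ will differ),

life = read.csv("life_expectancy_at_birth.csv")       # hypothetical columns: country, life_exp
res  = read.csv("researchers_fte_by_sex.csv")         # hypothetical columns: country, female_fte, total_fte
base = merge(life, res, by = "country")
base$ratio_women = base$female_fte / base$total_fte   # share of women among researchers (FTE)
plot(base$life_exp, base$ratio_women,
     xlab = "life expectancy at birth", ylab = "share of women in research (FTE)")
abline(lm(ratio_women ~ life_exp, data = base), col = "red")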
Unless we remove a dozen or so countries with low life expectancy at birth (including Burundi, Ethiopia, Gambia, India, Madagascar, Pakistan and Togo).
If anyone knows how to see this astonishing decreasing relationship, I am interested!
R0 and the exponential growth of a pandemic, an update
A few days ago, I wrote a blog post – R0 and the exponential growth of a pandemic – where I was trying to visualize exponential growth, in the context of a pandemic. After giving it some thought, the previous graph might not be the best one to see contagion based on exponential growth.
Having graphs evolving from left to right gives us the (false) idea of some temporal evolution, which is not necessarily correct. It simply means that contaminated people will contaminate other people, and we look at the number of iterations here. So maybe some concentric dots would look better.
And from a technical perspective, what I did was fun, but probably too complicated. In my previous post, I wanted to pack k identical disks optimally into a unit circle. On http://hydra.nat.uni-magdeburg.de/packing, it was possible to get the “best known packings of equal circles in a circle”, with the coordinates. But as we will see, we can use something much simpler here.
My idea is now to create a picture like the one below, with concentric colored dots. In the center, we have the first people that were contaminated, and then we can see the transmission, somehow.
From a technical perspective, I use a different strategy here. I decided to draw random points, uniformly. The problem with randomness, as with Monte Carlo methods, is its naturally high discrepancy: it is very likely that some disks will overlap. It is not a major issue, but it might distort the message. So I decided to use a low-discrepancy sequence, such as Halton‘s sequence.
library(randtoolbox)
S = halton(n=5000, dim = 2)*2-1
Here, I have disk coordinates in [-1,+1]^2. Then, to get disks in a circle, I simply compute the (squared) distance to the origin (0,0),
D0 = S[,1]^2+S[,2]^2
and take the ranks. If I want to visualize k=200 people, I consider the 200 smallest ranks. To get concentric rings, the i-th ring containing k_i\approx R_0^{2i-1} individuals, I use the cumulative counts \bar k_1,\bar k_2,\bar k_3,\ldots as thresholds on the ranks, where \bar k_i=\bar k_{i-1}+k_i,
R0 = rank(D0,ties.method = "random")
C0 = as.numeric(cut(R0,c(0,cumsum(k)+.5)),100000)
where
R0=1.8
k=round(R0^(seq(1,9,by=2)))
Then we can plot the dots, with appropriate colors,
colrpal = rev(heat.colors(length(k)))  # assumed palette: colrpal is not defined in the post, any palette with length(k) colors works
points(S,pch=19,col=colrpal[C0],cex=.75)
And of course, we can try that with different values of R_0,
R0=2.2
k=round(R0^(seq(1,9,by=2)))
kmax=max(k)
S = halton(n=5000, dim = 2)*2-1
plot(S,col="light yellow",axes=FALSE,xlab="",ylab="",xlim=c(-1.3,1),ylim=c(-1,1),cex=.75,pch=19)
D0 = S[,1]^2+S[,2]^2
R0 = rank(D0,ties.method = "random")
C0 = as.numeric(cut(R0,c(0,cumsum(k)+.5)),100000)
points(S,pch=19,col=colrpal[C0],cex=.75)
Modeling Joint Lives within Families
With Olivier Cabrignac and Ewen Gallic, we recently uploaded a research paper, entitled “Modeling Joint Lives within Families”
Family history is usually seen as a significant factor that insurance companies look at when someone applies for a life insurance policy. Where it is used, a family history of cardiovascular diseases, death by cancer, or high blood pressure and diabetes could result in higher premiums or no coverage at all. In this article, we use massive (historical) data to study dependencies between life lengths within families. While joint life contracts (between a husband and a wife) have long been studied in the actuarial literature, little is known about child and parent dependencies. We illustrate those dependencies using 19th century family trees in France, and quantify the implications for annuity computations. For parents and children, we observe a modest but significant positive association between life lengths. It yields different estimates for remaining life expectancy, present values of annuities, or whole life insurance guarantees, given information about the parents (such as the number of parents alive). A similar but weaker pattern is observed when using information on grandparents.
The paper is online on https://arxiv.org/abs/2006.08446.
Slides 7 – Poisson model
Lasso Regression (home made)
Again, this post is related to my MAT7381 course, where we will see that it is actually possible to write our own code to compute Lasso regression, \min\left\lbrace\frac{1}{2}\|\mathbf{y}-\mathbf{X}\mathbf{\beta}\|_{\ell_2}^2+\lambda\|\mathbf{\beta}\|_{\ell_1}\right\rbrace. We have to define the soft-thresholding function S(z,\gamma)=\text{sign}(z)\cdot(|z|-\gamma)_+=\begin{cases}z-\gamma&\text{ if }z>\gamma\\z+\gamma&\text{ if }z<-\gamma\\0&\text{ if }|z|\leq\gamma\end{cases}. The R function would be
soft_thresholding = function(x,a){
  sign(x) * pmax(abs(x)-a,0)
}
To solve our optimization problem, set \mathbf{r}_j=\mathbf{y} - \left(\beta_0\mathbf{1}+\sum_{k\neq j}\beta_k\mathbf{x}_k\right)=\mathbf{y}-\widehat{\mathbf{y}}^{(j)}
so that the optimization problem can be written, equivalently
\min\left\lbrace\frac{1}{2n}\sum_{i=1}^n \left[r_{i,j}-\beta_j x_{i,j}\right]^2+\lambda |\beta_j|\right\rbrace
hence, up to a term that does not depend on \beta_j, \min\left\lbrace\frac{1}{2n}\left[\beta_j^2\|\mathbf{x}_j\|^2-2\beta_j\mathbf{r}_j^T\mathbf{x}_j\right]+\lambda |\beta_j|\right\rbrace
and one gets
\beta_{j,\lambda} = \frac{1}{\|\mathbf{x}_j\|^2}S(\mathbf{r}_j^T\mathbf{x}_j,n\lambda)
or, writing the sums explicitly,
\beta_{j,\lambda} = \frac{1}{\sum_i x_{ij}^2}S\left(\sum_ix_{i,j}[y_i-\widehat{y}_i^{(j)}],n\lambda\right)
Again, if there are weights \mathbf{\omega}=(\omega_i), the coordinate-wise update becomes
\beta_{j,\lambda,{\color{red}{\omega}}} = \frac{1}{\sum_i {\color{red}{\omega_i}}x_{ij}^2}S\left(\sum_i{\color{red}{\omega_i}}x_{i,j}[y_i-\widehat{y}_i^{(j)}],n\lambda\right)
The code to compute this componentwise descent is
lasso_coord_desc = function(X,y,beta,lambda,tol=1e-6,maxiter=1000){
  beta = as.matrix(beta)
  X = as.matrix(X)
  omega = rep(1/length(y),length(y))          # uniform weights
  obj = numeric(length=(maxiter+1))
  betalist = list(length(maxiter+1))
  betalist[[1]] = beta
  beta0list = numeric(length(maxiter+1))
  beta0 = sum(y-X%*%beta)/(length(y))         # intercept, given the current beta
  beta0list[1] = beta0
  for (j in 1:maxiter){
    for (k in 1:length(beta)){
      # partial residuals, leaving out the k-th covariate
      r = y - X[,-k]%*%beta[-k] - beta0*rep(1,length(y))
      # coordinate-wise update, via soft-thresholding
      beta[k] = (1/sum(omega*X[,k]^2)) *
        soft_thresholding(t(omega*r)%*%X[,k],length(y)*lambda)
    }
    beta0 = sum(y-X%*%beta)/(length(y))
    beta0list[j+1] = beta0
    betalist[[j+1]] = beta
    # penalized objective, to monitor convergence
    obj[j] = (1/2)*(1/length(y))*norm(omega*(y - X%*%beta - beta0*rep(1,length(y))),'F')^2 +
      lambda*sum(abs(beta))
    if (norm(rbind(beta0list[j],betalist[[j]]) - rbind(beta0,beta),'F') < tol) { break }
  }
  return(list(obj=obj[1:j],beta=beta,intercept=beta0))
}
For instance, consider the following (simple) dataset, with three covariates
chicago = read.table("http://freakonometrics.free.fr/chicago.txt",header=TRUE,sep=";")
that we can “normalize” (or “standardize”)
X = model.matrix(lm(Fire~.,data=chicago))[,2:4]
for(j in 1:3) X[,j] = (X[,j]-mean(X[,j]))/sd(X[,j])
y = chicago$Fire
y = (y-mean(y))/sd(y)
To initialize the algorithm, use the OLS estimate
beta_init = lm(Fire~0+.,data=chicago)$coef
For instance
lasso_coord_desc(X,y,beta_init,lambda=.001)
$obj
[1] 0.001014426 0.001008009 0.001009558 0.001011094 0.001011119 0.001011119

$beta
          [,1]
X_1  0.0000000
X_2  0.3836087
X_3 -0.5026137

$intercept
[1] 2.060999e-16
and we can get the standard Lasso coefficient-path plot by looping over values of \lambda, as sketched below.
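A minimal sketch of such a loop (the grid of \lambda values below is arbitrary; the plot is only meant to reproduce the usual coefficient-path figure),

# loop over a grid of lambda values and store the estimated coefficients
lambdas = exp(seq(log(.001), log(.5), length=51))
path = sapply(lambdas, function(l) lasso_coord_desc(X,y,beta_init,lambda=l)$beta)
plot(lambdas, path[1,], type="l", log="x", ylim=range(path),
     xlab=expression(lambda), ylab="coefficients")
for(i in 2:nrow(path)) lines(lambdas, path[i,], col=i)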
October, grant proposal season
In 2012, Danielle Herbert, Adrian Barnett, Philip Clarke and Nicholas Graves published an article entitled “On the time spent preparing grant proposals: an observational study of Australian researchers”, whose conclusions were picked up in Nature under a more explicit title, “Australia’s grant system wastes time”! In this study, they looked at 3,700 grant applications sent to the National Health and Medical Research Council, and showed that each application represented 37 working days: “Extrapolating this to all 3,727 submitted proposals gives an estimated 550 working years of researchers’ time (95% confidence interval, 513-589)”. But at a time when I have to write my own funding application, I find that losing 37 days of work is huge. Because it has become the norm! And somehow, it’s sad.
Forget about the crazy idea that I would rather, in fact, spend more time doing my research. The thought I had this morning was that it is rather sad that, in the Faculty of Science, mathematicians are asked to spend a considerable amount of time, comparable to that required of physicists or chemists, for often smaller amounts of funding… And I thought this could easily be verified. We start by retrieving the discipline codes
url="http://www.nserc-crsng.gc.ca/NSERC-CRSNG/FundingDecisions-DecisionsFinancement/ResearchGrants-SubventionsDeRecherche/ResultsGSC-ResultatsCSS_eng.asp" download.file(url,destfile = "GSC.html") library(XML) tables=readHTMLTable("GSC.html") GSC=tables[[1]]$V1 GSC=as.character(GSC[-(1:2)]) namesGSC=tables[[1]]$V2 namesGSC=as.character(namesGSC[-(1:2)]) |
We’re going to need a small function to remove the $ and other symbols that pollute the data (and prevent the values from being treated as numbers)
library(stringr)
Correction = function(x) as.numeric(gsub('[$,]', '', x))
We will now read the 12 pages and harvest the data (we will just take the 2017 data, but we could go back a few years)
grants = function(gsc){
  url=paste("http://www.nserc-crsng.gc.ca/NSERC-CRSNG/FundingDecisions-DecisionsFinancement/ResearchGrants-SubventionsDeRecherche/ResultsGSCDetail-ResultatsCSSDetails_eng.asp?Year=2017&GSC=",gsc,sep="")
  download.file(url,destfile = "GSC.html")
  library(XML)
  tables=readHTMLTable("GSC.html")
  X=as.character(tables[[1]]$"Awarded Amount")
  A=as.numeric(Vectorize(Correction)(X))
  return(c(median(A),mean(A),as.numeric(quantile(A,(1:99)/100))))
}
M=Vectorize(grants)(GSC[1:12])
The average amounts of individual grants can be compared,
barplot(M[2,])
In mathematics, the average grant amount is $24400. If we normalize by this quantity, we obtain
barplot(M[2,]/M[2,8])
In other words, the average amount of an (individual) grant in chemistry (to pay for students, conferences, etc.) is twice that in mathematics, and it is 60% higher in physics than in maths…
We can also look at the median values (rather than the averages)
barplot(M[1,])
Here again, mathematics has the lowest values…
barplot(M[1,]/M[1,8])
in comparable proportions. If we think that the time spent writing should be proportional to the amount allocated, we should spend half as much time in math as in chemistry.
Cumulative distribution functions can also be plotted,
plot(M[3:101,8],(1:99)/100,type="s",xlim=range(M))
lines(M[3:101,5],(1:99)/100,type="s",col="red")
lines(M[3:101,4],(1:99)/100,type="s",col="blue")
with math in black, physics in red, and chemistry in blue. What is surprising is the bottom part: a “bad” researcher in chemistry or physics will earn more than the median researcher in mathematics…
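This claim can be checked directly on the matrix M built above (rows 3 to 101 being the 1st to 99th percentiles, and columns 8, 5 and 4 being mathematics, physics and chemistry, as in the plot),

# compare the 10th percentile in chemistry and physics with the median grant in mathematics
M[12,4] > M[1,8]   # chemistry, row 12 = 10th percentile, vs math median
M[12,5] > M[1,8]   # physics vs math median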
Now that my intuition is confirmed, I have to get back to writing my proposal… and explain to my coauthors that I have to postpone some research projects because, well, you know…
Non-parametric tests and simulations
In the last statistics class, we presented goodness-of-fit tests for distributions. We illustrated those tests with the heights of individuals (already used to present distribution fitting, and density estimation), corresponding to the sample \{x_1,\dots,x_n\}.
> Davis=read.table(
+ "http://socserv.socsci.mcmaster.ca/jfox/Books/Applied-Regression-2E/datasets/Davis.txt")
> Davis[12,c(2,3)]=Davis[12,c(3,2)]
> X=Davis$height
> n=length(X)
We will write x_{(1)},x_{(2)},\dots,x_{(n)} for the order statistics, in the sense that x_{(1)}\leq x_{(2)}\leq\cdots\leq x_{(n)}.
Among the graphical tools, we saw the PP plot (probability-probability plot) and the QQ plot (quantile-quantile plot). The code to create a PP plot can be the following
> PP=function(Y,F=pnorm){
+ n=length(Y)
+ x=F(sort(Y))
+ y=seq(1/n/2,1-1/n/2,by=1/n)
+ return(data.frame(x=x,y=y))
+ }
which plots (up to a small detail) the point cloud \left\{\left(F_0(x_{(i)}),\frac{i}{n}\right),\ i=1,\dots,n\right\}
and the one for the QQ plot would be
> QQ=function(Y,Q=qnorm){
+ n=length(Y)
+ x=Q(seq(1/n/2,1-1/n/2,by=1/n))
+ y=sort(Y)
+ return(data.frame(x=x,y=y))
+ }
which plots (again, up to a small detail) the point cloud \left\{\left(F_0^{-1}\left(\frac{i}{n}\right),x_{(i)}\right),\ i=1,\dots,n\right\}
where F_0 is the distribution we want to test, in the sense that we test H_0:F=F_0 against the alternative hypothesis H_1:F\neq F_0.
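As a quick illustration (a minimal sketch; the figures shown in the original post are not reproduced here), the two functions can be applied to the height data, using a normal distribution with estimated parameters as the distribution under H_0,
> F0 = function(x) pnorm(x, mean=mean(X), sd=sd(X))   # fitted normal cdf, assumed H0
> Q0 = function(p) qnorm(p, mean=mean(X), sd=sd(X))   # and its quantile function
> plot(PP(X, F=F0)); abline(a=0, b=1, col="red")
> plot(QQ(X, Q=Q0)); abline(a=0, b=1, col="red")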
Thesis defense on inequality
This Thursday, I sat on the thesis committee of Fattouma Souissi, as rapporteur (external examiner), at the Université de Montpellier, on PLS-Gini regressions. The manuscript of the thesis is now online.
Insurance pricing, and selection at university
In insurance, even though premiums are computed ex ante, we distinguish between a priori and a posteriori pricing. In a priori pricing, so-called explanatory variables are used to assign each policyholder to a rating class. For instance, in motor insurance, age, gender, place of residence and engine power are used to predict, a priori, the claims cost for the coming year. The pricing principle relies on the law of large numbers: the insurer asks for a premium equal to what it expects to pay, on average, within each risk class (the company is then, on average, at financial equilibrium). But one can also use so-called a posteriori pricing, based on past information (for instance the number of claims over the last three years, information that can be found, in France, in the bonus-malus mechanism). Although I oppose the two here, one can imagine that both kinds of information are interesting, and perhaps complementary. On the other hand, there is no ex post pricing in this mechanism. There may be a bonus offered next year (and not retroactively), but the insurer cannot backpedal and say “actually, given the way you drive, the premium you owe us for the current year is not 600 euros, but 900 euros”. The only possibility is to renegotiate the amount of the premium at renewal time, in light of experience.
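To make the a priori principle concrete, here is a small, entirely hypothetical sketch (simulated data, made-up frequencies and claim amounts, none of which come from the post): the a priori pure premium of a class is simply the average yearly loss within that class,

set.seed(1)
n = 1e5
classe = sample(c("18-25","26-60","60+"), n, replace=TRUE, prob=c(.15,.70,.15))
freq = c("18-25"=.12, "26-60"=.07, "60+"=.09)[classe]   # assumed claim frequencies, per class
S = rpois(n, freq) * 1500                               # yearly loss, flat 1500 per claim (assumed)
tapply(S, classe, mean)                                 # a priori pure premium, per rating class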
Continue reading Insurance pricing, and selection at university
2015, October 21
I am almost sure I have already lived that day before,
“A 99% TVaR is generally a 99.6% VaR”
Almost 6 years ago, I posted a brief comment on a sentence I found surprising at the time, discovered in a report claiming that
the expected shortfall […] at the 99 % level corresponds quite closely to the […] value-at-risk at a 99.6% level
which was inspired by a remark in the Swiss Experience report,
expected shortfall […] on a 99% confidence level […] corresponds to approximately 99.6% to 99.8% Value at Risk
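As a quick sanity check, here is a minimal sketch under a Gaussian assumption (made here only for illustration, and not claimed by the reports quoted above): for a standard normal risk, the 99% expected shortfall is \varphi(\Phi^{-1}(0.99))/0.01, and the corresponding VaR level is indeed close to 99.6%,

alpha = .99
ES = dnorm(qnorm(alpha))/(1-alpha)   # expected shortfall of a N(0,1) risk, about 2.665
pnorm(ES)                            # VaR level matching that value, about 0.996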
Another Interactive Map for the Cholera Dataset
Following my previous post, François (aka @FrancoisKeck) posted a comment mentioning another package I could use to get an interactive map, the rleafmap package. And the heatmap is easy to include here.
The first part is still the same, to get the data,
> require(rleafmap) > library(sp) > library(rgdal) > library(maptools) > library(KernSmooth) > setwd("/home/arthur/Documents/") > deaths <- readShapePoints("Cholera_Deaths") > df_deaths <- data.frame(deaths@coords) > coordinates(df_deaths)=~coords.x1+coords.x2 > proj4string(df_deaths)=CRS("+init=epsg:27700") > df_deaths = spTransform(df_deaths,CRS("+proj=longlat +datum=WGS84")) > df=data.frame(df_deaths@coords)
To get a first visualisation, use
> stamen_bm <- basemap("stamen.toner") > j_snow <- spLayer(df_deaths, stroke = FALSE) > writeMap(stamen_bm, j_snow, width = 1000, height = 750, setView = c( mean(df[,1]),mean(df[,2])), setZoom = 14)
and again, using the + and the – in the top left area, we can zoom in, or out. Or we can do it manually,
> writeMap(stamen_bm, j_snow, width = 1000, height = 750, setView = c( mean(df[,1]),mean(df[,2])), setZoom = 16)
To get the heatmap, use
> library(spatstat) > library(maptools) > win <- owin(xrange = bbox(df_deaths)[1,] + c(-0.01,0.01), yrange = bbox(df_deaths)[2,] + c(-0.01,0.01)) > df_deaths_ppp <- ppp(coordinates(df_deaths)[,1], coordinates(df_deaths)[,2], window = win) > > df_deaths_ppp_d <- density.ppp(df_deaths_ppp, sigma = min(bw.ucv(df[,1]),bw.ucv(df[,2]))) > df_deaths_d <- as.SpatialGridDataFrame.im(df_deaths_ppp_d) > df_deaths_d$v[df_deaths_d$v < 10^3] <- NA > stamen_bm <- basemap("stamen.toner") > mapquest_bm <- basemap("mapquest.map") > j_snow <- spLayer(df_deaths, stroke = FALSE) > df_deaths_den <- spLayer(df_deaths_d, layer = "v", cells.alpha = seq(0.1, 0.8, length.out = 12)) > my_ui <- ui(layers = "topright") > writeMap(stamen_bm, mapquest_bm, j_snow, df_deaths_den, width = 1000, height = 750, interface = my_ui, setView = c( mean(df[,1]),mean(df[,2])), setZoom = 16)
What is amazing here are the options in the top right corner. For instance, we can remove some layers, e.g. remove the points
or to change the background
To get an html file, instead of a standard visualisation in RStudio, use
> writeMap(stamen_bm, mapquest_bm, j_snow, df_deaths_den, width = 450, height = 350, interface = my_ui, setView = c( mean(df[,1]),mean(df[,2])), setZoom = 16, directView ="browser")
which will generate the html page above (as well as some additional files, actually). Awesome, isn’t it?