Category Archives: CAS in R

How far from a bank do we live?

As part of the R projects for the Data Science for Actuaries program, I will keep putting online pieces of code that can be useful, here in a spatial context. The last post, on mapping the Brexit vote, was picked up (and much improved) on the site of our neighbours at rgeomatic. Today, I will draw on the work of Etienne Flichy, who combines the distribution of the population over the territory with the location of bank branches.

We are talking about banks here, but with a database of hairdressers, bakeries, etc., we could do exactly the same! (which is to say we will have some fun when the SIRENE database is opened up, in the coming weeks). We will assume that we have a database with all the banks, geocoded. For the exercise, we will use the location of bank branches, using the data from cbanque.com. It is fairly easy to scrape the site, once you see how the pages are built, e.g. http://cbanque.com/pratique/agences/credit-cooperatif/35/. There we collect the (postal) addresses, and we can use https://adresse.data.gouv.fr/csv/ (or various other tools) to geocode them.
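For the geocoding step, here is a minimal sketch of what it could look like in R, using the JSON API behind adresse.data.gouv.fr (the helper function and the test address below are my own illustrative assumptions, not the code used for the post):

library(jsonlite)
# geocode one postal address with the JSON API behind adresse.data.gouv.fr
geocode = function(address){
  url = paste0("https://api-adresse.data.gouv.fr/search/?q=",
               URLencode(address), "&limit=1")
  res = fromJSON(url)
  # GeoJSON returns coordinates as (longitude, latitude)
  unlist(res$features$geometry$coordinates)
}
geocode("4 place Jussieu Paris")  # made-up test address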

Continue reading How far from a bank do we live?

Mapping the Brexit vote

These days, I am deep into the R projects that I assigned for the Data Science for Actuaries program. Since I received plenty of interesting things, I thought I could revisit some of them in posts on the blog (which also lets me check, along the way, that the codes actually run).

To start the series (yes, there will be more), Flavien Thery proposed a map of the Brexit vote. The first step was to get a base map (we will use the second administrative level here)

library(sp)
library(raster)
# GADM 2.8 administrative boundaries, level 2, for the United Kingdom
download.file("http://biogeo.ucdavis.edu/data/gadm2.8/rds/GBR_adm2.rds", "GBR_adm2.rds")
UK = readRDS("GBR_adm2.rds")
# fix a missing HASC code (Northern Ireland)
UK@data[159, "HASC_2"] = "GB.NR"
plot(UK, xlim = c(-4, -2), ylim = c(50, 59), main = "UK areas")

This map is a bit surprising... we are looking at the United Kingdom, which includes England, Wales, Scotland and Northern Ireland. And the Republic of Ireland is not on the map... We might as well add it (it will look nicer once we put some blue in for the water)

download.file("http://biogeo.ucdavis.edu/data/gadm2.8/rds/IRL_adm0.rds","IRL_adm0.rds")
IRL=readRDS("IRL_adm0.rds")
plot(IRL,add=TRUE)

And since we have started adding Ireland, why not France, which is just at the bottom right, and should be slightly visible...

download.file("http://biogeo.ucdavis.edu/data/gadm2.8/rds/FRA_adm0.rds","FRA_adm0.rds")
FR=readRDS("FRA_adm0.rds")
plot(FR,add=TRUE)

Right, let us stop there...

We can then get the results of the Brexit referendum (which I store on the blog, to save a little time)

loc="http://freakonometrics.hypotheses.org/files/2016/12/EU-referendum-result-data.csv"
referendum=read.csv(loc,header=TRUE,dec=".",sep=",",stringsAsFactors = FALSE)
referendum=referendum[c(3,6,13,14)]
library(plyr)
referendum=ddply(referendum,.(Region,HASC_code),summarise,Remain=sum(Remain),Leave=sum(Leave))

We can check that Leave won, with 51.89% of the votes cast (consistent with what Wikipedia says)

> sum(referendum$Leave)/(sum(referendum$Leave)+sum(referendum$Remain))
[1] 0.5189184

We can then see, region by region, whether Leave or Remain won, using

# drop Northern Ireland (handled separately below) and Gibraltar
referendum = referendum[referendum$Region != "Northern Ireland", ]
referendum = referendum[referendum$HASC_code != "Gibraltar", ]
row.names(referendum) = seq(1, nrow(referendum), 1)
# red where Leave won, blue where Remain won
leave_or_remain = cbind(referendum, "Brexit?" = 0)
leave_or_remain[, "Brexit?"] = ifelse(leave_or_remain$Remain < leave_or_remain$Leave,
                                      rgb(1, 0, 0, .7), rgb(0, 0, 1, .7))
map_data = data.frame(UK@data)
map_data = cbind(map_data, "Brexit" = 0)
for (i in 1:nrow(map_data)){
  if (map_data[i, "NAME_1"] == "Northern Ireland"){
    # Northern Ireland (which voted Remain) is colored directly
    map_data[i, "Brexit"] = "blue"
  } else {
    map_data[i, "Brexit"] = as.character(
      leave_or_remain[leave_or_remain$HASC_code == map_data$HASC_2[i], "Brexit?"])
  }
}
plot(UK, col = map_data$Brexit, border = "gray1",
     xlim = c(-4, -2), ylim = c(50, 59), main = "How did the UK vote?", bg = "#A6CAE0")
plot(IRL, col = "lightgrey", border = "gray1", add = TRUE)
plot(FR, col = "lightgrey", border = "gray1", add = TRUE)
legend(-1, 59, c("Leave", "Remain"), fill = c("red", "blue"), bty = "n")

(we add a small legend to make the map clearer). But we can go further, and show the percentage obtained in each region (the Remain share, in the code below). For that, we can use the cartography package

library(cartography)
# diverging palette: red for low Remain shares, green for high ones
cols <- carto.pal(pal1 = "red.pal", n1 = 5, pal2 = "green.pal", n2 = 5)
map_data = data.frame(UK@data)
map_data = cbind(map_data, "Percentage_Remain" = 0)
for (i in 1:nrow(map_data)){
  if (map_data[i, "NAME_1"] == "Northern Ireland"){
    # Northern Ireland is not in the referendum file: 55.78% voted Remain
    map_data[i, "Percentage_Remain"] = 55.78
  } else {
    idx = leave_or_remain$HASC_code == map_data$HASC_2[i]
    map_data[i, "Percentage_Remain"] = 100 * leave_or_remain[idx, "Remain"] /
      (leave_or_remain[idx, "Remain"] + leave_or_remain[idx, "Leave"])
  }
}
plot(UK, col = "grey", border = "gray1", xlim = c(-4, -2), ylim = c(50, 59), bg = "#A6CAE0")
plot(IRL, col = "lightgrey", border = "gray1", add = TRUE)
plot(FR, col = "lightgrey", border = "gray1", add = TRUE)
# overlay the choropleth of Remain percentages
choroLayer(spdf = UK,
           df = map_data,
           var = "Percentage_Remain",
           breaks = seq(0, 100, 10),
           col = cols,
           legend.pos = "topright",
           legend.title.txt = "",
           legend.values.rnd = 2,
           add = TRUE)

(here again, a legend helps to read the map). Fun, isn't it?

R in Insurance, 2017

Following the successful conferences in London (2013, 2014, 2016) and in Amsterdam (2015), the next edition will take place in Paris. R in Insurance 2017 will take place at ENSAE, on June 8.

This one-day conference will focus again on applications in insurance and actuarial science that use R, the lingua franca for statistical computation. The intended audience of the conference includes both academics and practitioners who are active or interested in the applications of R in insurance. The two invited speakers are Katrien Antonio (KU Leuven) and Julie Seguela (Covea). It will be a nice event!

R training at CIMA, in Gabon

I will soon be spending a week in Gabon, at the headquarters of CIMA, the Conférence Interafricaine des Marchés d'Assurance, for an R training course.

Since I want the course to be as interactive as possible, I have not prepared any slides. I will nevertheless put documents online, if needed. Last week, I wrote a post explaining how to import mortality tables. We will work on those datasets on Monday, to learn how to handle the basic functions of R.

Otherwise, as an introduction, I would mention a few books or lecture notes, in pdf,

  • “R pour les débutants” by Emmanuel Paradis (pdf)
  • “Introduction à la programmation en R, Quatrième édition” by Vincent Goulet (pdf)
  • “Brise Glace-R (ouvrir la voie aux pôles statistiques)” by Andrew Robinson and Arnaud Schloesing (pdf)
  • “Introduction à R” by Julien Barnier (pdf)
  • “Aide mémoire R” by Mayeul Kauffmann (pdf)
  • “Lire ; Compter ; Tester… avec R” (pdf)
  • “L’Actuariat avec R” by Arthur Charpentier and Christophe Dutang (pdf)

and I can also mention the slides of Ewen Gallic,

(regarding those last slides, I doubt we will talk about ggplot2, though; I think we will stick to the basic functions). To be continued...

How to import some parts of a large database

In the introduction of Computational Actuarial Science with R, there was a short paragraph on how we could import only some parts of a large database, by selecting specific variables. The trick was to use the following

read.table.select.columns = function(datatablename, I, sep = ";"){
  # read a single row first, just to get the column names
  datanc = read.table(datatablename, header = TRUE, sep = sep, skip = 0, nrows = 1)
  # colClasses = "NULL" tells read.table to skip a column,
  # while NA lets it guess the class of the columns we keep
  mycols = rep("NULL", ncol(datanc))
  names(mycols) = names(datanc)
  mycols[I] = NA
  datat = read.table(datatablename, header = TRUE, sep = sep, colClasses = mycols)
  return(datat)
}

For instance, if we use the same dataset as in the introduction, we can import only two variables of interest,

> loc="http://myweb.fsu.edu/jelsner/extspace/extremedatasince1899.csv"
> dt1=read.table.select.columns(loc,c("Region",
"Wmax"),sep=",")
> head(dt1,10)
    Region      Wmax
1    Basin 105.56342
2    Basin  40.00000
3    Basin  35.41822
4    Basin  51.06743
5  Florida  87.34328
6    Basin  96.64138
7     Gulf  35.41822
8       US  35.41822
9       US  87.34328
10      US 106.35318
> dim(dt1)
[1] 2100    2

Continue reading How to import some parts of a large database

Generating Hurricanes with a Markov Spatial Process

The National Hurricane Center (NHC) collects datasets with all storms in the North Atlantic, the North Atlantic Hurricane Database (HURDAT). For all storms, we have the location of the storm, every six hours (at midnight, six a.m., noon and six p.m.). Note that we also have the date, the maximal wind speed – over a 6 hour window – and the pressure in the eye of the storm.
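To fix ideas, here is a toy version of that six-hourly track structure (all values below are made up, just to illustrate the layout), together with the six-hour displacements that a Markov spatial process would model:

# toy track: one row per six-hour observation (all values made up)
track = data.frame(
  time     = seq(as.POSIXct("2012-10-22 00:00", tz = "UTC"),
                 by = "6 hours", length.out = 5),
  lat      = c(14.3, 14.9, 15.6, 16.5, 17.5),      # latitude, degrees North
  lon      = c(-77.4, -77.1, -76.9, -76.7, -76.7), # longitude, degrees East
  wind     = c(25, 30, 35, 45, 60),                # max wind speed (knots)
  pressure = c(1005, 1003, 1000, 995, 987)         # pressure in the eye (mb)
)
# six-hour displacements, i.e. the transitions a Markov spatial process models
cbind(dlat = diff(track$lat), dlon = diff(track$lon))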

It is possible to run the following function

library(XML)
extract.track=function(year=2012,p=TRUE){

Continue reading Generating Hurricanes with a Markov Spatial Process

R package for Computational Actuarial Science

A webpage for the book is now hosted on

http://cas.uqam.ca/

So far, it is a very basic page, but information regarding the package can be found there. For instance, to install the package, with all the datasets, the R code is

> install.packages("CASdatasets", repos = "http://cas.uqam.ca/pub/R/")

The reference manual provides a description of all datasets.
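Once the package is installed, usage is standard; a quick sketch (the dataset name below is, if I remember correctly, one of the French motor datasets shipped with the package – check the reference manual for the exact names):

> library(CASdatasets)
> ?CASdatasets        # overview of the package and its datasets
> data(freMTPLfreq)   # e.g. a French motor third-party liability dataset
> str(freMTPLfreq)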

Bayesian Wizardry for Muggles

Monday, I will be giving the closing talk of the R in Insurance Conference, in London, on Bayesian Computations for Actuaries, or to be more specific, Getting into Bayesian Wizardry… (with the eyes of a muggle actuary). The animated version of the slides (since we will spend some time on MCMC algorithms, I thought animated graphs could be more informative) can be downloaded from here.

Those slides are based on the chapter written with Ben Escoto for the Computational Actuarial Science with R book, and on some previous work.

Computational Actuarial Science

After some delay, the book Computational Actuarial Science with R is now announced for July 2014. I don’t know if we will be able to get copies for the R in Insurance conference, in London, but I guess everyone is working on it. And kindly, CRC sent me the following flyer, with a discount code to save 20% (when ordering on CRC’s website).

Computational Actuarial Science

Last week, we went through the book, completely, one last time, before sending it back to the publisher, with some comments and remarks, before publication! So, this is it: the book will finally appear soon! It was scheduled for this week actually, but… you know. It should appear sometime by the end of May, or the beginning of June. I will keep you posted on this blog.

A few months ago, Christophe Dutang and I published an ebook on the same topic, in French, online on cran.r-project.org/doc/contrib/. That contribution was based on lecture notes we had. When John Kimmel asked me to publish an English version, I was honored, but I thought it would be some kind of fraud if I wrote a book on that topic by myself. I do know a bit of actuarial science, and a bit of R, but most of the advanced computations rely on packages written by others. Because I am extremely lazy, I have not tried (so far) to publish my own package. I frequently publish some lines of code on my blog, but nothing too serious.

So, for this book, I decided to ask those who actually did publish a package used in actuarial computations (or who had worked on package comparisons, for instance) to write a chapter. I am usually not a big fan of books with twenty contributors, because there is no coherence. So here, my task was to link all those chapters together, and to make sure that the notations were consistent, etc. Over 700 pages, that was difficult. And I asked all of them not only to illustrate actuarial concepts with some R code, but also to give – if possible – some self-written functions, to understand the algorithms, alongside the built-in functions. The goal was to explain the core of each algorithm. Some codes might not be efficient, but they help to understand how some actuarial quantities could be computed.

The first chapter is an Introduction (to the R language) I wrote with Rob Kaas (everyone in the actuarial community knows Rob! not only as the Editor of Insurance: Mathematics & Economics, but also as a prolific author, including of the popular textbook Modern Actuarial Risk Theory – Using R). The aim is to help those who might use another language for actuarial computations to understand the basics of the grammar, to read and write in R. I will probably publish a longer post to explain the structure of that chapter, and to show some code.

  • Methodology 

The first section is a very general methodology section. It starts with Standard Statistical Inference by Christophe Dutang (Christophe is extremely active in the R community, as the maintainer of the Distributions task view page, for instance). Then, Ben Escoto (Ben works in the insurance industry, and launched the actuarial vignettes in R a few years ago) and myself wrote a chapter which can be seen as an introduction to the Bayesian Philosophy for actuaries (I will give a talk on that topic at the R in Insurance conference this summer, so additional material will be online soon). With Stéphane Tufféry, we wrote the chapter on Statistical Learning (Stéphane published Data Mining and Statistics for Decision Making a few years ago). Then, I wanted a chapter dedicated to Spatial Analysis. I asked Renato Assunção, Marcelo Azevedo Costa, Marcos Oliveira Prates, and Luís Gustavo Silva e Silva to write that chapter (I met them a few years ago while visiting Renato in Belo Horizonte, when I started to work on spatial aspects of actuarial science). And finally, Eric Gilleland and Mathieu Ribatet wrote the chapter on Reinsurance and Extremal Events (both of them work on climate and extreme values, and they published a very interesting software review for extreme value analysis a few years ago).

  • Life Insurance

For the section on life insurance, I asked Giorgio Spedicato to write the chapter on Life Contingencies (Giorgio is the author of the lifecontingencies package). Then Heather Booth, Rob J. Hyndman, and Leonie Tickle agreed to write the chapter on Prospective Life Tables (here we have a great match, with a demographer, an actuary, and… Rob. Everyone who has studied time series knows Rob. He is the author of the amazing forecast package, as well as the demography package, among many others. And he has a great blog too). To go further, Julien Tomas and Frédéric Planchet wrote the chapter on Prospective Mortality Tables and Portfolio Experience (both of them published the ELT – experience life tables – package a few months ago). And finally, there is a chapter on Survival Analysis by Frédéric Planchet and Pierre-E. Thérond (they published a book – in French – on survival analysis for actuarial science, with examples in R).

  • Finance

For the section on financial computations, Yohan Chalabi and Diethelm Würtz wrote two chapters, one on Stock Prices and Time Series and one on Portfolio Allocation (both of them have worked on the Rmetrics project, with the timeSeries, fArma, fGarch and fPortfolio packages). And Sergio S. Guirreri wrote a chapter on Yield Curves and Interest Rates Models (Sergio is the author of the YieldCurve package).

  • Non-Life Insurance

Last, but not least, there is a section on non-life insurance. Jean-Philippe Boucher (who published several articles on count models) and myself wrote the chapter on General Insurance Pricing. Then, I asked Katrien Antonio, Peng Shi, and Frank van Berkum to go further, with a chapter on Longitudinal Data and Experience Rating (I have known Katrien since my PhD, and she was already working on that topic at the time… she has published great surveys on it). And finally, Claims Reserving and IBNR is a chapter I wanted to write, because it’s a topic I love, but I asked Markus Gesmann to write it (Markus is known not only for his googleVis package, but also for the ChainLadder package – not to mention his awesome blog).

I will try to post some additional material on this blog, with R code (of course), graphs, and slides. And probably some pdfs with answers to the exercises. And all the datasets will be available in a CASdatasets package (online soon).