
From Uncertainty to Precision: Enhancing Binary Classifier Performance through Calibration

Our paper From Uncertainty to Precision: Enhancing Binary Classifier Performance through Calibration, written with Agathe Fernandes Machado, Emmanuel Flachaire, Ewen Gallic and François Hu, is now online on arXiv.

The assessment of binary classifier performance traditionally centers on discriminative ability, using metrics such as accuracy. However, these metrics often disregard the model's inherent uncertainty, especially in sensitive decision-making domains such as finance or healthcare. Given that model-predicted scores are commonly interpreted as event probabilities, calibration is crucial for accurate interpretation. In our study, we analyze the sensitivity of various calibration measures to score distortions and introduce a refined metric, the Local Calibration Score. Comparing recalibration methods, we advocate for local regressions, emphasizing their dual role as effective recalibration tools and facilitators of smoother visualizations. We apply these findings in a real-world scenario, using a Random Forest classifier and regressor to predict credit default while simultaneously measuring calibration during performance optimization.
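To give a rough idea of what recalibration with a local regression looks like, here is a minimal sketch in R, on simulated data (this is not the paper's code; the quadratic distortion of the scores is purely an assumption for illustration),

> # minimal sketch: recalibrating scores with a local regression (loess)
> set.seed(1)
> n = 1000
> score = runif(n)              # predicted scores, assumed miscalibrated
> y = rbinom(n, 1, score^2)     # true probability is score^2 (toy assumption)
> reg = loess(y ~ score, degree = 1, span = .5)
> recalibrated = predict(reg, newdata = data.frame(score = score))
> plot(score, recalibrated)     # recalibration map
> abline(a = 0, b = 1, lty = 2) # perfect-calibration diagonal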

Modeling Joint Lives within Families

With Olivier Cabrignac and Ewen Gallic, we recently uploaded a research paper, entitled “Modeling Joint Lives within Families”.

Family history is usually seen as a significant factor that insurance companies look at when someone applies for a life insurance policy. Where it is used, a family history of cardiovascular diseases, death by cancer, or high blood pressure and diabetes could result in higher premiums, or no coverage at all. In this article, we use massive (historical) data to study dependencies between life lengths within families. While joint life contracts (between a husband and a wife) have long been studied in the actuarial literature, little is known about dependencies between children and parents. We illustrate those dependencies using 19th-century family trees in France, and quantify their implications for annuity computations. For parents and children, we observe a modest but significant positive association between life lengths. It yields different estimates for remaining life expectancies, present values of annuities, or whole life insurance guarantees, given information about the parents (such as the number of parents alive). A similar but weaker pattern is observed when using information on grandparents.
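As a toy illustration of why such dependence matters for pricing (with made-up numbers, not the paper's estimates), the present value of a life annuity is quite sensitive to a small shift in survival probabilities, such as the one induced by conditioning on the number of parents alive,

> # toy illustration: life annuity a_x = sum_k v^k * kpx, with assumed
> # survival probabilities, unconditional vs. conditional on parents alive
> v = 1 / 1.03                 # discount factor, 3% interest (assumption)
> px_base = rep(.95, 40)       # flat one-year survival probabilities (assumption)
> px_cond = rep(.96, 40)       # slightly higher survival if parents are alive
> annuity = function(px) sum(cumprod(px) * v^seq_along(px))
> annuity(px_base)
> annuity(px_cond)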

The paper is online at https://arxiv.org/abs/2006.08446.

Insurance data science: Pictures

At the Summer School of the Swiss Association of Actuaries, in Lausanne, following Jean-Philippe Boucher's (UQAM) session on telematics data, I will start talking about pictures this Wednesday. Slides are available online.

Ewen Gallic (AMSE) will present a tutorial on satellite pictures, and a simple classification problem related to Alzheimer's detection.

We will try to identify what is in the following pictures, starting with the car

(we will see that the car is indeed identified)

We will also discuss previous pictures from the summer school.

Insurance data science: use and value of unusual data #1

Next week, I will be at the Summer School of the Swiss Association of Actuaries, in Lausanne, with Jean-Philippe Boucher (UQAM) and Ewen Gallic (AMSE).

I will give an introductory talk on Monday morning, and the slides are now available.

There will be some hands-on applications, in R. I will share some code in the slides.

Historical demography using participatory genealogical data

For several months now, Ewen Gallic and I have been working on genealogical data. The first paper, Étude de la démographie française du XIXe siècle à partir de données collaboratives de généalogie, is finished. It is a methodological note describing how we reconstructed the family trees of 2.45 million people (701 million records that had to be cleaned up), corresponding to the descendants, over three generations, of people born in France between 1800 and 1804.

To illustrate the value of these rich data, we started by studying mortality over the 19th century

and noted that, while we do underestimate the mortality of those under 20 and of the very old, overall our data are consistent with what we expected (perhaps less so for fertility). We also started to study migration, from generation to generation

(here, the proportion of descendants born in the same département as their ancestors). There are many other results to read in the paper, online on hal, and many more on the github page created by Ewen.

Shapefiles from Isodensity Curves

Recently, with @3wen, we wanted to play with isodensity curves. The problem is that it is difficult to obtain, numerically, the equation of a contour (even if we can easily plot it). Consider the following surface (just for fun, in order to illustrate the idea)

> f = function(x, y) x*y + (1-x)*(1-y)
> u = seq(0, 1, length=21)
> v = seq(0, 1, length=11)
> M = outer(u, v, f)
> persp(u, v, M, theta=30, phi=10, box=TRUE,
+ shade=TRUE, ticktype="detailed", xlab="",
+ ylab="", zlab="", col="yellow")

For instance, assume that we want to locate areas where the density exceeds 0.7 (here, in the lower left corner, SW, and the upper right corner, NE)

> image(u, v, M)
> contour(u, v, M, add=TRUE, levels=.7)
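To actually recover the coordinates of those curves, and not just draw them, one option is contourLines, which returns the (x, y) points of each piece of the level curve; those points can then be turned into polygons for a shapefile,

> L = contourLines(u, v, M, levels=.7)
> length(L)       # two pieces here, one per corner
> head(L[[1]]$x)  # x-coordinates of the first piece
> head(L[[1]]$y)  # y-coordinates of the first piece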


Kernel Density Estimation with Ripley’s Circumferential Correction

The revised version of the paper Kernel Density Estimation with Ripley's Circumferential Correction is now online, on hal.archives-ouvertes.fr.

In this paper, we investigate (and extend) Ripley's circumferential method to correct the bias of density estimation near the edges (or frontiers) of regions. The idea of the method was theoretical, and difficult to implement. We provide a simple technique, based on properties of Gaussian kernels, to efficiently compute the weights that correct border bias on the frontiers of the region of interest, with an automatic selection of an optimal radius for the method. We illustrate the use of that technique to visualize hot spots of car accidents and campsite locations, as well as locations of bike thefts.
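The flavor of such a correction can be seen in one dimension: near the boundary of the support, part of the mass of the kernel leaks outside, so the raw estimate is biased downwards, and dividing by the mass that remains inside restores it. Below is a minimal sketch of that idea on [0, 1] (a standard boundary correction, not the paper's method, with an arbitrary bandwidth),

> # minimal sketch: border-bias correction of a Gaussian kernel density on [0,1]
> set.seed(1)
> x = runif(500)                     # data, uniform on [0,1] (assumption)
> h = .05                            # bandwidth (arbitrary)
> t = seq(0, 1, length=201)
> f_raw = sapply(t, function(s) mean(dnorm((s - x)/h)) / h)
> w = pnorm((1 - t)/h) - pnorm(-t/h) # kernel mass remaining inside [0,1]
> f_cor = f_raw / w
> plot(t, f_raw, type="l", lty=2)    # biased near 0 and 1
> lines(t, f_cor)                    # corrected estimate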

There are new applications, and new graphs, too.

Most of the code can be found on https://github.com/ripleyCorr/Kernel_density_ripley (as well as the datasets).

Date of death, birthday and Elvis Presley

Ten days ago, a study published on http://www.annalsofepidemiology.org/ mentioned that “death has a preference for birthdays” (as claimed in the title). The conclusion of the paper is that, in general, birthdays do not evoke a postponement mechanism, but appear to end up in a lethal way more frequently than expected (an “anniversary reaction”). This is not new, though, and several previous articles have mentioned that point, e.g. Angermeyer et al. (1987).

I found the idea interesting, since in demography there is a large literature trying to extrapolate death rates from discrete to continuous time. Extrapolations are usually extremely smooth, and none of them integrates this aspect of mortality, a spike precisely on the birthday. The problem is that it is rather difficult to say anything, since datasets with individual observations are rarely available online.

But yesterday, @coulmont sent me a tweet mentioning a website. I do not know if this is legal (even if some explanations are given), but I will mention it, courtesy of http://ssdmf.info/. It is the so-called Social Security Death Master File, containing individual information about deaths in the US, as well as geographic information (as described on http://www.ssa.gov/), for people who had a Social Security number.

With R, it is possible to work on those files (even though they are huge, with tens of millions of observations). For instance, we can check who is inside.

> elvis=scan("ssdm2",skip=22371720,n=1,what="character",sep=",")
> elvis
[1] " 409522002PRESLEY         ELVIS     0800197701081935  "

If you believe that Elvis is dead, you might agree that this database is rather accurate (or at least, not too bad). Further, we can see here how to read the record: Elvis was born on January 8, 1935 (the last 8 digits), and died in August 1977 (the 8 digits before; the actual date was August 16, 1977, but the day of death is recorded as 00 here). Obviously, there are some problems in the dataset (we do not even have the day of the death of Elvis), so we remove all the observations that do not give us proper dates. Then, the idea is to assume that everyone died in 2000 (or any year, since the point is to focus only on days and months), and to count the number of days between the day of death and the birthday in 2000 (which can fall either before or after the death) and the one in 1999 (which is always before), so that we can derive the number of days elapsed since the previous birthday,

# base contains the raw records of the Death Master File, one per line,
# e.g. (assumption on the file) base = readLines("ssdm2")
dates = substr(base, 66, 81)
death = as.Date(substr(dates, 1, 8), "%m%d%Y")
birth = as.Date(substr(dates, 9, 16), "%m%d%Y")
# proportion of records without proper dates
mean(is.na(death) | is.na(birth))
mdeath = substr(dates, 1, 2)
ddeath = substr(dates, 3, 4)
mbirth = substr(dates, 9, 10)
dbirth = substr(dates, 11, 12)
# keep only records where the days of death and birth are known
indice = which(ddeath != "00" & dbirth != "00")
# birthday in 2000 (before or after the death) and in 1999 (always before)
birth1 = as.Date(paste(mbirth[indice], dbirth[indice], "2000", sep=""), "%m%d%Y")
birth2 = as.Date(paste(mbirth[indice], dbirth[indice], "1999", sep=""), "%m%d%Y")
# all deaths are placed in 2000
death = as.Date(paste(mdeath[indice], ddeath[indice], "2000", sep=""), "%m%d%Y")
diffday = cbind(as.numeric(death - birth1), as.numeric(death - birth2))
# number of days elapsed since the previous birthday
DIFF = apply(diffday, 1, function(x) min(x[x >= 0]))

What we have here is the number of days following the previous birthday. If we look at the distribution of that number of days, we obtain

> counts = table(DIFF)
> plot(as.numeric(names(counts)), as.numeric(counts))
> counts["0"] / mean(counts[100:200])
       0
1.121261

Thus, the death excess on the day of birth was around 12%, which is rather close to the one obtained from the Swiss mortality statistics 1969–2008 (in Ajdacic-Gross et al. (2012)). Note that, here, we just played with a small subset of the entire dataset.

That database is probably extremely interesting, except that it suffers from a huge selection bias: only dead people are in it. So it might be useless if we wish to study the life expectancy of people named Bill versus people named Georges (that was something I wanted to investigate initially). But we'll see what else we can do with it (since Ewen has been able to write some code to go through that huge dataset).