Tag Archives: PhD

Contribution of machine learning in modeling rare values and imbalanced data

This morning (Montréal time), Samuel Stocksieker defended his PhD thesis entitled “contribution of machine learning in modeling rare values and imbalanced data“. Cécile Capponi, Marianne Clausel, Julie Josse, Frédéric Planchet, Anne Sabourin, Christian-Yann Robert and Stéphane Loisel were in the jury.

The work is structured around two major axes: imbalanced features and imbalanced regression. The first axis addresses the issue of feature imbalance, that is, when the imbalance concerns the attributes and not the variable to be explained. The first solution involves adjusting the distribution of a continuous covariate relative to a given target distribution, combining weighted resampling and synthetic data generators. This strategy notably makes it possible to deal with selection bias, when the distribution of the covariate in the training sample is significantly different from that of the population. A second solution is proposed in the context of multi-supervised learning, particularly with autoencoders. It relies on a new metric aimed at balancing the influence of variables during learning, and is applicable not only to supervised and unsupervised models, but also to generative models such as variational autoencoders.

The second axis deals with the issue of regression on imbalanced data. Various preprocessing solutions, including synthetic data generation, are proposed. First, we propose to explore the initial data space, introducing new generators and methodologies to address the specific case of regression. We then propose to embed the data in a latent space, in order to provide a framework more conducive to synthetic data generation.
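As an illustration of that first strategy (a minimal sketch, not the generators from the thesis), a covariate that is over-represented around its mode can be resampled towards a flatter target distribution, with weights inversely proportional to a kernel density estimate,

> set.seed(1)
> x = rnorm(1000) # training covariate, concentrated around 0
> d = density(x) # kernel estimate of the training density
> w = 1/approx(d$x, d$y, xout=x)$y # inverse-density weights
> idx = sample(1:1000, size=1000, replace=TRUE, prob=w)
> x_balanced = x[idx] # resampled covariate, flatter over the support

Synthetic generators can then be combined with such weights, instead of simply duplicating observations.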

Fairness and discrimination, PhD Course, #7 Sensitive attributes and proxies

In our previous post, we discussed “group fairness“. I might have gone a bit fast, so I decided to add some material about sensitive attributes, and proxies.

Sensitive attributes?

Almost everywhere, we can find a list of variables that are considered, by law, as sensitive, since using them could lead to discrimination. As mentioned earlier, sensitive variables might change over time, and across regions…

Another issue with black boxes is that it might be hard to assess whether they rely on sensitive attributes. In order to extract information from pictures, to classify them or to detect objects, an algorithm might use information that could be considered sensitive. First, recall the popular wolf-husky classifier, which actually detects snow in the background (since wolves were pictured on snow in the training sample).

This can also be the case for health issues, where classifiers can be influenced by the color of the skin (or possibly some other unexpected information).

Racism

The first sensitive attribute is probably race, which has been discussed in insurance for decades.

One should keep in mind that race is social information and, most of the time, it is based on self-identification.

This leads to popular maps in the U.S.

Racism is usually related to “colourism” (discrimination based on skin tone).

Is it relevant in the context of insurance, and risk?

It has been observed that African Americans, in the U.S., were usually charged higher insurance premiums.

Keep in mind that discrimination has nothing to do with intention, as mentioned previously. An insurance pricing scheme can be racist without any intention to be so. An important issue when quantifying that problem is actually being able to observe that variable.

Sexism

Sexism is another popular example of discrimination, related to sex, or gender.

Actuaries have been using gender-related life tables for more than 300 years. And indeed, it seems that women live longer than men.

Ageism

Age is another possible sensitive attribute, but it is more complicated. First, it is not a “club”, and second, it is (somehow) clearly related to risk.

In datasets, there can also be selection bias related to age. For instance, during the COVID pandemic, triage was based on the age of patients. Treatments and tests can be related to the age of patients, so this bias will probably have an impact on observed risks.

Genetics

Another important sensitive variable is related to “genetic information”.

Such information is usually classified as sensitive everywhere.

To conclude, I wanted to mention that several important variables considered as sensitive do not have much to do with genetics, but more with a social construction.

Finally, let us discuss proxies that can be related to those sensitive variables.

Names and language

The first one was discussed in the introduction: names contain information about race and ethnic origin.

Text and discussion can also reveal sensitive information.

Pictures

Pictures can also provide information. That was discussed 150 years ago, when researchers tried to identify criminals using solely pictures.

Some insurers have been trying, at some point, to detect diseases from facial pictures. And it is possible to reveal information from pictures, possibly the age, and the gender.

One can also use satellite pictures, or pictures from Google Street View, to infer information such as the wealth of the neighborhood. And possibly sensitive information, such as the presence of an access ramp for disabled people.

Credit Scoring

Credit scoring is also a variable used by insurers, and it can be related to variables considered as sensitive.

Clearly, a bad credit score will have a big impact not only on mortgages and loans,

but also on insurance rates! As we explained here, it costs a lot to be poor.

Networks

Finally, insurers can use information related to friends, or family, to assess the risk. And network data capture a lot of sensitive information.

We will talk a little bit about networks, to explain why using your friends’ risks to assess your own risk might not be a great idea…

It is an extension of the friendship paradox.
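To illustrate (a small simulation, on a hypothetical Erdős–Rényi network), the average number of friends of a randomly chosen friend exceeds the average number of friends of a randomly chosen individual,

> library(igraph)
> set.seed(1)
> G = sample_gnp(1000, p=.01) # simulated network
> d = degree(G) # number of friends of each node
> mean(d) # average number of friends
> friends = unlist(lapply(1:1000, function(i) d[neighbors(G,i)]))
> mean(friends) # average number of friends of one's friends

The second average is larger, since popular individuals are over-represented among friends, which is precisely why conditioning on friends’ risks induces a size-biased selection.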

Proxies

Finally, we will conclude by showing that removing a sensitive attribute from a training dataset will not mitigate discrimination.
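To see why (a minimal, hypothetical simulation), suppose a sensitive attribute s drives a proxy x, and the outcome y depends on s only through x. A model fitted without s still produces scores that differ, on average, between the two groups,

> set.seed(123)
> n = 10000
> s = rbinom(n, 1, .5) # sensitive attribute, excluded from the model
> x = rnorm(n, mean=2*s) # proxy, correlated with s
> y = rbinom(n, 1, pnorm(x-1)) # outcome, driven by the proxy only
> model = glm(y ~ x, family=binomial) # s is not used
> tapply(predict(model, type="response"), s, mean) # group averages still differ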

Amsterdam, PhD defense

Last week, I was involved in the PhD defense of Julien Tomas, with Rob Kaas (promotor), Frédéric Planchet (co-promotor), Katrien Antonio, Marc Goovaerts, Ann De Schepper and Michel Vellekoop. The PhD thesis – entitled quantifying biometric life insurance risks with non-parametric smoothing methods – can be downloaded on http://dare.uva.nl/… and on http://tel.archives-ouvertes.fr/.

The R codes will be available soon on my blog (and on Julien’s new website http://www.likelihood.me/).

PhD defense on copulas

This Wednesday I will be at Université Paris 1 Sorbonne as a member of the jury of the PhD thesis of Pierre-André Maugis, on conditional correlations and vine copulas.

Vine copulas were born in 2002 with the paper of Tim Bedford and Roger M. Cooke, Vines – a new graphical model for dependent random variables. The idea is to use the following decomposition for a multivariate density (here in the trivariate case)

$$f(x_1,x_2,x_3)=f_1(x_1)\cdot f_{2|1}(x_2|x_1)\cdot f_{3|1,2}(x_3|x_1,x_2)$$

(from Bayes formula, with synthetic notations). Then, using the relationship between a bivariate density and its copula (density)

$$f_{1,2}(x_1,x_2)=c_{1,2}\big(F_1(x_1),F_2(x_2)\big)\cdot f_1(x_1)\cdot f_2(x_2)$$

thus

$$f_{2|1}(x_2|x_1)=c_{1,2}\big(F_1(x_1),F_2(x_2)\big)\cdot f_2(x_2)$$

Using again Bayes formula,

$$f_{3|1,2}(x_3|x_1,x_2)=\frac{f_{2,3|1}(x_2,x_3|x_1)}{f_{2|1}(x_2|x_1)}$$

and we can write

$$f_{2,3|1}(x_2,x_3|x_1)=c_{2,3|1}\big(F_{2|1}(x_2|x_1),F_{3|1}(x_3|x_1)\big)\cdot f_{2|1}(x_2|x_1)\cdot f_{3|1}(x_3|x_1)$$

Since $f_{2|1}(x_2|x_1)=c_{1,2}(F_1(x_1),F_2(x_2))\cdot f_2(x_2)$ and $f_{3|1}(x_3|x_1)=c_{1,3}(F_1(x_1),F_3(x_3))\cdot f_3(x_3)$, the previous expression becomes

$$f(x_1,x_2,x_3)=c_{2,3|1}\cdot c_{1,2}\cdot c_{1,3}\cdot f_1(x_1)\,f_2(x_2)\,f_3(x_3)$$

or, to stress the most important part (as I see it),

$$f(x_1,x_2,x_3)=\underbrace{c_{2,3|1}\big(F_{2|1}(x_2|x_1),F_{3|1}(x_3|x_1)\big)}_{\text{conditional copula}}\cdot c_{1,2}\cdot c_{1,3}\cdot f_1(x_1)\,f_2(x_2)\,f_3(x_3)$$

It is common then to assume that this conditional copula does not depend on the conditioning value (the so-called simplifying assumption). The more detailed expression of that joint trivariate density is

$$f(x_1,x_2,x_3)=c_{2,3|1}\big(F_{2|1}(x_2|x_1),F_{3|1}(x_3|x_1)\big)\cdot c_{1,2}\big(F_1(x_1),F_2(x_2)\big)\cdot c_{1,3}\big(F_1(x_1),F_3(x_3)\big)\cdot f_1(x_1)\,f_2(x_2)\,f_3(x_3)$$

The (parametric) inference algorithm defined in Cooke, Joe and Aas (2010) is sequential: first, fit the (unconditional) pair copulas $c_{1,2}$ and $c_{1,3}$ by maximum likelihood; then, transform the observations into pseudo-observations of $F_{2|1}(x_2|x_1)$ and $F_{3|1}(x_3|x_1)$; and finally, fit the conditional copula $c_{2,3|1}$ on those pseudo-observations.

The important assumption in vine copula models is that conditional copulas are constant. And this assumption might be valid in some cases. For instance, in the Gaussian case (the observations have a Gaussian joint distribution – or at least a Gaussian copula – and we fit a vine model with Gaussian bivariate copulas).

The code to fit a vine copula is the following,

> library(CDVine)
> library(mnormt)
> SIGMA=matrix(c(1,.6,.7,.6,1,.8,.7,.8,1),3,3)
> X=rmnorm(n=100000,varcov=SIGMA)
> CDVineSeqEst(dat=pnorm(X), # data must have uniform margins
+ family = c(1,1,1), # Gaussian pair copulas
+ type = 1, method = "mle")
$par
[1] 0.6001505 0.7023699 0.6698215
 
$par2
[1] 0 0 0
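Those estimates are consistent with the theory: in the Gaussian case, the conditional copula of $(X_2,X_3)$ given $X_1$ is the Gaussian copula with parameter the partial correlation

$$\rho_{23|1}=\frac{\rho_{23}-\rho_{12}\rho_{13}}{\sqrt{(1-\rho_{12}^2)(1-\rho_{13}^2)}}=\frac{0.8-0.6\times 0.7}{\sqrt{(1-0.6^2)(1-0.7^2)}}\approx 0.665$$

close to the third estimated parameter above (the first two match $\rho_{12}=0.6$ and $\rho_{13}=0.7$).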

Note that it is consistent with the following algorithm, where conditional copulas are fitted explicitly. In the following, for all values of the first component, we fit a Gaussian copula to the conditional remaining pair,

> library(copula)
> U=pnorm(X)
> U1U2=U[,1:2]
> U1U3=U[,c(1,3)]
> GaussCop = normalCopula(param=.5, dim = 2)
> fit12.mpl = fitCopula(GaussCop, U1U2, method="mpl")@estimate
> fit13.mpl = fitCopula(GaussCop, U1U3, method="mpl")@estimate
> fit12.mpl
[1] 0.5984932
> fit13.mpl
[1] 0.7005185
> C12=pCopula(U1U2, normalCopula(param=fit12.mpl, dim = 2))
> C13=pCopula(U1U3, normalCopula(param=fit13.mpl, dim = 2))
> U12=rank(C12)/(nrow(U)+1)
> U13=rank(C13)/(nrow(U)+1)
> fit23a=rep(NA,99)
> for(i in 4:96){
+ x=i/100 # conditioning value, keep observations with U1 close to x
+ U23=cbind(U12[abs(U[,1]-x)<.02],U13[abs(U[,1]-x)<.02])
+ V23=cbind(rank(U23[,1])/(nrow(U23)+1),
+ rank(U23[,2])/(nrow(U23)+1))
+ fit23a[i]=fitCopula(GaussCop, V23, method="mpl")@estimate
+ }
> plot((1:99)/100,fit23a,col="red")

It looks like assuming that the conditional copula is constant was a valid assumption here.

But note that if the true distribution is not Gaussian, then assuming that the conditional copula is constant is no longer valid (here, a sample from a trivariate Clayton copula was generated).
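To reproduce that experiment (a sketch, with an arbitrary parameter value), the sample can be generated with the copula package, before rerunning the code above from the definition of U1U2 and U1U3,

> library(copula)
> set.seed(1)
> U = rCopula(100000, claytonCopula(2, dim=3)) # trivariate Clayton sample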

Copulas and empirical processes

Tarek Zari defended his PhD thesis at the beginning of the month, presenting a “contribution to the study of the empirical copula process“, and his thesis is online here. I have also put a copy of the slides of the defense online. Historically, it seems that Frits Ruymgaart was the first to talk about the empirical copula process, in 1973 (his thesis is online here).
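Recall that, with standard notations, the empirical copula of a bivariate sample is based on the ranks $R_{i,1}$ and $R_{i,2}$,

$$\hat C_n(u_1,u_2)=\frac{1}{n}\sum_{i=1}^n \mathbf{1}\left(\frac{R_{i,1}}{n}\leq u_1,\frac{R_{i,2}}{n}\leq u_2\right)$$

and the empirical copula process is then $\mathbb{C}_n=\sqrt{n}\,(\hat C_n-C)$, whose weak convergence is precisely what this literature studies.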

Paul Deheuvels also introduced the notion of the empirical copula as early as 1979, under the name of “empirical dependence function“. Around the same time, Ludger Rüschendorf also proposed an asymptotic study of empirical copula processes (here, in 1976), as did Gäenssler and Stute in their Seminar on Empirical Processes, and Winfried Stute in the 1980s. A survey of the literature on multivariate empirical processes was published at that time, and is online. Since then, Jean-David Fermanian published a paper here on weak convergence, and Paul Deheuvels and Ludger Rüschendorf have published a great deal, in particular on the rate of convergence…