All posts by Arthur Charpentier

Arthur Charpentier, professor of Actuarial Science in Montréal. Former assistant professor at ENSAE Paristech, associate professor at Ecole Polytechnique, and assistant professor in Economics at Université de Rennes 1. Graduated from ENSAE, Master in Mathematical Economics (Paris Dauphine), PhD in Mathematics (KU Leuven), and Fellow of the French Institute of Actuaries.

Actuarial Science Workshop at the 2023 SSC Meeting in Ottawa

This Sunday, I will be presenting at the Actuarial Science Workshop of the 2023 SSC (Statistical Society of Canada) Annual Meeting in Ottawa. It is organized by Jun Cai (Waterloo), Ben Feng (Waterloo) and Emiliano Valdez (University of Connecticut); Xiaofei Shi (UofT) will also be there. My talk will be on optimal transport, in the context of fairness and discrimination, in insurance. Slides are now available.

Talk at the seminar at Waterloo (Ontario, Canada)

This week, I will be working in Waterloo, and give a talk on Causal Inference and Counterfactuals with Optimal Transport, With Applications in Fairness and Discrimination (based on our paper on Optimal Transport for Counterfactual Estimation, with Emmanuel Flachaire and Ewen Gallic, as well as a more recent one on Parametric Fair Projection with Statistical Guarantees, with François Hu and Philipp Ratz).

AISTATS 2023

This week, Sam will be in Valencia (Spain) to present our work on Data Augmentation for Imbalanced Regression.

In this work, we consider the problem of imbalanced data in a regression framework when the imbalanced phenomenon concerns continuous or discrete covariates. Such a situation can lead to biases in the estimates. In this case, we propose a data augmentation algorithm that combines a weighted resampling (WR) and a data augmentation (DA) procedure. In a first step, the DA procedure permits exploring a wider support than the initial one. In a second step, the WR method drives the exogenous distribution to a target one. We discuss the choice of the DA procedure through a numerical study that illustrates the advantages of this approach. Finally, an actuarial application is studied.
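To fix ideas, here is a minimal toy sketch of that two-step logic in R. It is only an illustration, not the algorithm of the paper: the jittering-based augmentation, the kernel density weights and the uniform target distribution below are arbitrary choices made for the example,

# toy sketch (not the algorithm from the paper): augment a regression sample whose
# covariate x is imbalanced, then reweight it towards a chosen target distribution of x
set.seed(123)
n = 1000
x = rbeta(n, 5, 1)                          # imbalanced covariate, concentrated near 1
y = sin(2*pi*x) + rnorm(n, sd=.1)

# step 1 (DA): jitter the observations, to explore a wider support than the initial one
m = 5                                       # number of synthetic copies per observation
x_aug = rep(x, m) + rnorm(n*m, sd=.05)
y_aug = rep(y, m) + rnorm(n*m, sd=.05)

# step 2 (WR): resample the augmented data with importance weights that drive
# the covariate distribution towards a target, here the uniform on (0,1)
f_hat = approxfun(density(x_aug), rule=2)   # estimated density of the augmented covariate
w = dunif(x_aug) / f_hat(x_aug)             # weights = target density / current density
idx = sample(seq_along(x_aug), size=n, replace=TRUE, prob=w)
train = data.frame(x = x_aug[idx], y = y_aug[idx])
hist(train$x)                               # the covariate should now be roughly uniform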

Alternative fixed-effects panel model using weighted asymmetric least squares regression

Our paper, Alternative fixed-effects panel model using weighted asymmetric least squares regression, with Amadou and Karim, is now published by Statistical Methods & Applications.

A fixed-effects model estimates the regressor effects on the mean of the response, which is inadequate to account for heteroscedasticity. In this paper, we adapt the asymmetric least squares (expectile) regression to the fixed-effects panel model and propose a new model: expectile regression with fixed effects (ERFE). The ERFE model applies the within transformation strategy to solve the incidental parameter problem and estimates the regressor effects on the expectiles of the response distribution. The ERFE model captures the data heteroscedasticity and eliminates any bias resulting from the correlation between the regressors and the omitted factors. We derive the asymptotic properties of the ERFE estimators and suggest robust estimators of its covariance matrix. Our simulations show that the ERFE estimator is unbiased and outperforms its competitors. Our real data analysis shows its ability to capture data heteroscedasticity.

More online… doi:10.1007/s10260-023-00692-3
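To illustrate the idea (within transformation first, then asymmetric least squares), here is a naive R sketch. It is not the estimator, nor the standard errors, studied in the paper; the iterative reweighting and the simulated panel below are illustrative assumptions,

# illustrative sketch of the idea behind the ERFE model: asymmetric least squares
# (expectile regression) on within-transformed panel data
expectile_within = function(y, x, id, tau = .5, tol = 1e-8){
  # within transformation: remove the individual means (the fixed effects)
  y_w = y - ave(y, id)
  x_w = apply(x, 2, function(col) col - ave(col, id))
  beta = coef(lm(y_w ~ x_w - 1))            # starting value: within (OLS) estimator
  repeat{
    r = drop(y_w - x_w %*% beta)
    w = ifelse(r >= 0, tau, 1 - tau)        # asymmetric weights defining the expectile
    beta_new = coef(lm(y_w ~ x_w - 1, weights = w))
    if(max(abs(beta_new - beta)) < tol) break
    beta = beta_new
  }
  beta_new
}

# small simulated heteroscedastic panel: 100 individuals observed 5 times each
set.seed(123)
id = rep(1:100, each = 5)
x  = cbind(x1 = rnorm(500))
y  = rep(rnorm(100), each = 5) + 2*x[,1] + (1 + .5*abs(x[,1]))*rnorm(500)
expectile_within(y, x, id, tau = .9)        # effect of x1 on the 90% expectile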

Model selection, AIC and Tweedie regression

Just some simple codes to illustrate some points we will discuss this week, for the last course on GLMs, before the final exam. We have mentioned that the Gamma distribution belongs to the exponential family, so we can run a regression and compute the associated AIC,

> set.seed(123)
> test.data = rgamma(n=2000, scale=1, shape=1)
> m1 = glm( test.data~1, family=Gamma(link=log))
> AIC(m1)
[1] 3997.332

The Gamma distribution is also a special case of the Tweedie distribution, with power 2

> library(statmod)
> library(tweedie)
> m2 = glm( test.data~1, family=tweedie(link.power=0, var.power=2) )
> AIC(m2)
[1] NA

Unfortunately, we cannot compute the AIC directly, and we need a trick, here the AICtweedie function from the tweedie package.

> AICtweedie(m2)
[1] 3997.332

Of course, we can do the same with the Poisson distribution, which also belongs to the exponential family

> test.data = rpois(n=2000, lambda=1)
> m3 = glm( test.data~1, family=poisson(link=log))
> m4 = glm( test.data~1, family=tweedie(link.power=0, var.power=1) )
> AIC(m3)
[1] 5124.61

Here, we have a problem with the AICtweedie function

> AICtweedie(m4)
[1] Inf

because we need to specify the dispersion parameter (which is equal to 1 for the Poisson distribution)

> AICtweedie(m4, dispersion=1)
[1] 5124.61

We can now check: we generate a Gamma sample, and fit various Tweedie distributions, simply changing the power of the variance function

> set.seed(123)
> test.data = rgamma(n=2000, scale=1, shape=1)
> glmtw = function(t){
+ m1 = glm( test.data~1, family=tweedie(link.power=0, var.power=t) )
+ d = NULL
+ if(t == 1) d = 1
+ AICtweedie(m1, dispersion = d)
+ 
+ }
> vt = seq(1,2.7,length=100)
> vg = Vectorize(glmtw)(vt)
> plot(vt,vg,log="y",type="l")

The minimum of the AIC is close to 2, corresponding to the Gamma distribution.
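Instead of reading the minimum off the plot, its location on the grid can also be extracted directly,

> vt[which.min(vg)]

which should indeed be close to 2 here.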

We can also try with a Poisson

> set.seed(123)
> test.data = rpois(n=2000, lambda=1)
> glmtw = function(t){
+ m1 = glm( test.data~1, family=tweedie(link.power=0, var.power=t) )
+ d = NULL
+ if(t == 1) d = 1
+ AICtweedie(m1, dispersion = d)
+ 
+ }
> vt = seq(1,2,length=100)
> vg = Vectorize(glmtw)(vt)
> plot(vt,vg,log="y",type="l")

The minimum is now close to 1, corresponding to the Poisson distribution (the variance is equal to the average).

Let us now try some compound Poisson distribution,

> rcpd=function(n,lambda,shape,scale){
+ N=rpois(n,lambda)                          # number of jumps, for each of the n observations
+ X=rgamma(sum(N),shape=shape, scale=scale)  # Gamma-distributed jump sizes
+ I=as.factor(rep(1:n,N))                    # index of the observation each jump belongs to
+ S=tapply(X,I,sum)                          # sum of the jumps, per observation
+ V=as.numeric(S[as.character(1:n)])
+ V[is.na(V)]=0                              # observations with no jump get 0
+ return(V)}

Let us generate some compound Poisson random variables, with Poisson distribution with average 1, and with Gamma jumps, with average and variance 1,

> set.seed(123)
> test.data = rcpd(n=2000, 1,1,1)
> glmtw = function(t){
+ m1 = glm( test.data~1, family=tweedie(link.power=0, var.power=t) )
+ d = NULL
+ if(t == 1) d = 1
+ AICtweedie(m1, dispersion = d)
+ }
> vt = seq(1.1,1.9,length=100)
> vg = Vectorize(glmtw)(vt)
> plot(vt,vg,log="y",type="l")

The optimal value for the power function is here close to 1.5, based on the AIC (the relationships between the Tweedie parameters and the compound Poisson ones are given in the slides).
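As a quick sanity check, recall that a Tweedie distribution with power p strictly between 1 and 2 corresponds to a compound Poisson sum with Gamma jumps of shape α = (2−p)/(p−1), or equivalently p = (α+2)/(α+1). For the three jump shapes used here and in the two variations below,

> p_power = function(shape) (shape+2)/(shape+1)
> p_power(c(1, 3, 1/3))
[1] 1.50 1.25 1.75

so a shape of 1 gives a theoretical power of 1.5, a shape of 3 pushes the power towards the Poisson side (1.25), and a shape of 1/3 pushes it towards the Gamma side (1.75).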

We can now play a little bit with the variance of the jumps: they still have average 1, but they now have a smaller variance,

> set.seed(123)
> test.data = rcpd(n=2000, 1,3,1/3)
> vt = seq(1.05,1.95,length=100)
> vg = Vectorize(glmtw)(vt)
> plot(vt,vg,log="y",type="l")

The optimal power for the Tweedie is closer to one, closer to the Poisson case,

while if we increase the variance of the jumps

> set.seed(123)
> test.data = rcpd(n=2000, 1,1/3,3)
> vt = seq(1.05,1.95,length=100)
> vg = Vectorize(glmtw)(vt)
> plot(vt,vg,log="y",type="l")

the optimal power is higher, closer to the Gamma distribution.

Walking on eggshells with Chat-GPT

With the release of GPT-4, and as a follow-up to my article La société du bullshit, some news about GPT. While I was talking about “bullshit”, the notion of “hallucinations” has been resurfacing lately, properly defined in Hallucinations in Neural Machine Translation, already five years ago. A recent article discussed it again this week. To illustrate, inspired by discussions with Louis Abraham during my visit to Paris, I tried to get some cooking advice

I tried other kinds of eggs, rabbit eggs

or whale eggs,

Louis pointed out to me that GPT-3 could easily be misled, unlike ChatGPT… so I gave it a try,

whether with rabbit eggs or beluga eggs, ChatGPT gives strange advice

And it does not stop there

I cannot help but share this little conclusion

At the time, I wondered whether it had gotten caught up in my absurd game…

I even tried pig eggs

I finally asked it the question outright,

and that was it, it had stepped out of my game, impossible to play along any further…

I also had fun asking questions about its knowledge of family relationships. And it is quite easy to trap GPT-3

Impossible to get an “I don't know”. Here, it picks the only female first name that was mentioned… In short, there is no representation of what a family can be, which will be an essential step for the tool to work properly.

Optimal Transport for Counterfactual Estimation: A Method for Causal Inference

For those who wish to reproduce the techniques proposed in our paper, Optimal Transport for Counterfactual Estimation: A Method for Causal Inference, Ewen Gallic has put online some nice pages, with the application mentioned in the paper (both univariate and bivariate, including confidence intervals obtained with the bootstrap), as well as simpler examples, which I use in the slides to present the method:

http://egallic.fr/Recherche/Transport_Counterfactual/

I will present this work at the Bachelier Seminar, in Paris, at the end of the week. Slides are online here.

Many problems ask a question that can be formulated as a causal question: “what would have happened if…?” For example, “would the person have had surgery if he or she had been Black?” To address this kind of question, calculating an average treatment effect (ATE) is often uninformative, because one would like to know how much impact a variable (such as skin color) has on a specific individual, characterized by certain covariates. Trying to calculate a conditional ATE (CATE) seems more appropriate. In causal inference, the propensity score approach assumes that the treatment is influenced by x, a collection of covariates. Here, we will have the dual view: doing an intervention, or changing the treatment (even just hypothetically, in a thought experiment, for example by asking what would have happened if a person had been Black) can have an impact on the values of x. We will see here that optimal transport allows us to change certain characteristics that are influenced by the variable we are trying to quantify the effect of. We propose here a mutatis mutandis version of the CATE, which will be done simply in dimension one by saying that the CATE must be computed relative to a level of probability, associated with the proportion of x (a single covariate) in the control population, and by looking for the equivalent quantile in the test population. In higher dimension, it will be necessary to go through transport, and an application will be proposed on the impact of some variables on the probability of having an unnatural birth (the fact that the mother smokes, or that the mother is Black).
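To give a sense of the one-dimensional case (the “equivalent quantile” idea mentioned above), here is a toy sketch on simulated data; it is only an illustration of univariate quantile-to-quantile transport, not the code from the paper or from Ewen's pages,

# toy illustration of the univariate counterfactual: an observation at probability
# level u in group 0 is mapped to the same probability level in group 1
set.seed(123)
x0 = rnorm(1000, mean = 10, sd = 2)           # covariate in the control group
x1 = rnorm(1000, mean = 12, sd = 3)           # covariate in the other group
transport_1d = function(x, from, to){
  u = ecdf(from)(x)                           # probability level in the original group
  quantile(to, probs = u, names = FALSE)      # equivalent quantile in the other group
}
x_star = transport_1d(x0, from = x0, to = x1) # counterfactual values of the covariate
plot(x0, x_star)                              # the transport map is monotone (increasing)

In higher dimension this simple quantile matching is no longer available, which is where (multivariate) optimal transport comes in, as discussed in the paper.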