The initial report was published almost one year ago (https://www.institutlouisbachelier.org/…). It took me one more year of additional work to get a real textbook…
(much more soon… now I need a break…)
Our paper, Optimal vaccination policy to prevent endemicity: a stochastic model, written with Félix Foutel-Rodier and Hélène Guérin, is now available on ArXiv.
We examine here the effects of recurrent vaccination and waning immunity on the establishment of an endemic equilibrium in a population. An individual-based model that incorporates memory effects for the transmission rate during infection and subsequent immunity is introduced, considering stochasticity at the individual level. By letting the population size go to infinity, we derive a set of equations describing the large-scale behavior of the epidemic. The analysis of the model’s equilibria reveals a criterion for the existence of an endemic equilibrium, which depends on the rate of immunity loss and the distribution of time between booster doses. The outcome of a vaccination policy in this context is influenced by the efficiency of the vaccine in blocking transmissions and the distribution pattern of booster doses within the population. Strategies with evenly spaced booster shots at the individual level prove to be more effective in preventing disease spread than irregularly spaced boosters, as longer intervals without vaccination increase susceptibility and facilitate more efficient disease transmission. We provide an expression for the critical fraction of the population required to adhere to the vaccination policy in order to eradicate the disease, which resembles a well-known threshold for preventing an endemic state with an imperfect vaccine. We also investigate the consequences of unequal vaccine access in a population and prove that, under reasonable assumptions, fair vaccine allocation is the optimal strategy to prevent endemicity.
Our new paper with François Hu and Philipp Ratz, Mitigating Discrimination in Insurance with Wasserstein Barycenters, is now available on ArXiv.
The insurance industry is heavily reliant on predictions of risks based on characteristics of potential customers. Although the use of said models is common, researchers have long pointed out that such practices perpetuate discrimination based on sensitive features such as gender or race. Given that such discrimination can often be attributed to historical data biases, an elimination, or at least a mitigation, is desirable. With the shift from more traditional models to machine-learning based predictions, calls for greater mitigation have grown anew, as simply excluding sensitive variables in the pricing process can be shown to be ineffective. In this article, we first investigate why predictions are a necessity within the industry and why correcting biases is not as straightforward as simply identifying a sensitive variable. We then propose to ease the biases through the use of Wasserstein barycenters instead of simple scaling. To demonstrate the effects and effectiveness of the approach, we employ it on real data and discuss its implications.
(fictitious maps used in the article)
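To give a rough idea of the mechanism, here is a minimal univariate sketch (not the code from the paper; the scores and group sizes below are made up): in one dimension, the Wasserstein barycenter of the per-group score distributions is obtained by averaging quantile functions, and each prediction is mapped through its own group’s CDF, then through the barycenter quantile function,

set.seed(1)
n_A = 600; n_B = 400                        # hypothetical group sizes
score_A = rnorm(n_A, mean=.55, sd=.10)      # hypothetical predictions, group A
score_B = rnorm(n_B, mean=.45, sd=.12)      # hypothetical predictions, group B
p_A = n_A/(n_A+n_B); p_B = 1-p_A            # group weights
# barycenter quantile function = weighted average of the quantile functions
q_bar = function(u) p_A*quantile(score_A,u) + p_B*quantile(score_B,u)
# corrected scores: group CDF first, barycenter quantile function next
fair_A = q_bar(ecdf(score_A)(score_A))
fair_B = q_bar(ecdf(score_B)(score_B))
# the two corrected distributions should (approximately) coincide
plot(density(fair_A)); lines(density(fair_B), lty=2)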
Our new paper, with François Hu and Philipp Ratz, Fairness in Multi-Task Learning via Wasserstein Barycenters, is now available.
Algorithmic Fairness is an established field in machine learning that aims to reduce biases in data. Recent advances have proposed various methods to ensure fairness in a univariate environment, where the goal is to de-bias a single task. However, extending fairness to a multi-task setting, where more than one objective is optimised using a shared representation, remains underexplored. To bridge this gap, we develop a method that extends the definition of Strong Demographic Parity to multi-task learning using multi-marginal Wasserstein barycenters. Our approach provides a closed-form solution for the optimal fair multi-task predictor, covering both regression and binary classification tasks. We develop a data-driven estimation procedure for the solution and run numerical experiments on both synthetic and real datasets. The empirical results highlight the practical value of our post-processing methodology in promoting fair decision-making.
It will be presented in September, at the European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases (ECML PKDD 2023), in Torino.
The other day, I talked with Francesca Argiroffo, from RTS, about the decision by Allstate and State Farm to stop insuring new owners of homes or commercial buildings in California (and the links between insurance and climate change). Everything is online, in an article entitled quand les assurances n’assurent plus, un autre effet du changement climatique (with an audio version as well, though of poor quality…)
Next week, the Insurance Data Science Conference will take place at the Bayes Business School. The conference programme and abstract booklet are now online. Philipp and Olivier will present some recent work (unfortunately, I will not be able to attend, as I was already in London last month).
Olivier will present some recent work on Causal Inference and Fairness in Insurance Pricing
This Sunday, I will be presenting at the Actuarial Science Workshop of the 2023 SSC (Statistical Society of Canada) Annual Meeting in Ottawa. It is organized by Jun Cai (Waterloo), Ben Feng (Waterloo) and Emiliano Valdez (University of Connecticut); Xiaofei Shi (UofT) will also be there. My talk will be on optimal transport, in the context of fairness and discrimination in insurance. Slides are now available.
This week, I will be working in Waterloo, and give a talk on Causal Inference and Counterfactuals with Optimal Transport, With Applications in Fairness and Discrimination (based on our paper on Optimal Transport for Counterfactual Estimation, with Emmanuel Flachaire and Ewen Gallic, as well as a more recent one on Parametric Fair Projection with Statistical Guarantees, with François Hu and Philipp Ratz).
After this week in the UK, I am on my way to Brussels, before going to Leuven, visiting friends and colleagues.
Time to go to Leuven, for a workshop on Foundations and Applications of Decentralized Risk Sharing. I will give a talk on Risk Sharing on Irregular Networks, based on recent work with Philipp Ratz.
After a couple of days in London, I will spend some time in Oxford, working with Félix Foutel-Rodier, who was with us in Montréal last year as a postdoctoral fellow, working on mathematical models of pandemics (we plan to finalize our paper and upload it to arXiv very soon).
Tomorrow afternoon, I will give a talk at the Bayes Business School (City University, London, UK), on Fairness and Discrimination in Actuarial Predictive Models using Optimal Transport. Slides are now online.
This week, Sam will be in Valencia (Spain) to present our work on Data Augmentation for Imbalanced Regression.
In this work, we consider the problem of imbalanced data in a regression framework when the imbalanced phenomenon concerns continuous or discrete covariates. Such a situation can lead to biases in the estimates. In this case, we propose a data augmentation algorithm that combines a weighted resampling (WR) and a data augmentation (DA) procedure. In a first step, the DA procedure permits exploring a wider support than the initial one. In a second step, the WR method drives the exogenous distribution to a target one. We discuss the choice of the DA procedure through a numerical study that illustrates the advantages of this approach. Finally, an actuarial application is studied.
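To illustrate the weighted-resampling step (a minimal sketch only, not the algorithm from the paper; the covariate and the uniform target below are made up), one can reweight observations by the ratio of a target density to a kernel estimate of the current one, then resample,

set.seed(1)
x = rexp(1000, rate=1)                      # imbalanced covariate (mostly small values)
target_density = function(z) dunif(z, 0, 4) # hypothetical target distribution
fhat = density(x)                           # kernel estimate of the current density
w = target_density(x) / approx(fhat$x, fhat$y, xout=x)$y  # importance weights
idx = sample(seq_along(x), size=1000, replace=TRUE, prob=w)
x_rebalanced = x[idx]                       # distribution now closer to the target
hist(x_rebalanced, breaks=30)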
Our paper, Alternative fixed-effects panel model using weighted asymmetric least squares regression, with Amadou and Karim, is now published by Statistical Methods & Applications.
A fixed-effects model estimates the regressor effects on the mean of the response, which is inadequate to account for heteroscedasticity. In this paper, we adapt the asymmetric least squares (expectile) regression to the fixed-effects panel model and propose a new model: expectile regression with fixed effects (ERFE). The ERFE model applies the within transformation strategy to solve the incidental parameter problem and estimates the regressor effects on the expectiles of the response distribution. The ERFE model captures the data heteroscedasticity and eliminates any bias resulting from the correlation between the regressors and the omitted factors. We derive the asymptotic properties of the ERFE estimators and suggest robust estimators of its covariance matrix. Our simulations show that the ERFE estimator is unbiased and outperforms its competitors. Our real data analysis shows its ability to capture data heteroscedasticity.
More online… doi:10.1007/s10260-023-00692-3
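For readers who have not met expectiles before, here is a minimal sketch of asymmetric least squares in the plain cross-sectional case (not the ERFE estimator itself), fitted by iteratively reweighted least squares, with weight τ above the fitted line and 1−τ below,

# plain expectile regression, by iteratively reweighted least squares
expectile_lm = function(y, X, tau=.5, max_iter=100){
  w = rep(1, length(y))
  for(i in 1:max_iter){
    beta = lm.wfit(X, y, w)$coefficients
    w_new = ifelse(y > drop(X %*% beta), tau, 1-tau)  # asymmetric weights
    if(all(w_new == w)) break
    w = w_new
  }
  beta
}
set.seed(1)
x = runif(500)
y = 1 + 2*x + rnorm(500, sd=.5+x)   # heteroscedastic noise
X = cbind(1, x)
expectile_lm(y, X, tau=.9)  # slope steeper than at tau=.5, since the spread grows with x

Under heteroscedasticity, the fitted slope varies with τ, which is precisely what a mean-based estimator cannot capture.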
Just some simple code to illustrate some points we will discuss this week, in the last course on GLMs before the final exam. We have mentioned that the Gamma distribution belongs to the exponential family, so we can run a regression and compute the associated AIC,
> set.seed(123)
> test.data = rgamma(n=2000, scale=1, shape=1)
> m1 = glm( test.data~1, family=Gamma(link=log))
> AIC(m1)
[1] 3997.332
The Gamma distribution is also a special case of the Tweedie distribution, with variance power 2,
> library(statmod)
> library(tweedie)
> m2 = glm( test.data~1, family=tweedie(link.power=0, var.power=2) )
> AIC(m2)
[1] NA
Unfortunately, we cannot compute the AIC directly, and we need a trick: the AICtweedie function, from the tweedie package.
> AICtweedie(m2)
[1] 3997.332
Of course, we can do the same with the Poisson distribution, which also belongs to the exponential family
> test.data = rpois(n=2000, lambda=1)
> m3 = glm( test.data~1, family=poisson(link=log))
> m4 = glm( test.data~1, family=tweedie(link.power=0, var.power=1) )
> AIC(m3)
[1] 5124.61
Here, we have a problem with the AICtweedie function
> AICtweedie(m4)
[1] Inf
because we need to specify the dispersion parameter (equal to 1 for the Poisson distribution),
> AICtweedie(m4, dispersion=1)
[1] 5124.61
We can now check: we generate a Gamma sample and fit various Tweedie distributions, simply changing the variance function (which is a power function),
> set.seed(123)
> test.data = rgamma(n=2000, scale=1, shape=1)
> glmtw = function(t){
+   m1 = glm( test.data~1, family=tweedie(link.power=0, var.power=t) )
+   d = NULL
+   if(t == 1) d = 1
+   AICtweedie(m1, dispersion = d)
+ }
> vt = seq(1,2.7,length=100)
> vg = Vectorize(glmtw)(vt)
> plot(vt,vg,log="y",type="l")
The minimum of the AIC is close to 2, which corresponds to the Gamma distribution.
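We can also extract, numerically, the power that minimizes the AIC (on this simulated sample, it should be close to 2),

> vt[which.min(vg)]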
We can also try with a Poisson sample,
> set.seed(123)
> test.data = rpois(n=2000, lambda=1)
> glmtw = function(t){
+   m1 = glm( test.data~1, family=tweedie(link.power=0, var.power=t) )
+   d = NULL
+   if(t == 1) d = 1
+   AICtweedie(m1, dispersion = d)
+ }
> vt = seq(1,2,length=100)
> vg = Vectorize(glmtw)(vt)
> plot(vt,vg,log="y",type="l")
The minimum is now close to 1, corresponding to the Poisson distribution (for which the variance is equal to the mean).
Let us now try some compound Poisson distributions, simulated with the following function,
> rcpd=function(n,lambda,shape,scale){
+   N=rpois(n,lambda)                        # number of jumps, per observation
+   X=rgamma(sum(N),shape=shape, scale=scale) # Gamma jumps
+   I=as.factor(rep(1:n,N))                  # observation index of each jump
+   S=tapply(X,I,sum)                        # sum of the jumps, per observation
+   V=as.numeric(S[as.character(1:n)])
+   V[is.na(V)]=0                            # observations with no jump get 0
+   return(V)}
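As a quick sanity check, the mean of such a compound Poisson sum is λ·α·θ, the Poisson mean times the mean of the Gamma jumps, so the following should return a value close to 1,

> mean(rcpd(n=100000, lambda=1, shape=1, scale=1))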
Let us generate some compound Poisson random variables, with a Poisson distribution with mean 1, and Gamma jumps with mean and variance 1,
> set.seed(123)
> test.data = rcpd(n=2000, 1,1,1)
> glmtw = function(t){
+   m1 = glm( test.data~1, family=tweedie(link.power=0, var.power=t) )
+   d = NULL
+   if(t == 1) d = 1
+   AICtweedie(m1, dispersion = d)
+ }
> vt = seq(1.1,1.9,length=100)
> vg = Vectorize(glmtw)(vt)
> plot(vt,vg,log="y",type="l")
Based on the AIC, the optimal value for the power parameter is here 1.5 (relationships between the Tweedie parameters and the compound Poisson ones are given in the slides).
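More precisely, if the counts are Poisson and the jumps are Gamma with shape parameter α, the Tweedie power parameter is p = (α+2)/(α+1); with α = 1, as here, that gives p = 3/2, consistent with the minimum of the AIC curve.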
We can now play a little bit with the variance of the jumps: they still have mean 1, but now a smaller variance,
> set.seed(123)
> test.data = rcpd(n=2000, 1,3,1/3)
> vt = seq(1.05,1.95,length=100)
> vg = Vectorize(glmtw)(vt)
> plot(vt,vg,log="y",type="l")
The optimal power for the Tweedie is now closer to 1, i.e. closer to the Poisson case (with α = 3, the formula above gives p = 5/4),
while if we increase the variance of the jumps
> set.seed(123)
> test.data = rcpd(n=2000, 1,1/3,3)
> vt = seq(1.05,1.95,length=100)
> vg = Vectorize(glmtw)(vt)
> plot(vt,vg,log="y",type="l")
the optimal power is higher, closer to the Gamma distribution (with α = 1/3, the formula gives p = 7/4).