Tag Archives: conditional

Sequential Conditional Transport on Probabilistic Graphs for Interpretable Counterfactual Fairness

Our paper “Sequential Conditional Transport on Probabilistic Graphs for Interpretable Counterfactual Fairness“, written with Agathe Fernandes Machado and Ewen Gallic, is now online

In this paper, we link two existing approaches to derive counterfactuals: adaptations based on a causal graph, as suggested in Plečko and Meinshausen (2020), and optimal transport, as in De Lara et al. (2024). We extend “Knothe’s rearrangement” (Bonnotte, 2013) and “triangular transport” (Zech and Marzouk, 2022) to probabilistic graphical models, and use this counterfactual approach, referred to as sequential transport, to discuss individual fairness. After establishing the theoretical foundations of the proposed method, we demonstrate its application through numerical experiments on both synthetic and real datasets.

Inference and autoregressive processes

Consider a (stationary) autoregressive process, say of order 2,
$$Z_t = \phi_1 Z_{t-1} + \phi_2 Z_{t-2} + \varepsilon_t$$
for some white noise $(\varepsilon_t)$ with variance $\sigma^2$. Here is some code to generate such a process,

> phi1=.5
> phi2=-.4
> sigma=1.5
> set.seed(1)
> n=240
> WN=rnorm(n,sd=sigma)
> Z=rep(NA,n)
> Z[1:2]=rnorm(2,0,1)
> for(t in 3:n){Z[t]=phi1*Z[t-1]+phi2*Z[t-2]+WN[t]}

Here, we have to estimate two sets of parameters: the autoregressive coefficients $\phi_1,\phi_2$, and the variance $\sigma^2$ of the innovation process $(\varepsilon_t)$. There are (at least) three techniques to estimate those parameters.

  • using least squares regression

A natural idea is to see here a regression model and thus, if we consider a matrix formulation,
$$\begin{pmatrix} Z_3\\ \vdots\\ Z_n \end{pmatrix} = \begin{pmatrix} Z_2 & Z_1\\ \vdots & \vdots\\ Z_{n-1} & Z_{n-2} \end{pmatrix}\begin{pmatrix}\phi_1\\ \phi_2\end{pmatrix} + \begin{pmatrix}\varepsilon_3\\ \vdots\\ \varepsilon_n\end{pmatrix}$$

Here we can run a (conditional) ordinary least squares estimation,

> base=data.frame(Y=Z[3:n],X1=Z[2:(n-1)],X2=Z[1:(n-2)])
> regression=lm(Y~0+X1+X2,data=base)
> summary(regression)

Call:
lm(formula = Y ~ 0 + X1 + X2, data = base)

Residuals:
Min      1Q  Median      3Q     Max
-4.3491 -0.8890 -0.0762  0.9601  3.6105

Coefficients:
Estimate Std. Error t value Pr(>|t|)
X1  0.45107    0.05924   7.615 6.34e-13 ***
X2 -0.41454    0.05924  -6.998 2.67e-11 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 1.449 on 236 degrees of freedom
Multiple R-squared: 0.2561,	Adjusted R-squared: 0.2497
F-statistic: 40.61 on 2 and 236 DF,  p-value: 6.949e-16

> regression$coefficients
X1         X2
0.4510703 -0.4145365
> summary(regression)$sigma
[1] 1.449276
  • using Yule-Walker equations

As we’ve seen in class, we can easily get the following equations for the autocovariance function,
$$\begin{cases}\gamma(1) = \phi_1 \gamma(0) + \phi_2 \gamma(1)\\ \gamma(2) = \phi_1 \gamma(1) + \phi_2 \gamma(0)\end{cases}$$

which can also be written
$$\begin{pmatrix}\gamma(1)\\ \gamma(2)\end{pmatrix} = \begin{pmatrix}\gamma(0) & \gamma(1)\\ \gamma(1) & \gamma(0)\end{pmatrix}\begin{pmatrix}\phi_1\\ \phi_2\end{pmatrix}$$

So we just have to solve a simple linear system of equations. Note that if we divide by the variance, those equations can be written in terms of the autocorrelation function,
$$\begin{pmatrix}\rho(1)\\ \rho(2)\end{pmatrix} = \begin{pmatrix}1 & \rho(1)\\ \rho(1) & 1\end{pmatrix}\begin{pmatrix}\phi_1\\ \phi_2\end{pmatrix}$$
The code is the following

> rho1=cor(Z[1:(n-1)],Z[2:n])
> rho2=cor(Z[1:(n-2)],Z[3:n])
> A=matrix(c(1,rho1,rho1,1),2,2)
> b=matrix(c(rho1,rho2),2,1)
> (PHI=solve(A,b))
[,1]
[1,]  0.4517579
[2,] -0.4155920

Now, we need to extract the estimated innovation process from this set of parameters (note that it would be possible to include the variance term in the Yule-Walker equations, to get a three-dimensional linear system, as sketched below)

> estWN=base$Y-(PHI[1]*base$X1+PHI[2]*base$X2)
> sd(estWN)
[1] 1.445706

This estimator is probably not the best one (we can take into account that we’ve lost two degrees of freedom), but as a starting point, let us consider this one.
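As a side note, here is a minimal sketch of the three-dimensional version mentioned above, where the variance of the innovation process is recovered directly from the empirical autocovariances, using the additional equation $\gamma(0)=\phi_1\gamma(1)+\phi_2\gamma(2)+\sigma^2$ (the object names below are my own),

gamma0=var(Z)                     # empirical gamma(0)
gamma1=cov(Z[1:(n-1)],Z[2:n])     # empirical gamma(1)
gamma2=cov(Z[1:(n-2)],Z[3:n])     # empirical gamma(2)
G=matrix(c(gamma0,gamma1,gamma1,gamma0),2,2)
phiYW=solve(G,c(gamma1,gamma2))   # same coefficients as before, up to edge effects
sigma2=gamma0-phiYW[1]*gamma1-phiYW[2]*gamma2
sqrt(sigma2)                      # estimated standard deviation of the innovation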

  • using (conditional) likelihood estimators

Finally, we can assume some distribution for the innovation process. The standard model is a Gaussian model, i.e.
$$\varepsilon_t\sim\mathcal{N}(0,\sigma^2)$$
In that case, the conditional log-likelihood (conditional since the first two observations are taken as fixed) is
$$\log\mathcal{L}(\phi_1,\phi_2,\sigma^2)=\sum_{t=3}^{n}\log\varphi\left(z_t;\,\phi_1 z_{t-1}+\phi_2 z_{t-2},\,\sigma^2\right)$$
where $\varphi(\cdot;\mu,\sigma^2)$ denotes the density of a Gaussian distribution with mean $\mu$ and variance $\sigma^2$.

> CondLogLik=function(A,TS){
+ phi1=A[1];  phi2=A[2]
+ sigma=A[3]	; L=0
+ for(t in 3:length(TS)){
+ L=L+dnorm(TS[t],mean=phi1*TS[t-1]+
+ phi2*TS[t-2],sd=sigma,log=TRUE)}
+ return(-L)}

Now, we can run standard optimization procedures (note that the function above returns minus the log-likelihood, since optim performs a minimization),

> LogL=function(A) CondLogLik(A,TS=Z)
> optim(c(0,0,1),LogL)
$par
[1]  0.4509685 -0.4144938  1.4430930

$value
[1] 425.0164

$counts
function gradient
88       NA

$convergence
[1] 0

$message
NULL

Here, our three estimators are rather close. Actually, if we generate 1,000 time series (of size 240), these are the boxplots of our three estimators, first for the first-order autoregressive coefficient,

for the second one,

and finally for the standard deviation of the innovation process

All those estimators behave nicely, and are rather close. Note that they all might be biased, but they are consistent (see Davidson and MacKinnon for instance, in their book, for more details).
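Here is a minimal sketch of that simulation experiment, for the first coefficient, reusing the quantities and the CondLogLik function defined above (the loop below is my own reconstruction, not necessarily the code used to produce the figures),

nsim=1000
est=matrix(NA,nsim,3)
colnames(est)=c("OLS","Yule-Walker","MLE")
for(s in 1:nsim){
  WN=rnorm(n,sd=sigma)
  Z=rep(NA,n); Z[1:2]=rnorm(2,0,1)
  for(t in 3:n){Z[t]=phi1*Z[t-1]+phi2*Z[t-2]+WN[t]}
  base=data.frame(Y=Z[3:n],X1=Z[2:(n-1)],X2=Z[1:(n-2)])
  est[s,1]=lm(Y~0+X1+X2,data=base)$coefficients[1]
  r1=cor(Z[1:(n-1)],Z[2:n]); r2=cor(Z[1:(n-2)],Z[3:n])
  est[s,2]=solve(matrix(c(1,r1,r1,1),2,2),c(r1,r2))[1]
  est[s,3]=optim(c(0,0,1),function(A) CondLogLik(A,TS=Z))$par[1]
}
boxplot(est)   # boxplots of the three estimators of the first coefficient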

Proving tautological versus trivial results in mathematics

There is something that might be fun in mathematics, which is the connection between trivial, tautological and difficult questions. Sometimes, things are so intuitive that they seem obvious. But mathematicians aren’t Jedis, and they should not trust their intuition too much… I mean, intuition is fine, but it is not a proof. It is like those standard results we learn in topology courses, e.g. “the closure of an open ball is not necessarily the closed ball”. The other thing is that, after a while, you try to prove something, until someone makes you realize that it is the definition…

And this morning, while I was trying to make a coffee, @renaudjf came up with a simple question (yes, it always starts like that). Consider the standard algorithm to generate a conditional random variable. Assume that $\Theta$ has a priori distribution $\pi$, and that $X$, given $\Theta=\theta$, has (conditional) distribution $F_{X|\theta}$.

The standard idea, based on Monte Carlo simulation, to generate values of $X$, is
  •  step 1: generate $\theta$ from the distribution $\pi$ of $\Theta$
  •  step 2: given that draw of $\theta$, generate $x$ from the conditional distribution $F_{X|\theta}$
“Can we prove that we actually generate from the (true, maybe hard to characterize) non-conditional distribution of $X$? Or is it just trivial?” After raising those philosophical questions, we came to the point that if it was trivial, then we should be able to prove it. A standard way of writing the algorithm is to use the quantile-based technique
  •  $\theta = F_{\Theta}^{-1}(U_1)$ with $U_1\sim\mathcal{U}([0,1])$,
  •  $X = F_{X|\theta}^{-1}(U_2)$ with $U_2\sim\mathcal{U}([0,1])$,
For instance, to generate a negative binomial distribution (a Gamma mixture of Poisson distributions),
n=1
theta=rgamma(n,3,3)       # step 1: draw theta from the Gamma(3,3) prior
X=rpois(n,lambda=theta)   # step 2: draw X from the Poisson distribution with parameter theta
Thus, let $X=F_{X|\Theta}^{-1}(U_2)$ where $\Theta=F_{\Theta}^{-1}(U_1)$, and where $U_1$ and $U_2$ are two independent random variables with a uniform distribution on the unit interval. Let us try to derive its distribution, i.e.
$$\mathbb{P}(X\leq x)=\mathbb{P}\left(U_2\leq F_{X|\Theta}(x)\right)=\mathbb{E}\left[F_{X|\Theta}(x)\right]$$
so
$$\mathbb{P}(X\leq x)=\int_0^1 F_{X|\Theta=F_{\Theta}^{-1}(u)}(x)\,du$$
if we consider the following change of variable, $\theta=F_{\Theta}^{-1}(u)$, i.e. $u=F_{\Theta}(\theta)$ and $du=\pi(\theta)\,d\theta$, we get
$$\mathbb{P}(X\leq x)=\int F_{X|\Theta=\theta}(x)\,\pi(\theta)\,d\theta$$
which is exactly the non-conditional distribution of $X$.
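Going back to the negative binomial example, we can also check numerically that the two-step algorithm does generate from the correct (non-conditional) distribution: a Gamma(3,3) mixture of Poisson distributions is a negative binomial distribution with size 3 and probability 3/4 (the number of simulations below and the comparison with dnbinom are my own additions),

set.seed(1)
nsim=1e5
theta=rgamma(nsim,3,3)             # step 1: theta from the Gamma(3,3) prior
X=rpois(nsim,lambda=theta)         # step 2: X given theta, from a Poisson distribution
round(rbind(empirical=table(factor(X,levels=0:5))/nsim,
            theoretical=dnbinom(0:5,size=3,prob=3/4)),4)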
And then, you’re quite happy because you’ve been able to prove a trivial result ! But next time, I promise, we’ll try to derive an amazing theorem that will change humanity… but next time only, first, let us prove trivial results.

La Disparition, by Georges Perec, and conditional probabilities

As mentioned here (about dust on keyboards), E is undoubtedly the most important letter in French… So, what else do we use if we do not use the E ?

Here are the empirical frequencies we obtained from two books, by Mauss and Durkheim (which seem to be representative of what we usually observe)

So if we look at the conditional distribution of the letters, given that the letter E is not used, we obtain

But can this conditional distribution be interpreted? For instance, if the E key on your keyboard is broken, will the distribution of the letters you use look like that? We might think that the answer is no, because if we cannot use a letter, we adapt, and try to use words that do not contain an E.
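As a side note, here is a minimal sketch of how those letter frequencies, and the conditional distribution given that E is not used, can be computed in R (the file name below is just a placeholder, and accented letters are simply dropped),

txt=tolower(paste(readLines("mauss.txt",encoding="UTF-8"),collapse=" "))
x=unlist(strsplit(txt,""))
x=x[x %in% letters]                          # keep only the letters a-z
freq=table(x)/length(x)                      # empirical distribution of the letters
cond=freq[names(freq)!="e"]/(1-freq["e"])    # conditional distribution, given "no E"
sort(cond,decreasing=TRUE)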

Actually, it is possible to see how we would write if we tried to avoid the letter E: in 1969, Georges Perec published a book, “La Disparition”, where the challenge was to write a novel (with an actual story) without using the letter E. If we look at the distribution of letters in that book, we have

We can see that we still need to use vowels: if we look at the difference between Perec and the conditional distribution obtained from Mauss and Durkheim, we have

i.e. if we cannot use E, then we use much more A, or I, or even U. It is clearly a substitution effect: the French language needs a constant dose of vowels. In Mauss and Durkheim, 37% of the letters are vowels (17% for E and 20% for A, I, O, U and Y). In Perec, A, I, O, U and Y now represent 30% of the letters…