
Fairness and discrimination, PhD Course, #10 Mitigation, Post-processing

For the last part of our graduate course, we will discuss mitigation further and, after pre-processing and in-processing techniques, we will present post-processing ones. The idea is simple: we have built a model that could be seen as discriminatory, but since it is otherwise a “good” model, we still want to use the predictions it produces. Quite heuristically, in the context of a binary sensitive attribute, we have two sets of predictions, one for each group, and the fair model should probably be “in between”. That is a natural idea when dealing with convex objects, and it is related to averages, centroids and barycenters.

As mentioned above, there are several ways to define such a quantity,

but the most interesting one is based on optimization.
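One standard way to write it, as a Fréchet mean (with notation introduced here: points x_i, weights \lambda_i summing to one, and a distance d), is

\[
b^\star \in \underset{b}{\operatorname{argmin}}\ \sum_{i} \lambda_i\, d(x_i,b)^2.
\]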

Interestingly, this idea can be extended to more complex objects than points in some metric space, and more generally to distributions. And since we have seen several distances on the set of distributions, we can consider for instance the Wasserstein (2) barycenter, as in Agueh and Carlier (2011).
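With the notation above, it is defined, for distributions \mu_1,\dots,\mu_n and weights \lambda_i, as

\[
\mu^\star \in \underset{\mu}{\operatorname{argmin}}\ \sum_{i} \lambda_i\, W_2^2(\mu_i,\mu).
\]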

An interesting point is that, in the univariate setting, there is a simple connection with optimal transport, where averages of push-forward mappings are considered,

(even if it remains computationally difficult to obtain)
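Concretely, in the univariate case, the quantile function of the barycenter is simply the weighted average of the quantile functions, and the (monotone) optimal map sending \mu_i to the barycenter is obtained by composing quantile and distribution functions,

\[
F_{\mu^\star}^{-1}(u)=\sum_i \lambda_i\, F_{\mu_i}^{-1}(u),
\qquad
T_i = F_{\mu^\star}^{-1}\circ F_{\mu_i}.
\]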

In our context, given a model, consider the two scores m(\boldsymbol{x},s=A) and m(\boldsymbol{x},s=B), on the two sensitive groups, and then, quite naturally, the fair barycenter.
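One way to write that post-processed score, using the univariate construction above and weights \lambda_A and \lambda_B (typically the group proportions, an assumption made here), is, for an individual in group s,

\[
m^\star(\boldsymbol{x},s)=\lambda_A\, F_A^{-1}\big(F_s(m(\boldsymbol{x},s))\big)+\lambda_B\, F_B^{-1}\big(F_s(m(\boldsymbol{x},s))\big),
\]

where F_A and F_B denote the distributions of the scores in the two groups.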

The heuristic interpretation is simple and, interestingly, with that new model m^\star(\boldsymbol{x}), individuals will be ordered the same way within each group, since the transformation applied within a group is monotone. In the Gaussian case, it is also possible to compute those barycenters explicitly.

To do so, we need to define \boldsymbol{\Sigma}^t, for t\in[0,1], including some square root of \boldsymbol{\Sigma}, i.e. \boldsymbol{\Sigma}^{1/2}, that we expect to be symmetric. For that, we simply need the matrix exponential

and the logarithm

Then define the square root, for instance,
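as \boldsymbol{\Sigma}^{1/2}=\exp\big(\frac{1}{2}\log\boldsymbol{\Sigma}\big). A quick numerical sketch (in Python, with the scipy library and an arbitrary covariance matrix chosen for illustration):

```python
import numpy as np
from scipy.linalg import expm, logm

def matrix_power(Sigma, t):
    """Sigma^t for a symmetric positive-definite matrix,
    computed as exp(t * log(Sigma))."""
    return np.real(expm(t * logm(Sigma)))

Sigma = np.array([[1.0, 0.5],
                  [0.5, 2.0]])                # arbitrary covariance, for illustration
S_half = matrix_power(Sigma, 0.5)             # symmetric square root
print(np.allclose(S_half @ S_half, Sigma))    # True
```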

Here, one can prove that the barycenter of Gaussian vectors is Gaussian. Its mean is the average of the means, while a slightly more complex formula holds for the variance,

based on matrix equations,
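namely, following Agueh and Carlier (2011), the barycenter of the \mathcal{N}(\boldsymbol{\mu}_i,\boldsymbol{\Sigma}_i)’s with weights \lambda_i is \mathcal{N}(\boldsymbol{\mu}^\star,\boldsymbol{\Sigma}^\star), where

\[
\boldsymbol{\mu}^\star=\sum_i\lambda_i\,\boldsymbol{\mu}_i,
\qquad
\boldsymbol{\Sigma}^\star=\sum_i\lambda_i\Big((\boldsymbol{\Sigma}^\star)^{1/2}\,\boldsymbol{\Sigma}_i\,(\boldsymbol{\Sigma}^\star)^{1/2}\Big)^{1/2},
\]

the covariance being the solution of a fixed-point matrix equation.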

In dimension 2, we can write it more simply

Interestingly, the average of the variances is larger than the variance of the barycenter. And a natural property can be obtained when the variance matrices are diagonalizable in the same basis,
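namely, when all the covariance matrices commute, the fixed-point equation simplifies and the covariance of the barycenter is simply the square of the weighted average of the square roots,

\[
\boldsymbol{\Sigma}^\star=\Big(\sum_i\lambda_i\,\boldsymbol{\Sigma}_i^{1/2}\Big)^2.
\]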

Here we can illustrate iso-probability curves.
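A small numerical sketch (in Python, with two hypothetical Gaussian distributions, and a standard fixed-point iteration to solve the matrix equation above):

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.linalg import sqrtm
from scipy.stats import multivariate_normal

def barycenter_cov(covs, weights, n_iter=50):
    """Covariance of the Wasserstein barycenter of Gaussian distributions,
    obtained by a fixed-point iteration on the matrix equation above."""
    S = np.mean(covs, axis=0)
    for _ in range(n_iter):
        R = np.real(sqrtm(S))
        Rinv = np.linalg.inv(R)
        M = sum(w * np.real(sqrtm(R @ C @ R)) for w, C in zip(weights, covs))
        S = Rinv @ M @ M @ Rinv
    return S

# two hypothetical groups
mu_A, Sigma_A = np.array([0., 0.]), np.array([[1.0, 0.6], [0.6, 1.0]])
mu_B, Sigma_B = np.array([2., 1.]), np.array([[2.0, -0.4], [-0.4, 0.5]])
mu_star = 0.5 * mu_A + 0.5 * mu_B
Sigma_star = barycenter_cov([Sigma_A, Sigma_B], [0.5, 0.5])

# iso-probability curves of the two groups and of their barycenter
x, y = np.meshgrid(np.linspace(-3, 5, 200), np.linspace(-3, 4, 200))
grid = np.dstack((x, y))
for mu, Sig in [(mu_A, Sigma_A), (mu_B, Sigma_B), (mu_star, Sigma_star)]:
    plt.contour(x, y, multivariate_normal(mu, Sig).pdf(grid), levels=5)
plt.show()
```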

More generally, we can use histograms

But it becomes hard to compute

An easier approach is to use numerical simulations and a kernel estimate to obtain a smooth density,
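for instance with a minimal sketch in Python, using two hypothetical univariate samples and the quantile-averaging formula from the univariate case:

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import gaussian_kde

rng = np.random.default_rng(1)
x_A = rng.gamma(2.0, 1.0, size=5000)   # hypothetical sample for group A
x_B = rng.gamma(5.0, 0.5, size=5000)   # hypothetical sample for group B

# simulate from the (univariate) barycenter by averaging empirical quantiles
u = rng.uniform(size=5000)
x_star = 0.5 * np.quantile(x_A, u) + 0.5 * np.quantile(x_B, u)

# kernel estimate of the barycenter density
grid = np.linspace(0, 8, 300)
plt.plot(grid, gaussian_kde(x_star)(grid))
plt.show()
```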

We can use it on pictures. For instance, we have several observations of a handwritten “3”,

We can compute their barycenter, and generate new images based on it.
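A possible sketch for such image barycenters, assuming the POT (Python Optimal Transport) library and a hypothetical array of grey-level images of handwritten “3”s, each normalised to sum to one:

```python
import numpy as np
import ot  # POT, Python Optimal Transport

# hypothetical stack of images, shape (n_images, height, width)
images = np.load("threes.npy")
images = images / images.sum(axis=(1, 2), keepdims=True)

# entropic-regularised Wasserstein barycenter of the images
barycenter = ot.bregman.convolutional_barycenter2d(images, reg=0.002)
```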

Quite naturally, once we have barycenters, we can consider geodesics

and apply it again to Gaussian vectors
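Recall that the Wasserstein geodesic between two distributions is McCann’s displacement interpolation \mu_t=\big((1-t)\,\mathrm{id}+t\,T\big)_{\#}\mu_0, where T is the optimal map from \mu_0 to \mu_1. Between two Gaussian distributions, \mu_t remains Gaussian,

\[
\mu_t=\mathcal{N}\big((1-t)\boldsymbol{\mu}_0+t\boldsymbol{\mu}_1,\;\boldsymbol{\Sigma}_t\big),
\qquad
\boldsymbol{\Sigma}_t=\big((1-t)\mathbb{I}+t\boldsymbol{A}\big)\boldsymbol{\Sigma}_0\big((1-t)\mathbb{I}+t\boldsymbol{A}\big),
\]

where \boldsymbol{A}=\boldsymbol{\Sigma}_0^{-1/2}\big(\boldsymbol{\Sigma}_0^{1/2}\boldsymbol{\Sigma}_1\boldsymbol{\Sigma}_0^{1/2}\big)^{1/2}\boldsymbol{\Sigma}_0^{-1/2}.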

On our dataset, with scores, we have

We can then compute the “barycenter score”, which will be, for people in group A

and people in group B
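With equal weights for the two groups (an assumption made here to keep the formulas short), the quantile-based construction above gives, respectively,

\[
m^\star(\boldsymbol{x},A)=\tfrac{1}{2}\,m(\boldsymbol{x},A)+\tfrac{1}{2}\,F_B^{-1}\big(F_A(m(\boldsymbol{x},A))\big),
\qquad
m^\star(\boldsymbol{x},B)=\tfrac{1}{2}\,F_A^{-1}\big(F_B(m(\boldsymbol{x},B))\big)+\tfrac{1}{2}\,m(\boldsymbol{x},B).
\]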

Consider now the scores of three models, on the motor dataset.

The score m^\star for people in group A is

while the score m^\star for people in group B is
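A minimal sketch of the empirical computation (in Python, with hypothetical vectors of predicted scores for the two groups, and equal weights):

```python
import numpy as np

def barycenter_scores(m_A, m_B, lam=0.5):
    """Post-processed ('barycenter') scores for the two groups, obtained by
    averaging the empirical quantile functions of the score distributions."""
    def F(sample, x):   # empirical cdf of `sample`, evaluated at x
        return np.searchsorted(np.sort(sample), x, side="right") / len(sample)
    def Q(sample, u):   # empirical quantile function of `sample`
        return np.quantile(sample, np.clip(u, 0, 1))
    mstar_A = lam * m_A + (1 - lam) * Q(m_B, F(m_A, m_A))
    mstar_B = lam * Q(m_A, F(m_B, m_B)) + (1 - lam) * m_B
    return mstar_A, mstar_B

# hypothetical scores, for illustration
rng = np.random.default_rng(0)
m_A = rng.beta(2, 5, size=1000)
m_B = rng.beta(4, 3, size=1000)
mstar_A, mstar_B = barycenter_scores(m_A, m_B)
```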

We can now compare predictions for people in the two groups

Numerically, we obtain

So we are now able to mitigate unfair scores.