Tag Archives: Rényi

Fairness and discrimination, PhD Course, #3 Machine Learning, losses and distances

For the third course, we will get back a bit to machine learning (slides are still online on the github repository). The starting point will be loss functions and risk.

Loss functions and risk

A general definition of a loss is that it is non-negative, and null on the diagonal, \ell(y,y)=0. As we will discuss further, it is neither a distance, nor a dissimilarity measure.

Then, define the empirical risk, \widehat{R}_n(m)=\frac{1}{n}\sum_{i=1}^n\ell(y_i,m(\boldsymbol{x}_i)) (and the associated empirical risk minimization principle, as coined in Vapnik (1991)).

Given a loss \ell and some probabilistic space, define the optimal decision, also called Bayes decision rule, m^\star\in\underset{m}{\text{argmin}}\,\mathbb{E}[\ell(Y,m(\boldsymbol{X}))].

And instead of the risk of a model, R(m)=\mathbb{E}[\ell(Y,m(\boldsymbol{X}))], define the excess of risk, R(m)-R(m^\star).

A classical loss for a classifier is the misclassification loss \ell_{0/1}(y,\widehat{y})=\boldsymbol{1}(y\neq\widehat{y}).

In that case, Bayes decision rule is m^\star(\boldsymbol{x}) = \boldsymbol{1}(\mu(\boldsymbol{x})>1/2) =\begin{cases}1 \text{ if }\mu(\boldsymbol{x})>1/2\\0 \text{ if }\mu(\boldsymbol{x})\leq1/2\\\end{cases} where \mu(\boldsymbol{x})=\mathbb{P}(Y=1|\boldsymbol{X}=\boldsymbol{x}), and where (of course) one needs to know \mu; otherwise, we can consider some plug-in estimator, based on \widehat\mu. For a continuous variable y, consider the quadratic loss \ell_2(y,\widehat{y})=(y-\widehat{y})^2.

In that case, Bayes decision rule (the optimal model) is the conditional expectation, m^\star(\boldsymbol{x})=\mathbb{E}[Y|\boldsymbol{X}=\boldsymbol{x}].

Observe that we can also define the quantile loss, \ell_\tau(y,\widehat{y})=(y-\widehat{y})(\tau-\boldsymbol{1}(y<\widehat{y})) (or the expectile loss, \ell_\tau(y,\widehat{y})=|\tau-\boldsymbol{1}(y<\widehat{y})|(y-\widehat{y})^2).

Observe that this loss is not symmetric (unless \tau=1/2).
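To make that asymmetry more concrete, here is a quick sketch (in Python, on arbitrary simulated data): minimizing the average quantile loss over a constant prediction returns the empirical quantile of level \tau,

import numpy as np

def pinball_loss(y, y_hat, tau):
    # quantile (pinball) loss: (y - y_hat) * (tau - 1(y < y_hat))
    return (y - y_hat) * (tau - (y < y_hat))

rng = np.random.default_rng(0)
y = rng.lognormal(size=1000)              # some skewed sample
tau = 0.9

# minimize the empirical risk over a grid of constant predictions
grid = np.linspace(y.min(), y.max(), 2001)
risk = [pinball_loss(y, c, tau).mean() for c in grid]
c_star = grid[np.argmin(risk)]

print(c_star, np.quantile(y, tau))        # the two values should be close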

From loss functions to distances

Let us discuss a bit more the fact that losses are not distances. As mentioned, a loss is neither necessarily symmetric nor separable,

But furthermore, it has no reason to satisfy the triangle inequality. Actually, if d is a distance, it is very likely that d^2 is not (since squaring is not a subadditive transformation).

Another related concept is that of similarity, or dissimilarity.

Another one is the concept of divergence, that we will use much more. For instance, the Bregman divergence associated with a strictly convex (differentiable) function \varphi is B_\varphi(y,y')=\varphi(y)-\varphi(y')-\langle\nabla\varphi(y'),y-y'\rangle,

which satisfies desirable properties: it is non-negative, and null if and only if y=y'.

Interestingly, it is possible to define “projections” even if we have neither an orthogonal projection (there is no orthogonality concept, since there is no inner product), nor a distance. But still, given a convex set \mathcal{C}, one can define \Pi_{\mathcal{C}}(y)=\underset{y'\in\mathcal{C}}{\text{argmin}}\ B_\varphi(y',y).

One can use a nice algorithm to compute that quantity, if the convex set can be expressed simply.
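For instance (a minimal numerical sketch, not from the course slides), with the negative-entropy generator \varphi(y)=\sum_i y_i\log y_i, the Bregman projection onto the probability simplex is simply a renormalization, which can be checked against a generic constrained solver,

import numpy as np
from scipy.optimize import minimize

def bregman_kl(z, y):
    # Bregman divergence generated by phi(z) = sum z_i log z_i (generalized KL)
    return np.sum(z * np.log(z / y) - z + y)

y = np.array([0.8, 1.5, 0.3])          # a point outside the probability simplex

# Bregman projection of y onto {z >= 0, sum z = 1}: closed form is y / sum(y)
proj = y / y.sum()

cons = {"type": "eq", "fun": lambda z: z.sum() - 1.0}
res = minimize(bregman_kl, x0=np.full(3, 1 / 3), args=(y,),
               bounds=[(1e-12, None)] * 3, constraints=cons)

print(proj, res.x)                      # the two should match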

When considering “distances” between distributions, instead of between y's, among other interesting properties in statistics, we can mention the property of unbiased (stochastic) gradients,

and Müller (1997) defined integral probability metrics, d_{\mathcal{F}}(P,Q)=\sup_{f\in\mathcal{F}}\left\vert\mathbb{E}_P[f(X)]-\mathbb{E}_Q[f(Y)]\right\vert, for some class of functions \mathcal{F}.

Standard “distances” between distributions

The first one will be the Hellinger distance, H(P,Q)^2=\frac{1}{2}\int\left(\sqrt{p(x)}-\sqrt{q(x)}\right)^2dx=1-\int\sqrt{p(x)q(x)}\,dx,

that can lead to simple expressions for standard parametric distributions, such as Beta distributions, H(P_1,P_2)^2=1-\frac{B\left(\frac{\alpha_1+\alpha_2}{2},\frac{\beta_1+\beta_2}{2}\right)}{\sqrt{B(\alpha_1,\beta_1)B(\alpha_2,\beta_2)}},

or (multivariate) Gaussian ones, H(P_1,P_2)^2=1-\frac{\det(\Sigma_1)^{1/4}\det(\Sigma_2)^{1/4}}{\det(\overline{\Sigma})^{1/2}}\exp\left(-\frac{1}{8}(\mu_1-\mu_2)^\top\overline{\Sigma}^{-1}(\mu_1-\mu_2)\right), where \overline{\Sigma}=(\Sigma_1+\Sigma_2)/2.
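Just to check that closed-form expression, here is a small Monte Carlo sketch (in Python, with arbitrary parameters), using the fact that \int\sqrt{p(x)q(x)}dx=\mathbb{E}_P[\sqrt{q(X)/p(X)}],

import numpy as np
from scipy.stats import multivariate_normal

def hellinger2_gaussian(m1, S1, m2, S2):
    # closed-form squared Hellinger distance between two Gaussians
    S = (S1 + S2) / 2
    bc = (np.linalg.det(S1) ** 0.25 * np.linalg.det(S2) ** 0.25
          / np.linalg.det(S) ** 0.5
          * np.exp(-0.125 * (m1 - m2) @ np.linalg.solve(S, m1 - m2)))
    return 1 - bc

m1, S1 = np.zeros(2), np.eye(2)
m2, S2 = np.array([1.0, 0.5]), np.array([[1.0, 0.3], [0.3, 2.0]])

rng = np.random.default_rng(1)
x = rng.multivariate_normal(m1, S1, size=200_000)   # sample from P
p = multivariate_normal(m1, S1).pdf(x)
q = multivariate_normal(m2, S2).pdf(x)

print(hellinger2_gaussian(m1, S1, m2, S2), 1 - np.mean(np.sqrt(q / p)))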

We can also mention the Pearson divergence, \chi^2(P\|Q)=\int\frac{(p(x)-q(x))^2}{q(x)}\,dx.

More interesting (and popular in probability theory), the total variation distance, d_{TV}(P,Q)=\sup_{\mathcal{A}}\vert P(\mathcal{A})-Q(\mathcal{A})\vert.

There are several ways to express that distance, e.g. d_{TV}(P,Q)=\frac{1}{2}\int\vert p(x)-q(x)\vert\,dx.

If instead of general sets \mathcal{A} we consider half-lines (-\infty,t], we obtain the Kolmogorov distance (or Kolmogorov–Smirnov distance), d_K(P,Q)=\sup_t\vert F_P(t)-F_Q(t)\vert.

Another important one in statistics is the Kullback–Leibler divergence, KL(P\|Q)=\int p(x)\log\frac{p(x)}{q(x)}\,dx.

For instance, with (d-dimensional) Gaussian vectors, KL(P_1\|P_2)=\frac{1}{2}\left(\text{tr}(\Sigma_2^{-1}\Sigma_1)+(\mu_2-\mu_1)^\top\Sigma_2^{-1}(\mu_2-\mu_1)-d+\log\frac{\det\Sigma_2}{\det\Sigma_1}\right).

Observe that this measure is actually a dissimilarity measure: it is non-negative, and null if and only if the two distributions coincide, but it is not symmetric.
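A quick numerical illustration of that lack of symmetry, using the closed-form expression above (with arbitrary parameters),

import numpy as np

def kl_gaussian(m1, S1, m2, S2):
    # closed-form KL(N(m1,S1) || N(m2,S2)), in dimension d
    d = len(m1)
    S2inv = np.linalg.inv(S2)
    return 0.5 * (np.trace(S2inv @ S1) + (m2 - m1) @ S2inv @ (m2 - m1)
                  - d + np.log(np.linalg.det(S2) / np.linalg.det(S1)))

m1, S1 = np.zeros(2), np.eye(2)
m2, S2 = np.array([1.0, 0.0]), np.diag([1.0, 4.0])

print(kl_gaussian(m1, S1, m1, S1))    # null when the two distributions coincide
print(kl_gaussian(m1, S1, m2, S2),    # ... but clearly not symmetric
      kl_gaussian(m2, S2, m1, S1))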

If we want a symmetric version, we can consider the Jeffreys divergence, J(P,Q)=KL(P\|Q)+KL(Q\|P),

or the Jensen–Shannon divergence, JS(P,Q)=\frac{1}{2}KL(P\|M)+\frac{1}{2}KL(Q\|M), where M=(P+Q)/2.

Finally, we will mention f-divergences, D_f(P\|Q)=\int f\left(\frac{p(x)}{q(x)}\right)q(x)\,dx, for some convex function f with f(1)=0,

and the Rényi divergence, D_\alpha(P\|Q)=\frac{1}{\alpha-1}\log\int p(x)^\alpha q(x)^{1-\alpha}\,dx.
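To illustrate, a small sketch (in Python, on two arbitrary discrete distributions): most of the “distances” above are f-divergences, for an appropriate convex f, and the Rényi divergence tends to the Kullback–Leibler divergence as \alpha\rightarrow1,

import numpy as np

p = np.array([0.10, 0.20, 0.30, 0.25, 0.15])   # two arbitrary distributions
q = np.array([0.30, 0.30, 0.20, 0.10, 0.10])

def f_divergence(p, q, f):
    # D_f(P||Q) = sum_x q(x) f(p(x)/q(x))
    return np.sum(q * f(p / q))

kl        = f_divergence(p, q, lambda t: t * np.log(t))                # Kullback-Leibler
pearson   = f_divergence(p, q, lambda t: (t - 1) ** 2)                 # Pearson
tv        = f_divergence(p, q, lambda t: 0.5 * np.abs(t - 1))          # total variation
hellinger = f_divergence(p, q, lambda t: 0.5 * (np.sqrt(t) - 1) ** 2)  # squared Hellinger

def renyi(p, q, alpha):
    return np.log(np.sum(p ** alpha * q ** (1 - alpha))) / (alpha - 1)

print(kl, renyi(p, q, 0.999))   # the Rényi divergence tends to KL as alpha -> 1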

We will discuss those “distances” a little bit more (yes, I usually use that term, abusively), and next week we will present the most interesting one, the Wasserstein distance.

Independence and correlation

A short post to get back on a property I mentioned briefly in the MAT8595 class in January, and again in the MAT8181 class this week (to illustrate the distinction between weak and strong white noises). Recall that (real-valued) random variables X and Y are independent if, for all x,y, \mathbb{P}(X\leq x,Y\leq y)=\mathbb{P}(X\leq x)\,\mathbb{P}(Y\leq y). Another characterization, for integrable variables, is that \mathbb{E}[f(X)g(Y)]=\mathbb{E}[f(X)]\,\mathbb{E}[g(Y)] for all (bounded, measurable) functions f and g, which can be written, if the variables are square integrable, \text{corr}(f(X),g(Y))=0.

The idea to prove this characterization is to observe that if X and Y are independent, the factorization can be written \mathbb{E}[\boldsymbol{1}_A(X)\boldsymbol{1}_B(Y)]=\mathbb{E}[\boldsymbol{1}_A(X)]\,\mathbb{E}[\boldsymbol{1}_B(Y)] for indicator functions. Using a standard argument in integration theory, the equality is then valid for step functions (not only indicators), then for positive measurable functions, and finally for integrable functions. Proving this result is not that difficult.

Observe that Rényi (1959) – inspired by Gebelein (1947) – followed by Sarmanov (1958) introduced the concept of maximal correlation, that can be related to this result, \rho^\star(X,Y)=\sup_{f,g}\ \text{corr}(f(X),g(Y)), where the maximum is taken over all functions f and g such that the correlation exists. Actually, it is possible to consider only transformations such that \mathbb{E}[f(X)]=0 and \text{Var}(f(X))=1 (and similarly for g): the idea is that we simply center and scale, which does not impact the correlation. Thus, X and Y are independent if and only if \rho^\star(X,Y)=0.

Algorithms to estimate that coefficient are interesting. The problem can be written, equivalently, as the minimization of \mathbb{E}[(f(X)-g(Y))^2] under those normalization constraints. And if the minimization is considered over f, assuming that g is fixed, then the optimal transformation is f(x)\propto\mathbb{E}[g(Y)|X=x], and similarly for g. So, using an iterative algorithm, it is possible to get f and g (see Breiman and Friedman (1985) for more details). Actually, those functions appear in nonlinear canonical analysis.

As mentioned in Lancaster (1957), for a Gaussian random vector (X,Y) with correlation r, \rho^\star(X,Y)=|r|, and in that case f and g are affine functions. This can be related to Hermite’s polynomials and to the expansion of the bivariate Gaussian density. I still hope that someone will go further for the project in the MAT8181 course.
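As an illustration, here is a rough sketch of that alternating algorithm (in Python, with a crude binning estimator of the conditional expectations; Breiman and Friedman (1985) use proper smoothers), on simulated Gaussian data, where the maximal correlation should be |r|,

import numpy as np

def max_corr_ace(x, y, n_iter=50, n_bins=30):
    # alternating scheme: g <- E[f(X)|Y], f <- E[g(Y)|X], re-standardized at
    # each step; corr(f(X), g(Y)) then estimates the maximal correlation
    def cond_mean(u, v, edges):
        # E[v | u], crudely estimated by averaging v within bins of u
        idx = np.digitize(u, edges)
        means = np.array([v[idx == k].mean() if np.any(idx == k) else 0.0
                          for k in range(len(edges) + 1)])
        return means[idx]

    ex = np.quantile(x, np.linspace(0, 1, n_bins + 1)[1:-1])
    ey = np.quantile(y, np.linspace(0, 1, n_bins + 1)[1:-1])
    f = (x - x.mean()) / x.std()        # arbitrary starting transformation
    for _ in range(n_iter):
        g = cond_mean(y, f, ey)         # g(y) ~ E[f(X) | Y = y]
        g = (g - g.mean()) / g.std()
        f = cond_mean(x, g, ex)         # f(x) ~ E[g(Y) | X = x]
        f = (f - f.mean()) / f.std()
    return np.corrcoef(f, g)[0, 1]

rng = np.random.default_rng(2)
r = -0.6
xy = rng.multivariate_normal([0, 0], [[1, r], [r, 1]], size=20_000)
print(max_corr_ace(xy[:, 0], xy[:, 1]))   # should be close to |r| = 0.6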

Yet another record year…

As the blogosphere has noted over the past few hours, it seems that records have (again) been broken in terms of natural catastrophes: “catastrophes naturelles : une année 2010 exceptionnelle” or “2010, année record des catastrophes naturelles”. Now, we can also go back a little in time: “catastrophes naturelles: 2008 bat des records” or “2008 a battu des records de catastrophes naturelles”, “en 2007, les catastrophes naturelles ont atteint un nombre record”, “catastrophes naturelles : 2005 bat des records en matière d’intensité et de coûts” or “les catastrophes ont entraîné une facture record en 2005”, “déjà record, le bilan 2004 des catastrophes naturelles s’alourdit”, “année 2000 : record de catastrophes naturelles” and “catastrophes naturelles : encore une année record” (in 2000), etc.

Well, as a statistician, the few results I know about records are theorems in \log (i.e. with slowly varying quantities). If X_1,X_2,\ldots is a sequence of i.i.d. random variables, define the durations between records, \Delta_n=T_n-T_{n-1}, where the record times are T_{n+1}=\min\{k>T_n:X_k>X_{T_n}\}, with T_1=1. Alfred Rényi showed that

\frac{\log T_n}{n}\rightarrow 1\text{ almost surely},

while Marcel Neuts showed that

\frac{\log \Delta_n}{n}\rightarrow 1.

In particular, note that \Delta_n grows roughly like e^n: the waiting time between two consecutive records increases exponentially fast.

In other words, as time goes by, records should become more and more rare.
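We can check that numerically, with a quick sketch (in Python, simulating uniform variables, but the distribution of the X_i's does not matter),

import numpy as np

rng = np.random.default_rng(3)
x = rng.random(2_000_000)           # i.i.d. observations (any continuous law works)

# a record occurs whenever the running maximum increases
is_record = x == np.maximum.accumulate(x)
T = np.flatnonzero(is_record) + 1   # record times, T_1 = 1 < T_2 < ...
n = np.arange(1, len(T) + 1)

print(np.column_stack([n, T, np.log(T) / n])[-5:])
# log(T_n)/n slowly gets close to 1: record times grow roughly like e^n,
# so new records become exponentially rare as time goes by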
For example, let us look at the following simulation. Consider a homogeneous Poisson process, with 0<T_1<T_2<\cdots the dates of the catastrophes, and X_1,X_2,\ldots their costs.

We denote by

N_t=\#\{i:T_i\in(t-1,t]\}

the annual number of catastrophes (possibly, we only consider the catastrophes whose cost exceeds a predefined threshold, as seems to be generally the case).

We can then define the record process,

R_t=\sum_{s\leq t}\boldsymbol{1}(N_s>\max\{N_1,\ldots,N_{s-1}\}),

counting the years for which the annual number of catastrophes breaks the previous record; a sample path is a step function, increasing by one each time a record is broken.

Well, now, if we get records every year (or almost), it is probably because the i.i.d. assumption was not valid.
Suppose for instance that there is some inflation, i.e. the costs are, say, X_i(1+\eta)^{T_i} for some inflation rate \eta>0.

In this case, both the process of annual event counts (above the threshold) and the associated record process drift upwards. If we repeat the experiment a large number of times, the average number of events per year keeps growing, and so does the average record process, compared with the i.i.d. case.

In other words, if inflation is too high, it is possible to regularly get new records.
Another possibility is to assume that there is no inflation, but that the number of claims keeps growing (here, a Poisson process with linearly increasing intensity), that is, a higher frequency of extreme climate events.

The annual number of catastrophes then keeps increasing, and so does the record process. Here again, if we repeat the experiment a large number of times, the average number of catastrophes keeps growing, while the average record process also keeps increasing.

In short, it is indeed possible to get records almost every year. This probably means that claims are getting more and more expensive (an increase of the value at risk), but probably also that catastrophes are getting more and more frequent. In the end, we recover what I was saying a few months ago.
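For those who want to play with this kind of simulation, here is a rough sketch (in Python, with arbitrary lognormal costs, a hypothetical threshold and a hypothetical 5% inflation rate; this is not the code used to generate the figures of this post) of the record-year process, with and without inflation,

import numpy as np

rng = np.random.default_rng(4)

def mean_record_process(n_years=100, lam=50.0, infl=0.0, threshold=5.0, n_rep=200):
    # average record process E[R_t]: R_t counts the years, among the first t,
    # for which the annual number of catastrophes with (possibly inflated)
    # cost above the threshold breaks the previous record
    R = np.zeros(n_years)
    for _ in range(n_rep):
        N = np.empty(n_years)
        for t in range(n_years):
            costs = rng.lognormal(size=rng.poisson(lam)) * (1 + infl) ** t
            N[t] = np.sum(costs > threshold)   # annual number of "large" events
        past_max = np.concatenate(([-np.inf], np.maximum.accumulate(N)[:-1]))
        R += np.cumsum(N > past_max)
    return R / n_rep

print(mean_record_process(infl=0.0)[-1])    # i.i.d. case: a handful of record years
print(mean_record_process(infl=0.05)[-1])   # with 5% inflation, records keep coming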