All posts by Arthur Charpentier

Arthur Charpentier, professor of Actuarial Science in Montréal. Former assistant professor at ENSAE ParisTech, associate professor at École Polytechnique, and assistant professor in Economics at Université de Rennes 1. Graduate of ENSAE, Master in Mathematical Economics (Paris Dauphine), PhD in Mathematics (KU Leuven), and Fellow of the French Institute of Actuaries.

Lasso Regression (home made)

Again, this post is related to my MAT7381 course, where we will see that it is actually possible to write our own code to compute the Lasso regression, \min\left\lbrace\frac{1}{2}\|\mathbf{y}-\mathbf{X}\mathbf{\beta}\|_{\ell_2}^2+\lambda\|\mathbf{\beta}\|_{\ell_1}\right\rbrace We have to define the soft-thresholding function S(z,\gamma)=\text{sign}(z)\cdot(|z|-\gamma)_+=\begin{cases}z-\gamma&\text{ if }\gamma<|z|\text{ and }z>0\\z+\gamma&\text{ if }\gamma<|z|\text{ and }z<0 \\0&\text{ if }\gamma\geq|z|\end{cases} The R function would be

soft_thresholding = function(x,a){
sign(x) * pmax(abs(x)-a,0)
}

To solve our optimization problem, set \mathbf{r}_j=\mathbf{y} - \left(\beta_0\mathbf{1}+\sum_{k\neq j}\beta_k\mathbf{x}_k\right)=\mathbf{y}-\widehat{\mathbf{y}}^{(j)}
so that the optimization problem can be written, equivalently,
\min\left\lbrace\frac{1}{2n}\sum_{i=1}^n [r_{i,j}-\beta_j x_{i,j}]^2+\lambda |\beta_j|\right\rbrace
hence \min\left\lbrace\frac{1}{2n}\left[\beta_j^2\|\mathbf{x}_j\|^2-2\beta_j\mathbf{r}_j^T\mathbf{x}_j\right]+\lambda |\beta_j|\right\rbrace
and one gets
\beta_{j,\lambda} = \frac{1}{\|\mathbf{x}_j\|^2}S(\mathbf{r}_j^T\mathbf{x}_j,n\lambda)
or, if we develop
\beta_{j,\lambda} = \frac{1}{\sum_i x_{ij}^2}S\left(\sum_ix_{i,j}[y_i-\widehat{y}_i^{(j)}],n\lambda\right)
Again, if there are weights \mathbf{\omega}=(\omega_i), the coordinate-wise update becomes
\beta_{j,\lambda,{\color{red}{\omega}}} = \frac{1}{\sum_i {\color{red}{\omega_i}}x_{ij}^2}S\left(\sum_i{\color{red}{\omega_i}}x_{i,j}[y_i-\widehat{y}_i^{(j)}],n\lambda\right)
The code to compute this componentwise descent is

lasso_coord_desc = function(X,y,beta,lambda,tol=1e-6,maxiter=1000){
  beta = as.matrix(beta)
  X = as.matrix(X)
  omega = rep(1/length(y),length(y))
  obj = numeric(length=(maxiter+1))
  betalist = vector("list", maxiter+1)
  betalist[[1]] = beta
  beta0list = numeric(maxiter+1)
  beta0 = sum(y-X%*%beta)/(length(y))
  beta0list[1] = beta0
  for (j in 1:maxiter){
    for (k in 1:length(beta)){
      r = y - X[,-k]%*%beta[-k] - beta0*rep(1,length(y))
      beta[k] = (1/sum(omega*X[,k]^2))*
        soft_thresholding(t(omega*r)%*%X[,k],length(y)*lambda)
    }
    beta0 = sum(y-X%*%beta)/(length(y))
    beta0list[j+1] = beta0
    betalist[[j+1]] = beta
    obj[j] = (1/2)*(1/length(y))*norm(omega*(y - X%*%beta - 
           beta0*rep(1,length(y))),'F')^2 + lambda*sum(abs(beta))
    if (norm(rbind(beta0list[j],betalist[[j]]) - 
             rbind(beta0,beta),'F') < tol) { break } 
  } 
  return(list(obj=obj[1:j],beta=beta,intercept=beta0)) }

For instance, consider the following (simple) dataset, with three covariates

chicago = read.table("http://freakonometrics.free.fr/chicago.txt",header=TRUE,sep=";")

that we can “normalize” (or “standardize”)

X = model.matrix(lm(Fire~.,data=chicago))[,2:4]
for(j in 1:3) X[,j] = (X[,j]-mean(X[,j]))/sd(X[,j])
y = chicago$Fire
y = (y-mean(y))/sd(y)

To initialize the algorithm, use the OLS estimate

beta_init = lm(Fire~0+.,data=chicago)$coef

For instance

lasso_coord_desc(X,y,beta_init,lambda=.001)
$obj
[1] 0.001014426 0.001008009 0.001009558 0.001011094 0.001011119 0.001011119
 
$beta
          [,1]
X_1  0.0000000
X_2  0.3836087
X_3 -0.5026137
 
$intercept
[1] 2.060999e-16

and we can get the standard Lasso plot by looping over values of \lambda, as sketched below.
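
For instance, a minimal sketch of such a loop (the grid of \lambda values and the graphical choices here are mine, not those of the original figure) could be

lambda_grid = exp(seq(log(.001), log(2), length=50))
path = sapply(lambda_grid, function(l)
  lasso_coord_desc(X, y, beta_init, lambda=l)$beta)
plot(lambda_grid, path[1,], type="l", log="x", ylim=range(path),
     xlab=expression(lambda), ylab="coefficients")
for(k in 2:nrow(path)) lines(lambda_grid, path[k,], col=k)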

Quantile Regression (home made, part 2)

A few months ago, I posted a note with some home made codes for quantile regression… there was something odd on the output, but it was because there was a (small) mathematical problem in my equation. So since I should teach those tomorrow, let me fix them.

Median

Consider a sample \{y_1,\cdots,y_n\}. To compute the median, solve\min_\mu \left\lbrace\sum_{i=1}^n|y_i-\mu|\right\rbracewhich can be solved using linear programming techniques. More precisely, this problem is equivalent to\min_{\mu,\mathbf{a},\mathbf{b}}\left\lbrace\sum_{i=1}^na_i+b_i\right\rbracewith a_i,b_i\geq 0 and y_i-\mu=a_i-b_i, \forall i=1,\cdots,n. Heuristically, the idea is to write y_i=\mu+\varepsilon_i, and then define the a_i‘s and b_i‘s so that \varepsilon_i=a_i-b_i and |\varepsilon_i|=a_i+b_i, i.e. a_i=(\varepsilon_i)_+=\max\lbrace0,\varepsilon_i\rbrace=|\varepsilon_i|\cdot\boldsymbol{1}_{\varepsilon_i>0} and b_i=(-\varepsilon_i)_+=\max\lbrace0,-\varepsilon_i\rbrace=|\varepsilon_i|\cdot\boldsymbol{1}_{\varepsilon_i<0} denote respectively the positive and the negative parts.

Unfortunately (that was the error in my previous post), the standard form of a linear program is\min_{\mathbf{z}}\left\lbrace\boldsymbol{c}^\top\mathbf{z}\right\rbrace\text{ s.t. }\boldsymbol{A}\mathbf{z}=\boldsymbol{b},\mathbf{z}\geq\boldsymbol{0}. With the a_i‘s and b_i‘s above, we are not far off. Except that we have \mu\in\mathbb{R}, while it should be non-negative. So, similarly, set \mu=\mu^+-\mu^-, where \mu^+=(\mu)_+ and \mu^-=(-\mu)_+.

Thus, let \mathbf{z}=\big(\mu^+,\mu^-,\boldsymbol{a},\boldsymbol{b}\big)^\top\in\mathbb{R}_+^{2n+2} and then write the constraint as \boldsymbol{A}\mathbf{z}=\boldsymbol{b} with \boldsymbol{b}=\boldsymbol{y} and \boldsymbol{A}=\big[\boldsymbol{1}_n;-\boldsymbol{1}_n;\mathbb{I}_n;-\mathbb{I}_n\big]. And for the objective function, \boldsymbol{c}=\big(0,0,\boldsymbol{1}_n,\boldsymbol{1}_n\big)^\top\in\mathbb{R}_+^{2n+2}

To illustrate, consider a sample from a lognormal distribution,

n = 101 
set.seed(1)
y = rlnorm(n)
median(y)
[1] 1.077415

For the optimization problem, use the matrix form, with n equality constraints and 2n+2 parameters,

library(lpSolve) 
X = rep(1,n) 
A = cbind(X, -X, diag(n), -diag(n))
b = y
c = c(rep(0,2), rep(1,n),rep(1,n))
equal_type = rep("=", n) 
r = lp("min", c,A,equal_type,b)
head(r$solution,1)
[1] 1.077415

It looks like it’s working well…

Quantile

Of course, we can adapt our previous code for quantiles

tau = .3
quantile(y,tau)
      30% 
0.6741586

The linear program is now\min_{q^+,q^-,\mathbf{a},\mathbf{b}}\left\lbrace\sum_{i=1}^n\tau a_i+(1-\tau)b_i\right\rbracewith a_i,b_i,q^+,q^-\geq 0 and y_i=q^+-q^-+a_i-b_i, \forall i=1,\cdots,n. The R code is now

c = c(rep(0,2), tau*rep(1,n),(1-tau)*rep(1,n))
r = lp("min", c,A,equal_type,b)
head(r$solution,1)
[1] 0.6741586

So far so good…

Quantile Regression

Consider the following dataset, with rents of flats in a major German city, as a function of the surface area, the year of construction, etc.

base=read.table("http://freakonometrics.free.fr/rent98_00.txt",header=TRUE)

The linear program for the quantile regression is now\min_{\boldsymbol{\beta}^+,\boldsymbol{\beta}^-,\mathbf{a},\mathbf{b}}\left\lbrace\sum_{i=1}^n\tau a_i+(1-\tau)b_i\right\rbrace with a_i,b_i\geq 0 and y_i=\boldsymbol{x}_i^\top[\boldsymbol{\beta}^+-\boldsymbol{\beta}^-]+a_i-b_i, \forall i=1,\cdots,n, and \beta_j^+,\beta_j^-\geq 0, \forall j=0,\cdots,k. So use here

require(lpSolve) 
tau = .3
n=nrow(base)
X = cbind( 1, base$area)
y = base$rent_euro
K = ncol(X)
N = nrow(X)
A = cbind(X,-X,diag(N),-diag(N))
c = c(rep(0,2*ncol(X)),tau*rep(1,N),(1-tau)*rep(1,N))
b = base$rent_euro
const_type = rep("=",N)
r = lp("min",c,A,const_type,b)
beta = r$sol[1:K] -  r$sol[(1:K+K)]
beta
[1] 148.946864   3.289674

Of course, we can use the rq function from the quantreg package to fit that model

library(quantreg)
rq(rent_euro~area, tau=tau, data=base)
Coefficients:
(Intercept)        area 
 148.946864    3.289674

Here again, it seems to work quite well. We can use a different probability level, of course, and get a plot

plot(base$area,base$rent_euro,xlab=expression(paste("surface (",m^2,")")),
     ylab="rent (euros/month)",col=rgb(0,0,1,.4),cex=.5)
sf=0:250
yr = beta[1]+beta[2]*sf
lines(sf,yr,lwd=2,col="blue")
tau = .9
c = c(rep(0,2*ncol(X)),tau*rep(1,N),(1-tau)*rep(1,N))
r = lp("min",c,A,const_type,b)
beta = r$sol[1:K] -  r$sol[(1:K+K)]
beta
[1] 121.815505   7.865536
yr = beta[1]+beta[2]*sf
lines(sf,yr,lwd=2,col="blue")

And we can adapt the latter to multiple regressions, of course,

X = cbind(1,base$area,base$yearc)
K = ncol(X)
N = nrow(X)
A = cbind(X,-X,diag(N),-diag(N))
c = c(rep(0,2*ncol(X)),tau*rep(1,N),(1-tau)*rep(1,N))
b = base$rent_euro
const_type = rep("=",N)
r = lp("min",c,A,const_type,b)
beta = r$sol[1:K] -  r$sol[(1:K+K)]
beta
[1] -5542.503252     3.978135     2.887234

to be compared with

library(quantreg)
rq(rent_euro~ area + yearc, tau=tau, data=base)
 
Coefficients:
 (Intercept)         area        yearc 
-5542.503252     3.978135     2.887234 
 
Degrees of freedom: 4571 total; 4568 residual

De la pratique de la régression

Since the beginning of the semester, I have introduced a small innovation: roughly every other week, I hand out a short exercise (mandatory but not graded) before the class, in order to force students to think ahead (and to provide some elements of an answer). For instance, for the first class, the task was to “predict” a missing value, and the goal was to show that, naturally, one picks the average value.

For tomorrow, I had set a slightly more involved exercise, given that in the last class we had seen how to run a linear regression, and had finished by discussing simple tests (related to significance) and multiple tests. For the exercise, I had put a small dataset online,

download.file("http://freakonometrics.free.fr/data3.RData","data3.RData")
load("data3.RData")
str(df)
'data.frame':	147 obs. of  10 variables:
 $ Y : num  11.72 15.91 14.19 11.15 8.31 ...
 $ X1: num  1.33 3.18 0.28 2.08 0.11 1.67 1.97 1.27 4.38 0.52 ...
 $ X2: num  3.66 3.75 3.32 2.68 4.97 2.98 4.56 1.78 2.83 6.36 ...
 $ X3: num  1.41 3.01 0.34 2.19 0.25 1.69 2.01 1.25 4.41 0.43 ...
 $ X4: num  -3.53 -4.46 -3.35 -7.54 -7.02 -2.53 -6.1 -5.99 -3.92 -5.84 ...
 $ X5: num  0.57 0.01 -0.7 1.62 -0.95 -1.37 1.18 -0.72 2.63 -1.63 ...
 $ X6: num  -0.82 1.2 3.03 -0.91 -1.6 1.77 1 -1.33 1.31 -0.7 ...
 $ X7: num  1.01 0.06 2.02 3.63 2.66 2.53 1.29 3.5 1.17 1.8 ...
 $ X8: num  8.31 8.52 9.78 7.34 7.26 ...
 $ X9: num  6.04 6.53 7.52 5.61 4.52 6.06 6.2 5.99 6.93 4.38 ...

I will list here the questions I asked, and give some food for thought, not about the expected answers (I expect nothing from this exercise other than some reflection), but about the discussion that each question can lead to,

  • Fit a linear model to explain Y as a function of the nine explanatory variables. How many explanatory variables would you keep?

Let us start by running a regression on all the variables

summary(lm(Y~., data=df))
 
Coefficients:
             Estimate Std. Error t value Pr(>|t|)    
(Intercept)  8.046133   2.193448   3.668 0.000349 ***
X1           0.342293   0.915894   0.374 0.709186    
X2          -0.040479   0.103409  -0.391 0.696073    
X3           1.683875   0.897278   1.877 0.062693 .  
X4          -0.009254   0.062382  -0.148 0.882295    
X5          -1.085367   0.113840  -9.534  < 2e-16 ***
X6           0.983207   0.111830   8.792 5.49e-15 ***
X7          -0.015646   0.087483  -0.179 0.858327    
X8           0.012165   0.094756   0.128 0.898033    
X9           0.172210   0.180605   0.954 0.342005    
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
 
Residual standard error: 1.019 on 137 degrees of freedom
Multiple R-squared:  0.9228,	Adjusted R-squared:  0.9178 
F-statistic:   182 on 9 and 137 DF,  p-value: < 2.2e-16

In the output, we have about ten Student t tests, corresponding to the test of the hypothesis H_0:\beta_j=0 (against the (two-sided) alternative H_1:\beta_j\neq 0), in a model of the form y_i=\beta_0+\beta_1x_{1,i}+\dots+\beta_9x_{9,i}+\varepsilon_i. This is what we call the significance test of the variable x_j (yes, as we saw in class, we can say the test, because the other classical tests – Fisher, or Wald – are equivalent – except that instead of looking at t they look at the statistic t^2 – which has the advantage of presenting the test statistic as a kind of distance to the hypothesis H_0: if it is too large, we reject…). With a significance threshold of about 5%, the output tells us that 6 variables are not significant. But let us keep in mind that the significance test of x_j is performed here assuming that all the other variables remain in the model. One more thing: with a threshold of about 10%, one of the variables (the third one) could be seen as significant.
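
As a side check (this snippet is mine, not part of the original exercise), the equivalence between the Student and Fisher tests can be verified by removing a single variable, say X9, and comparing the two nested models: the F statistic equals the squared t statistic.

reg_all = lm(Y~., data=df)
reg_no9 = lm(Y~.-X9, data=df)
anova(reg_no9, reg_all)                          # F test of H0: beta_9 = 0
summary(reg_all)$coefficients["X9","t value"]^2  # equals the F statistic above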

Let us run a multiple test, to see whether we can remove 6 of the 9 possible explanatory variables (let us do it by hand, no need to go looking for a package to do it)

reg1 = lm(formula = Y ~ ., data = df)
reg0 = lm(formula = Y ~ X3+X5+X6, data = df)
summary(reg0)
 
Coefficients:
            Estimate Std. Error t value Pr(>|t|)    
(Intercept)  9.11031    0.16788   54.27   <2e-16 ***
X3           1.96784    0.07604   25.88   <2e-16 ***
X5          -0.99296    0.08751  -11.35   <2e-16 ***
X6           1.08761    0.05934   18.33   <2e-16 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
 
Residual standard error: 1.011 on 143 degrees of freedom
Multiple R-squared:  0.9206,	Adjusted R-squared:  0.919 
F-statistic:   553 on 3 and 143 DF,  p-value: < 2.2e-16

anova(reg0,reg1)
Analysis of Variance Table

Model 1: Y ~ X3 + X5 + X6
Model 2: Y ~ X1 + X2 + X3 + X4 + X5 + X6 + X7 + X8 + X9
  Res.Df    RSS Df Sum of Sq      F Pr(>F)
1    143 146.16                           
2    137 142.12  6    4.0338 0.6481 0.6916
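
And since the suggestion was to do it by hand, here is a quick sketch recomputing the Fisher statistic directly from the two residual sums of squares (the values are those of the table above),

RSS0 = sum(residuals(reg0)^2)            # 146.16, constrained model
RSS1 = sum(residuals(reg1)^2)            # 142.12, full model
q    = 6                                 # number of constraints tested
F    = ((RSS0-RSS1)/q) / (RSS1/reg1$df.residual)
F                                        # ~ 0.648
1 - pf(F, q, reg1$df.residual)           # p-value ~ 0.69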

The Fisher test tells us that we can accept the hypothesis that the 6 coefficients are null – here H_0:\beta_1=\beta_2=\beta_4=\beta_7=\beta_8=\beta_9=0 (which is a multiple test, unlike the previous Student tests). But it also tells us that, individually, the three remaining variables seem significant. So I would tend to keep 3 explanatory variables (I am not counting the intercept: we always keep the intercept – which does not explain much, except the average value of y).

  • make a prediction for an individual for whom we know that X3=1, X5=1 and X6=8

(actually, the real question I asked contained a typo, which made it tricky because it is not the model we just built… but to begin with, let us look at this question)

We have just calibrated this model, so we simply need to make a prediction,

predict(reg0, newdata=data.frame(X3=1, X5=1, X6=8))
       1 
18.78603

But that was not the real question…

  • make a prediction for an individual for whom we know that X1=3, X5=1 and X6=8

I think that, to answer this question, we should forget everything we have just seen. We are given three pieces of information, and we will see whether we can use them. In other words, we start by looking at the regression on these three variables,

reg156 = lm(formula = Y ~ X1+X5+X6, data = df)
summary(reg156)
 
Coefficients:
            Estimate Std. Error t value Pr(>|t|)    
(Intercept)  9.06688    0.17222   52.65   <2e-16 ***
X1           1.98342    0.07800   25.43   <2e-16 ***
X5          -1.01438    0.08947  -11.34   <2e-16 ***
X6           1.07824    0.06039   17.86   <2e-16 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
 
Residual standard error: 1.026 on 143 degrees of freedom
Multiple R-squared:  0.9183,	Adjusted R-squared:  0.9166 
F-statistic: 535.8 on 3 and 143 DF,  p-value: < 2.2e-16

The model is good here, the three variables being significant. That said, I say it is “good”, but we will see on Friday how to discuss this point further… In any case, we can try to make a prediction, and we obtain

predict(reg156, newdata=data.frame(X1=3, X5=1, X6=8))
       1 
22.62867
  • make a prediction for an individual for whom we know that X1=3, X3=2, X5=1 and X6=8

As before, we are given 4 pieces of information… we look at the model with those 4 variables

reg1356 = lm(formula = Y ~ X1+X3+X5+X6, data = df)
summary(reg1356)
 
Coefficients:
            Estimate Std. Error t value Pr(>|t|)    
(Intercept)  9.10458    0.17131  53.148   <2e-16 ***
X1           0.16362    0.89028   0.184    0.854    
X3           1.80663    0.88052   2.052    0.042 *  
X5          -0.99558    0.08895 -11.192   <2e-16 ***
X6           1.08649    0.05986  18.152   <2e-16 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
 
Residual standard error: 1.014 on 142 degrees of freedom
Multiple R-squared:  0.9207,	Adjusted R-squared:  0.9184 
F-statistic: 411.9 on 4 and 142 DF,  p-value: < 2.2e-16

This time, one of the variables is not significant (the first one), so we should remove it: we then run the regression on just the three other variables

reg356 = lm(formula = Y ~ X3+X5+X6, data = df)
summary(reg356)
 
Coefficients:
            Estimate Std. Error t value Pr(>|t|)    
(Intercept)  9.11031    0.16788   54.27   <2e-16 ***
X3           1.96784    0.07604   25.88   <2e-16 ***
X5          -0.99296    0.08751  -11.35   <2e-16 ***
X6           1.08761    0.05934   18.33   <2e-16 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
 
Residual standard error: 1.011 on 143 degrees of freedom
Multiple R-squared:  0.9206,	Adjusted R-squared:  0.919 
F-statistic:   553 on 3 and 143 DF,  p-value: < 2.2e-16

This time, the model is “good” and we can then make the prediction

predict(reg356, newdata=data.frame(X1=3, X3=2, X5=1, X6=8))
       1 
20.75388

For information, if we had made the prediction without removing the non-significant variable, we would have obtained the following value

predict(reg1356, newdata=data.frame(X1=3, X3=2, X5=1, X6=8))
       1 
20.90501

which is overall fairly close. We will have the opportunity to come back to this in class, when we ask whether it is worse to keep a non-significant variable in the regression, or to omit an important one… But once again, the goal of these little exercises is to apply what we have seen in class, and to introduce questions that I will answer in the next class!

Quelle responsabilité pour les algorithmes ?

A few weeks ago, with Rodolphe Bigot, maître de conférences at the Université de Picardie Jules Verne, we had started a reflection on the theme “Repenser la responsabilité, et la causalité” (rethinking liability and causality), and here is the follow-up…

Historically, algorithms merely provided decision support, leaving to a human being the role of actually making the decision; but experiments are underway with autonomous systems that make decisions themselves, whether self-driving car systems or predictive justice algorithms, as shown in Huss et al. (2018). This autonomy, which fundamentally means the “ability to act freely”, also refers to the idea of “governing oneself by one’s own laws”. But what is the liability of the decision-maker in the case of a prediction that causes harm?

Continue reading Quelle responsabilité pour les algorithmes ?

Combiner les modalités d’une variable factorielle

A quick post to come back to a point we saw this morning in the STT5100 course, to illustrate the Fisher test. We will use the data on apartment prices in Poland (a dataset used quite a bit in my draft lecture notes)

library(DALEX)
data(apartments)
with(data = apartments, boxplot(m2.price ~ district))

We would like to merge some of the categories here (which is in fact suggested by the simple regression, 5 explanatory variables being non-significant here). To see things better, we can reorder the categories

A = with(data = apartments, aggregate(m2.price,by=list(district),FUN=mean))
A = A[order(A$x),]
L = as.character(A$Group.1)
apartments$district = factor(apartments$district, level=L)
with(data = apartments, boxplot(m2.price ~ district))

Here we will take the cheapest district as the reference,

reg=lm(m2.price ~ district, data=apartments)
summary(reg)
 
Coefficients:
                    Estimate Std. Error t value Pr(>|t|)    
(Intercept)          2968.36      58.02  51.160   <2e-16 ***
districtBielany        17.38      84.16   0.207    0.836    
districtPraga          26.45      85.12   0.311    0.756    
districtUrsynow        42.01      82.65   0.508    0.611    
districtBemowo         80.10      83.71   0.957    0.339    
districtUrsus         102.01      82.25   1.240    0.215    
districtZoliborz      829.59      83.94   9.884   <2e-16 ***
districtMokotow       887.10      81.86  10.837   <2e-16 ***
districtOchota        987.93      84.16  11.738   <2e-16 ***
districtSrodmiescie  2214.39      83.28  26.591   <2e-16 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
 
Residual standard error: 597.4 on 990 degrees of freedom
Multiple R-squared:  0.5698,	Adjusted R-squared:  0.5659 
F-statistic: 145.7 on 9 and 990 DF,  p-value: < 2.2e-16

We can test whether the coefficients of the first 5 categories are null, which is a multiple test, and we will use the Fisher test:

library(car)
linearHypothesis(reg, c("districtBielany = 0", 
                        "districtPraga = 0",
                        "districtUrsynow = 0",
                        "districtBemowo = 0",
                        "districtUrsus = 0"))
Linear hypothesis test
 
Model 1: restricted model
Model 2: m2.price ~ district
 
  Res.Df       RSS Df Sum of Sq      F Pr(>F)
1    995 354051715                           
2    990 353269202  5    782513 0.4386 0.8217

The Fisher statistic is small, with a p-value of 82%. So we can tempt fate, and add yet another category

library(car)
linearHypothesis(reg, c("districtBielany = 0", 
                        "districtPraga = 0",
                        "districtUrsynow = 0",
                        "districtBemowo = 0",
                        "districtUrsus = 0",
                        "districtZoliborz = 0"))
Linear hypothesis test
 
Model 1: restricted model
Model 2: m2.price ~ district
 
  Res.Df       RSS Df Sum of Sq      F    Pr(>F)    
1    996 405455409                                  
2    990 353269202  6  52186207 24.374 < 2.2e-16 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

But maybe we were too greedy this time. We will merge the first 6 categories (and call the resulting group of districts A). If we look at the average prices by district, we obtain

levels(apartments$district) = c(rep("A",6),levels(apartments$district)[7:11])
with(data = apartments, boxplot(m2.price ~ district))

apartments$district = relevel(apartments$district,"Zoliborz")

We start again, taking the cheapest district as the reference, and we want to test whether the next two have null coefficients in the linear regression.

reg=lm(m2.price ~ district, data=apartments)
linearHypothesis(reg, c("districtMokotow = 0",
                        "districtOchota = 0"))
Linear hypothesis test
 
Model 1: restricted model
Model 2: m2.price ~ district
 
  Res.Df       RSS Df Sum of Sq      F Pr(>F)
1    997 355292524                           
2    995 354051715  2   1240809 1.7435 0.1754

With a p-value of 17%, we can accept merging these three categories together. We then have three groups of districts, named A, B and C. We obtain the following boxplots

levels(apartments$district) = c("B","A",rep("B",2),"C")
apartments$district = relevel(apartments$district,"A")
with(data = apartments, boxplot(m2.price ~ district))

I leave it to the braver readers to check, but we do have three genuinely different districts, and if the goal is to predict housing prices, there is no need to use a partition into 10 categories: a partition into 3 is enough!
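
For the braver readers, a possible check (a sketch of mine, not in the original post): refit the regression on the grouped factor and verify that the three groups are indeed significantly different,

reg3 = lm(m2.price ~ district, data=apartments)    # district now has levels A, B, C
summary(reg3)                                       # both non-reference levels highly significant
TukeyHSD(aov(m2.price ~ district, data=apartments)) # pairwise comparisons of the three groups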

Le R² pour justifier la causalité…

This evening, Louis (@LouisdeCharson) pointed me to a rather wild article in which, in an interview, a gentleman often presented as a researcher (I could not find where) made a rather astonishing statement,

“… an old variable explains 85% of the variation of a new variable. We can therefore speak of causality”

Just for fun, I took him at his word and dug out an old dataset that I had used in a previous post, with bicycle traffic in Helsinki as a function of temperature. To stay within the class of linear models, I removed a few winter days, and I tried to regress the temperature on day j+1 (the “new” variable, as stated in the interview) on the number of cyclists the day before, that is on day j

If we look at the regression, we obtain

lm(formula = temp ~ cyclists, data = df0)
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) -4.170e+00  4.052e-01  -10.29   <2e-16 ***
cyclists     1.066e-03  3.558e-05   29.96   <2e-16 ***
 
Residual standard error: 2.833 on 212 degrees of freedom
Multiple R-squared: 0.809, Adjusted R-squared: 0.8081

in other words, 81\% of the variation of the (average) temperature on day j+1 is explained by the number of cyclists on day j; that is the definition of R^2 (“the coefficient of determination, denoted R², is the proportion of the variance in the dependent variable that is predictable from the independent variable“). And the gentleman deduces that there is a causal relationship; in other words, in our example, the number of cyclists on the road on day j causes the temperature on day j+1. Far be it from me to pose as an expert on causal models and climate models, but I doubt that is the case… otherwise the solution to stop global warming is obvious: just reduce the number of cyclists!
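
Just to make that definition concrete, here is a small sketch recovering the R² by hand (assuming the fitted model above is stored in an object called reg),

# reg = lm(temp ~ cyclists, data=df0), as above
y    = df0$temp
yhat = fitted(reg)
1 - sum((y-yhat)^2)/sum((y-mean(y))^2)   # should match the Multiple R-squared, ~0.809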

 

More on Random dollars for everyone !

Following my post of yesterday evening, Alex (@AlexSablay) suggested that I look at the Boltzmann-Gibbs distribution (e.g. in Yakovenko & Rosser (2009)). There are indeed interesting ideas there, and it looks like it is more or less what we tried to do in the previous post

Again, I found that article hard to read, but at some point, it looks like they mention that the limiting distribution could be a discrete version that tends to the exponential distribution when the size of the population tends to infinity. Here we have 2,000 people, so it should be possible to see it…

If we go for 100,000 rounds, the range of wealth is

so it is still hard to say anything about the upper bound… For the distribution of the wealth, at the end we obtain the following histogram

and the empirical cumulative distribution function is

Here the red line is the exponential distribution…

So, indeed, it seems that there is a limiting distribution, and it is the exponential one… And the good thing with such limiting distributions is that they are some sort of fixed point: if we start from that distribution, we should not move (too much) away from it. For instance, if we start with an exponential distribution

x = rexp(n,1/init)
x = x*init/mean(round(x))
x = round(x)

the range of the wealth remains very stable

as well as the density (again, it is a symmetric-kernel-based estimate, with a boundary bias at 0, and some negative values)

If we plot the Lorenz curve, we can see that inequality does not change much here

In that case, it is well known that the Lorenz curve is u\mapsto u+(1-u)\log(1-u) and the Gini coefficient is exactly 1/2.
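
As a quick numerical sanity check (a sketch of mine), we can compare the empirical Lorenz curve of an exponential sample with u+(1-u)\log(1-u), and compute the Gini coefficient,

x = sort(rexp(1e5))
u = (1:length(x))/length(x)
L = cumsum(x)/sum(x)                       # empirical Lorenz curve
plot(u, L, type="l")
lines(u, u+(1-u)*log(1-u), col="red", lty=2)
1 - 2*mean(L)                              # Gini coefficient, close to 1/2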

Random dollars for everyone !

https://f.hypotheses.org/wp-content/blogs.dir/253/files/2020/01/counting_money.gif

Over the weekend, Philippe Rivière introduced me to an interesting problem,

Everyone in a room keeps giving dollars to random others.
You’ll never guess what happens next.

It came from a post published a few years ago on decisionsciencenews.com… The problem was mentioned in a recent post since it is related to an article published in the American Scientist in November 2019, Is Inequality Inevitable? (which was translated into French last week, for Pour la Science, in a section wrongly entitled Economics, since it is only a physicist’s vision of an (old) economic problem) – see also Brewster Kahle’s post.

(for those really interested in the mathematics of inequality, with a (mathematical) economics perspective, there are countless interesting articles… see at least Tony Atkinson‘s book, or several articles published in Econometrica – references are given in the slides of the course I gave a few years ago on that topic).

I wanted to try on my own, because I did not understand most of the posts, and because my first thought was that the problem is ill-posed. First of all, what is this “giving dollars”? Is it a fixed amount or a random one? Let us start by assuming that it is fixed. Now, if you know a little bit about gambling and ruin, you can guess that it is very likely that someone will go bankrupt (at least over a very, very long horizon)… what should we do with that person? Actually, those points were clarified in Jordan’s post

“Imagine a room full of 100 people with 100 dollars each. With every tick of the clock, every person with money gives a dollar to one randomly chosen other person. After some time progresses, how will the money be distributed?”

A well-posed problem states that only people with money can give (everyone can receive) and the amount of money given is fixed.

  • A first model (with possible bankruptcy)

First of all, assume that everyone has a fixed amount of money, say 100 (as discussed above), and that each one must give 1 to someone, picked randomly, or more precisely

“every person gives a dollar to one randomly chosen other person”

So, for person i, “one randomly chosen other person” means sampling in \{1,2,\cdots,n\}\backslash\{i\}

n = 2000
ns = 20000
init = 100
x = rep(init,n)
VX = x
VR = c(x[1],x[1])
for(s in 1:ns){
r = function(i) sample((1:n)[-i],size=1)
other = Vectorize(r)(1:n)
dx = table(other)
dx = as.numeric(dx[as.character(1:n)])
dx[is.na(dx)]=0
x = x -rep(1,n)+dx
VR=cbind(VR,range(x))
if(s %% 200 ==0) VX=cbind(VX,x)
}

Here, I store the range of the wealth of my 2,000 people, and every 200 rounds, I also keep track of the wealths. The plot of the evolution of the range is the following,

As expected, some people get ruined… and so far, I did nothing about it, they keep playing… An easy solution would have been to give them an initial endowment of 1,000, and not 100. But that is only a temporary solution: over 20,000 rounds, there might be no bankruptcy, but over 200,000 there will be! Before moving to the reflected problem (where only people with money give a dollar), just look at the evolution of the distribution of wealths,

or the evolution of the cumulative distribution

We clearly have more and more variability as we play. Here, I cannot compute inequality indices (the Lorenz curve, for instance, is defined only for positive wealths).

I did not look at analytical results here. The only thing I know for sure is that (if there are enough people sharing money) about one third of them (actually e^{-1}\approx 36.8\%) will give one dollar and receive nothing… that is the law of small numbers (that result was mentioned in Jordan’s post).
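
A one-round simulation (a small sketch) confirms that the fraction of people receiving nothing is indeed close to e^{-1},

n = 2000
receivers = sapply(1:n, function(i) sample((1:n)[-i], size=1))
mean(!(1:n) %in% receivers)   # fraction receiving nothing in one round
exp(-1)                       # ~ 0.3679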

  • The reflected problem (with no bankruptcy)

Consider now the reflected problem

“Imagine a room full of people with the same amount of money. With every tick of the clock, every person with money gives a dollar to one randomly chosen other person. After some time progresses, how will the money be distributed?”

(I call it reflected because once someone hits the zero barrier, they can only go up: that person gives nothing, and can possibly receive)

for(s in 1:ns){
r = function(i) sample((1:n)[-i],size=1)
other = Vectorize(r)(which(x>0))
dx = table(other)
dx = as.numeric(dx[as.character(1:n)])
dx[is.na(dx)] = 0
x = x -(x>0)*1+dx
VR = cbind(VR,range(x))
if(s %% 200 ==0) VX = cbind(VX,x)
}

Here the range is the following

We are bounded from below (it is not possible to have less than 0) and it seems that extremely rich people are less rich than before. We can now look at the cumulative distribution function (since there is no density here, because of the mass at 0)

(to get a smooth function, I used a symmetric kernel estimate here, so numerically there are values below 0). Since wealths are positive, we can look at Lorenz curves

It seems that there is more and more inequality as we play that reallocation game. But here again, I would have to run more simulations (and actually a lot more*) to see if there is a non-degenerate limit with such a game. Here, the distribution of wealth after n rounds is a homogeneous Markov chain, taking values in \mathbb{N}_+, and using combinatorics, it should be possible to get the transition matrix…

* I did try (during the night), following the advice of Alex (@AlexSablay), and indeed, there is a limiting distribution, see here

  • When the contribution is a fixed part (e.g. 1%) of the wealth

An important issue previously was additivity: “every person with money gives a dollar“. Inequality measures do not like additive operations, they like multiplicative operations (see Serge-Christophe Kolm’s discussion, for instance), or in other words, changes should be relative, not absolute. What about the following question

“Imagine a room full of 100 people with the same amount of money. With every tick of the clock, every person gives a fixed percentage of his money to one randomly chosen other person. After some time progresses, how will the money be distributed?”

The code will be the following: as previously, we match givers and receivers, but here, we have to compute how much each person gives (here it is 1/100 of their money, at each round). At the very first round, this is strictly equivalent to the previous versions: everyone gives 1. The only difference is that, at the second round, those who received nothing at the first one are required to give “only” 99¢.

frac = 1/100
for(s in 1:ns){
r = function(i) sample((1:n)[-i],size=1)
other = Vectorize(r)(1:n)
df = data.frame(dep = 1:n, arr = other, mont = x*frac)
A = aggregate(df$mont,by=list(df$arr),FUN=sum)
dx = A$x
names(dx) = as.character(A$Group.1)
dx = as.numeric(dx[as.character(1:n)])
dx[is.na(dx)] = 0
x = x*(1-frac)+dx
VR = cbind(VR,range(x))
if(s %% 200 ==0) VX = cbind(VX,x)
}

Here it looks like we have some sort of convergence… at least, no one gets less than 75, or more than 125… The distribution can be visualized below

or via the cumulative distribution function

But to be honest, I don’t know what that distribution is…

To conclude, we can also try something (slightly) different: what if we start with non-identical wealths? Instead of having everyone start with a wealth of $100, what if it were uniformly distributed between $0 and $200?

x = seq(0,2*init,length=n)

It looks like we have a convergence towards the same distribution, with clearly less inequality than when we started… Here is the cumulative distribution (that started with the uniform distribution)

Again, if someone knows what that limiting distribution is, I would be glad to hear!

On Cochran Theorem (and Orthogonal Projections)

Cochran Theorem – from The distribution of quadratic forms in a normal system, with applications to the analysis of covariance, published in 1934 – is probably the most important one in a regression course. It is an application of a nice result on quadratic forms of Gaussian vectors. More precisely, we can prove that if \boldsymbol{Y}\sim\mathcal{N}(\boldsymbol{0},\mathbb{I}_d) is a random vector with d \mathcal{N}(0,1) variables then (i) if A is a (square) idempotent matrix, \boldsymbol{Y}^\top A\boldsymbol{Y}\sim\chi^2_r where r is the rank of matrix A, and (ii) conversely, if \boldsymbol{Y}^\top A\boldsymbol{Y}\sim\chi^2_r then A is an idempotent matrix of rank r. And just in case, A being an idempotent matrix means that A^2=A, and a lot of results can be derived (for instance on the eigenvalues). The proof of that result (at least the (i) part) is nice: we diagonalize matrix A, so that A=P\Delta P^\top, with P orthonormal. Since A is an idempotent matrix, observe thatA^2=P\Delta P^\top P\Delta P^\top=P\Delta^2 P^\topwhere \Delta is some diagonal matrix such that \Delta^2=\Delta, so the terms on the diagonal of \Delta are either 0 or 1‘s. And because the rank of A (and \Delta) is r, there should be r 1‘s and d-r 0‘s. Now write\boldsymbol{Y}^\top A\boldsymbol{Y}=\boldsymbol{Y}^\top P\Delta P^\top\boldsymbol{Y}=\boldsymbol{Z}^\top \Delta\boldsymbol{Z}where \boldsymbol{Z}=P^\top\boldsymbol{Y} satisfies \boldsymbol{Z}\sim\mathcal{N}(\boldsymbol{0},P^\top P) i.e. \boldsymbol{Z}\sim\mathcal{N}(\boldsymbol{0},\mathbb{I}_d). Thus \boldsymbol{Z}^\top \Delta\boldsymbol{Z}=\sum_{i:\Delta_{i,i}=1}Z_i^2\sim\chi^2_r. Nice, isn’t it? And there is more (that will actually be strongly connected to Cochran theorem). Let A=A_1+\dots+A_k; then the two following statements are equivalent: (i) A is idempotent and \text{rank}(A)=\text{rank}(A_1)+\dots+\text{rank}(A_k); (ii) the A_i‘s are idempotent and A_iA_j=0 for all i\neq j.
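
We can also check (i) numerically (this small simulation is mine, not from the original post): take a projection matrix, which is idempotent with rank 2, and the quadratic form should then be \chi^2_2 distributed,

set.seed(1)
d = 5
V = matrix(rnorm(2*d), d, 2)             # two directions spanning a plane
A = V %*% solve(t(V) %*% V) %*% t(V)     # orthogonal projection, idempotent, rank 2
Y = matrix(rnorm(1000*d), 1000, d)
Q = rowSums((Y %*% A) * Y)               # quadratic forms Y' A Y
c(mean(Q), var(Q))                       # should be close to (2, 4), i.e. chi^2_2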

Now, let us talk about projections. Let \boldsymbol{y} be a vector in \mathbb{R}^n. Its projection on the space \mathcal V(\boldsymbol{v}_1,\dots,\boldsymbol{v}_p) (generated by those p vectors) is the vector \hat{\boldsymbol{y}}=\boldsymbol{V} \hat{\boldsymbol{a}} that minimizes \|\boldsymbol{y} -\boldsymbol{V} \boldsymbol{a}\| (in \boldsymbol{a}), where \boldsymbol{V}=[\boldsymbol{v}_1,\dots,\boldsymbol{v}_p]. The solution is\hat{\boldsymbol{a}}=( \boldsymbol{V}^\top \boldsymbol{V})^{-1} \boldsymbol{V}^\top \boldsymbol{y} \text{ and } \hat{\boldsymbol{y}} = \boldsymbol{V} \hat{\boldsymbol{a}}
Matrix P=\boldsymbol{V} ( \boldsymbol{V}^\top \boldsymbol{V})^{-1} \boldsymbol{V}^\top is the orthogonal projection onto \mathcal V(\boldsymbol{v}_1,\dots,\boldsymbol{v}_p) and \hat{\boldsymbol{y}} = P\boldsymbol{y}.

Now we can recall Cochran theorem. Let \boldsymbol{Y}\sim\mathcal{N}(\boldsymbol{\mu},\sigma^2\mathbb{I}_d) for some \sigma>0 and \boldsymbol{\mu}. Consider orthogonal subspaces F_1,\dots,F_m of \mathbb{R}^d, with dimensions d_i. Let P_{F_i} be the orthogonal projection matrix onto F_i; then (i) the vectors P_{F_1}\boldsymbol{Y},\dots,P_{F_m}\boldsymbol{Y} are independent, with respective distributions \mathcal{N}(P_{F_i}\boldsymbol{\mu},\sigma^2\mathbb{I}_{d_i}) (expressed in an orthonormal basis of F_i), and (ii) the random variables \|P_{F_i}(\boldsymbol{Y}-\boldsymbol{\mu})\|^2/\sigma^2 are independent and \chi^2_{d_i} distributed.

We can try to visualize those results. For instance, the orthogonal projection of a random vector has a Gaussian distribution. Consider a two-dimensional Gaussian vector

library(mnormt)
r = .7
s1 = 1
s2 = 1
Sig = matrix(c(s1^2,r*s1*s2,r*s1*s2,s2^2),2,2)
Sig
Y = rmnorm(n = 1000,mean=c(0,0),varcov = Sig)
plot(Y,cex=.6)
vu = seq(-4,4,length=101)
vz = outer(vu,vu,function (x,y) dmnorm(cbind(x,y),
mean=c(0,0), varcov = Sig))
contour(vu,vu,vz,add=TRUE,col='blue')
abline(a=0,b=2,col="red")

Consider now the projection of a point \boldsymbol{y}=(y_1,y_2) on the straight line with direction vector \overrightarrow{\boldsymbol{u}}, with slope a (say a=2). To get the projected point \boldsymbol{x}=(x_1,x_2), recall that x_2=ax_1 and \overrightarrow{\boldsymbol{xy}}\perp\overrightarrow{\boldsymbol{u}}. Hence, the following code will give us the orthogonal projections

p = function(a){
x0=(Y[,1]+a*Y[,2])/(1+a^2)
y0=a*x0
cbind(x0,y0)
}

with

P = p(2)
for(i in 1:20) segments(Y[i,1],Y[i,2],P[i,1],P[i,2],lwd=4,col="red")
points(P[,1],P[,2],col="red",cex=.7)

Now, if we look at the distribution of points on that line, we get… a Gaussian distribution, as expected,

z = sqrt(P[,1]^2+P[,2]^2)*c(-1,+1)[(P[,1]>0)*1+1]
vu = seq(-6,6,length=601)
vv = dnorm(vu,mean(z),sd(z))
hist(z,probability = TRUE,breaks = seq(-4,4,by=.25))
lines(vu,vv,col="red")

Of course, we can use the matrix representation to get the projection on \overrightarrow{\boldsymbol{u}}, or rather on a normalized version of that vector

a=2
U = c(1,a)/sqrt(a^2+1)
U
[1] 0.4472136 0.8944272
matP = U %*% solve(t(U) %*% U) %*% t(U)
matP %*% Y[1,]
[,1]
[1,] -0.1120555
[2,] -0.2241110
P[1,]
x0 y0
-0.1120555 -0.2241110

(which is consistent with our manual computation). Now, in Cochran theorem, we start with independent random variables,

Y = rmnorm(n = 1000,mean=c(0,0),varcov = diag(c(1,1)))

Then we consider the projection on \overrightarrow{\boldsymbol{u}} and \overrightarrow{\boldsymbol{v}}=\overrightarrow{\boldsymbol{u}}^\perp

U = c(1,a)/sqrt(a^2+1)
matP1 = U %*% solve(t(U) %*% U) %*% t(U)
P1 = Y %*% matP1
z1 = sqrt(P1[,1]^2+P1[,2]^2)*c(-1,+1)[(P1[,1]>0)*1+1]
V = c(a,-1)/sqrt(a^2+1)
matP2 = V %*% solve(t(V) %*% V) %*% t(V)
P2 = Y %*% matP2
z2 = sqrt(P2[,1]^2+P2[,2]^2)*c(-1,+1)[(P2[,1]>0)*1+1]

We can plot those two projections

plot(z1,z2)

and observe that the two are indeed independent Gaussian variables. And (of course) their squared norms are \chi^2_{1} distributed.
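
We can check that last claim empirically (a small sketch), comparing the histogram of the squared norms with the \chi^2_1 density,

hist(z1^2, probability=TRUE, breaks=50, xlim=c(0,6))
curve(dchisq(x, df=1), add=TRUE, col="red")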

On the conjugate function

In the MAT7381 course (graduate course on regression models), we will talk about optimization, and a classical tool is the so-called conjugate. Given a function f:\mathbb{R}^p\to\mathbb{R}, its conjugate is the function f^{\star}:\mathbb{R}^p\to\mathbb{R} such that f^{\star}(\boldsymbol{y})=\max_{\boldsymbol{x}}\lbrace\boldsymbol{x}^\top\boldsymbol{y}-f(\boldsymbol{x})\rbrace so, long story short, f^{\star}(\boldsymbol{y}) is the maximum gap between the linear function \boldsymbol{x}^\top\boldsymbol{y} and f(\boldsymbol{x}).

Just to visualize, consider a simple parabolic function (in dimension 1) f(x)=x^2/2, then f^{\star}(\color{blue}{2}) is the maximum gap between the line x\mapsto\color{blue}{2}x and function f(x).

x = seq(-100,100,length=6001)
f = function(x) x^2/2
vf = Vectorize(f)(x)
fstar = function(y) max(y*x-vf)
vfstar = Vectorize(fstar)(x)

We can see it on the figure below.

viz = function(x0=1,YL=NA){
idx=which(abs(x)<=3)
par(mfrow=c(1,2))
plot(x[idx],vf[idx],type="l",xlab="",ylab="",col="blue",lwd=2)
abline(h=0,col="grey")
abline(v=0,col="grey")
idx2=which(x0*x>=vf)
polygon(c(x[idx2],rev(x[idx2])),c(vf[idx2],rev(x0*x[idx2])),col=rgb(0,1,0,.3),border=NA)
abline(a=0,b=x0,col="red")
i=which.max(x0*x-vf)
segments(x[i],x0*x[i],x[i],f(x[i]),lwd=3,col="red")
if(is.na(YL)) YL=range(vfstar[idx])
plot(x[idx],vfstar[idx],type="l",xlab="",ylab="",col="red",lwd=1,ylim=YL)
abline(h=0,col="grey")
abline(v=0,col="grey")
segments(x0,0,x0,fstar(x0),lwd=3,col="red")
points(x0,fstar(x0),pch=19,col="red")
}
viz(1)

or

viz(1.5)

In that case, we can actually compute f^{\star}, since f^{\star}(y)=\max_{x}\lbrace xy-f(x)\rbrace=\max_{x}\lbrace xy-x^2/2\rbrace. The first order condition is here x^{\star}=y and thus f^{\star}(y)=\max_{x}\lbrace xy-x^2/2\rbrace= x^{\star}y-(x^{\star})^2/2= y^2-y^2/2=y^2/2. And actually, that can be related to two results. The first one is to observe that f(\boldsymbol{x})=\|\boldsymbol{x}\|_2^2/2 and in that case f^{\star}(\boldsymbol{y})=\|\boldsymbol{y}\|_2^2/2, from the following general result: if f(\boldsymbol{x})=\|\boldsymbol{x}\|_p^p/p with p>1, where \|\cdot\|_p denotes the standard \ell_p norm, then f^{\star}(\boldsymbol{y})=\|\boldsymbol{y}\|_q^q/q where \frac{1}{p}+\frac{1}{q}=1. The second one is the conjugate of a quadratic function: more specifically, if f(\boldsymbol{x})=\boldsymbol{x}^{\top}\boldsymbol{Q}\boldsymbol{x}/2 for some positive definite matrix \boldsymbol{Q}, then f^{\star}(\boldsymbol{y})=\boldsymbol{y}^{\top}\boldsymbol{Q}^{-1}\boldsymbol{y}/2. In our case, it was a univariate problem with \boldsymbol{Q}=1.
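
We can check the quadratic case numerically (a quick sketch, with an arbitrary positive definite matrix \boldsymbol{Q} of my choice), computing the conjugate by direct maximization,

Q = matrix(c(2,.5,.5,1), 2, 2)                        # positive definite
f = function(x) as.numeric(t(x) %*% Q %*% x)/2
fstar_num = function(y) -optim(c(0,0), function(x) f(x) - sum(x*y))$value
y = c(1,2)
fstar_num(y)                                          # numerical conjugate
as.numeric(t(y) %*% solve(Q) %*% y)/2                 # closed form y' Q^{-1} y / 2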

For the conjugate of f(x)=|x|^p/p, we can use the following code to visualize it

p = 3
f = function(x) abs(x)^p/p
vf = Vectorize(f)(x)
fstar = function(y) max(y*x-vf)
vfstar = Vectorize(fstar)(x)
viz(1.5)

or

p = 1.1
f = function(x) abs(x)^p/p
vf = Vectorize(f)(x)
fstar = function(y) max(y*x-vf)
vfstar = Vectorize(fstar)(x)
viz(1, YL=c(0,10))

Actually, in that case, we can almost visualize that if f(x)=|x| then \displaystyle{f^{\star}\left(y\right)={\begin{cases}0,&\left|y\right|\leq 1\\\infty ,&\left|y\right|>1.\end{cases}}}

To conclude, another popular case: if f(x)=\exp(x) then {\displaystyle f^{\star}\left(y\right)={\begin{cases}y\log(y)-y,&y>0\\0,&y=0\\\infty ,&y<0.\end{cases}}} We can visualize that case below

f = function(x) exp(x)
vf = Vectorize(f)(x)
fstar = function(y) max(y*x-vf)
vfstar = Vectorize(fstar)(x)
viz(1,YL=c(-3,3))

Pareto models for risk management

Our paper, with Emmanuel Flachaire, “Pareto models for risk management” is now online…

The Pareto model is very popular in risk management, since simple analytical formulas can be derived for financial downside risk measures (Value-at-Risk, Expected Shortfall) or reinsurance premiums and related quantities (Large Claim Index, Return Period). Nevertheless, in practice, distributions are (strictly) Pareto only in the tails, above a (possibly very) large threshold. Therefore, it could be interesting to take second-order behavior into account to provide a better fit. In this article, we present how to go from a strict Pareto model to Pareto-type distributions. We discuss inference, derive formulas for various measures and indices, and finally provide applications on insurance losses and financial risks.