Combining the levels of a factor variable

A quick post to come back to a point we saw this morning in the STT5100 course, to illustrate Fisher's test. We will use the Polish apartment price data (a dataset used quite a lot in my draft lecture notes)

library(DALEX)
data(apartments)
with(data = apartments, boxplot(m2.price ~ district))

We would like to merge some of the levels here (which is actually suggested by the simple regression, since 5 explanatory variables are non-significant). To see things more clearly, we can reorder the levels

A = with(data = apartments, aggregate(m2.price,by=list(district),FUN=mean))
A = A[order(A$x),]
L = as.character(A$Group.1)
apartments$district = factor(apartments$district, levels=L)
with(data = apartments, boxplot(m2.price ~ district))

We will take the cheapest district as the reference here,

reg=lm(m2.price ~ district, data=apartments)
summary(reg)
 
Coefficients:
                    Estimate Std. Error t value Pr(>|t|)    
(Intercept)          2968.36      58.02  51.160   <2e-16 ***
districtBielany        17.38      84.16   0.207    0.836    
districtPraga          26.45      85.12   0.311    0.756    
districtUrsynow        42.01      82.65   0.508    0.611    
districtBemowo         80.10      83.71   0.957    0.339    
districtUrsus         102.01      82.25   1.240    0.215    
districtZoliborz      829.59      83.94   9.884   <2e-16 ***
districtMokotow       887.10      81.86  10.837   <2e-16 ***
districtOchota        987.93      84.16  11.738   <2e-16 ***
districtSrodmiescie  2214.39      83.28  26.591   <2e-16 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
 
Residual standard error: 597.4 on 990 degrees of freedom
Multiple R-squared:  0.5698,	Adjusted R-squared:  0.5659 
F-statistic: 145.7 on 9 and 990 DF,  p-value: < 2.2e-16

We can test whether the first 5 coefficients are all zero, which is a multiple test, and we will use Fisher's test:

library(car)
linearHypothesis(reg, c("districtBielany = 0", 
                        "districtPraga = 0",
                        "districtUrsynow = 0",
                        "districtBemowo = 0",
                        "districtUrsus = 0"))
Linear hypothesis test
 
Model 1: restricted model
Model 2: m2.price ~ district
 
  Res.Df       RSS Df Sum of Sq      F Pr(>F)
1    995 354051715                           
2    990 353269202  5    782513 0.4386 0.8217
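
As a quick sanity check (a small sketch, using only the two residual sums of squares printed above), we can recompute that F statistic by hand,

rss0 = 354051715       # RSS of the restricted model (995 df)
rss1 = 353269202       # RSS of the full model (990 df)
Fstat = ((rss0 - rss1)/5)/(rss1/990)
Fstat                  # 0.4386
1 - pf(Fstat, 5, 990)  # 0.8217, the p-value in the table above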

The Fisher statistic is small, with a p-value of 82%. We can push our luck, and add one more level

library(car)
linearHypothesis(reg, c("districtBielany = 0", 
                        "districtPraga = 0",
                        "districtUrsynow = 0",
                        "districtBemowo = 0",
                        "districtUrsus = 0",
                        "districtZoliborz = 0"))
Linear hypothesis test
 
Model 1: restricted model
Model 2: m2.price ~ district
 
  Res.Df       RSS Df Sum of Sq      F    Pr(>F)    
1    996 405455409                                  
2    990 353269202  6  52186207 24.374 < 2.2e-16 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

But perhaps we were too greedy this time. We will merge the first 6 levels (and call A the resulting group of districts). If we look at the average prices, per district, we obtain

levels(apartments$district) = c(rep("A",6),levels(apartments$district)[7:10])
with(data = apartments, boxplot(m2.price ~ district))

apartments$district = relevel(apartments$district,"Zoliborz")

We start again, using the cheapest of the remaining districts as the reference, and we want to test whether the next two have zero coefficients in the linear regression.

reg=lm(m2.price ~ district, data=apartments)
linearHypothesis(reg, c("districtMokotow = 0",
                        "districtOchota = 0"))
Linear hypothesis test
 
Model 1: restricted model
Model 2: m2.price ~ district
 
  Res.Df       RSS Df Sum of Sq      F Pr(>F)
1    997 355292524                           
2    995 354051715  2   1240809 1.7435 0.1754

With a p-value of 17%, we can accept merging those three levels together. We then have three groups of districts, named A, B and C. We obtain the following boxplots

levels(apartments$district) = c("B","A",rep("B",2),"C")
apartments$district = relevel(apartments$district,"A")
with(data = apartments, boxplot(m2.price ~ district))

I leave it to the braver readers to check, but we do have three genuinely different groups of districts, and if the goal is to predict apartment prices, there is no need for a partition with 10 levels, a partition with 3 is enough!
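
For the record, a minimal sketch of that check: refit the regression on the three groups, and the two non-reference coefficients should come out highly significant,

reg = lm(m2.price ~ district, data = apartments)
summary(reg)$coefficients   # districtB and districtC, against the (cheap) reference A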

The R² to justify causality…

Tonight, Louis (@LouisdeCharson) pointed me to a rather crazy article where, in an interview, a man usually presented as a researcher (I could not find where) made a rather astonishing statement,

“… an old variable explains 85% of the variation of a new variable. We can therefore speak of causality”

Just for fun, I took him at his word, and dug out an old dataset I had used in a previous post, with bicycle traffic in Helsinki, as a function of the temperature. To stay within the class of linear models, I removed a few winter days, and I tried to regress the temperature on day j+1 (the “new variable”, as stated in the interview) on the number of cyclists the day before, namely on day j
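
Since the Helsinki dataset is not reproduced here, here is a minimal sketch of how df0 can be built, assuming a data frame df with one row per (non-winter) day and columns temp and cyclists (both names are assumptions),

df0 = data.frame(temp     = df$temp[-1],             # temperature on day j+1
                 cyclists = df$cyclists[-nrow(df)])  # number of cyclists on day j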

If we look at the regression, we obtain

lm(formula = temp ~ cyclists, data = df0)
Coefficients:
              Estimate Std. Error t value Pr(>|t|)    
(Intercept) -4.170e+00  4.052e-01  -10.29   <2e-16 ***
cyclists     1.066e-03  3.558e-05   29.96   <2e-16 ***
 
Residual standard error: 2.833 on 212 degrees of freedom
Multiple R-squared: 0.809, Adjusted R-squared: 0.8081

in other words, 81% of the variation in the (average) temperature on day j+1 is explained by the number of cyclists on day j, which is the definition of the R² (“the coefficient of determination, denoted R², is the proportion of the variance in the dependent variable that is predictable from the independent variable”). And our man deduces that there is a causal relationship, in other words, in our example, that the number of cyclists on the road on day j causes the temperature on day j+1. Far be it from me to pose as an expert on causal models and climate models, but I doubt that this is the case… otherwise, the solution to stop global warming is all found: we just need to reduce the number of cyclists!
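
Incidentally, since this is a simple regression with a single explanatory variable, the R² is just the squared linear correlation between the two series, so the claim boils down to a correlation of about 0.9 (a one-line check, with df0 as above),

cor(df0$temp, df0$cyclists)^2   # 0.809, the Multiple R-squared above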

 

More on Random dollars for everyone !

Following my post of yesterday evening, Alex (@AlexSablay) suggested that I look at the Boltzmann-Gibbs distribution (e.g. in Yakovenko & Rosser (2009)). There are indeed interesting ideas there, and it looks like it is more or less what we tried to do in the previous post

Again, I found that article hard to read, but at some point, it looks like they mention that the limiting distribution could be a discrete version that tends to the exponential distribution when the size of the population tends to infinity. Here we have 2000 people, so it should be possible to see it…

If we go for 100,000 rounds, the range of wealth is

so it is still hard to say anything about the upper bound… For the distribution of the wealth, at the end, we obtain the following histogram

and the empirical cumulative distribution function is

Here the red line is the exponential distribution…
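
For completeness, here is a sketch of how that comparison can be drawn, assuming x contains the final wealths (as in the code of the previous post),

plot(ecdf(x), xlab = "wealth", main = "")
vu = seq(0, max(x), length = 601)
lines(vu, pexp(vu, rate = 1/mean(x)), col = "red")   # exponential, same mean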

So, indeed, it seems that there is a limiting distribution, and it is the exponential one… And the good thing with stable distributions is that they are some sort of fixed point: if we start with that distribution, we should not move (too much) from it. For instance, if we start with an exponential distribution

x = rexp(n, 1/init)          # exponential starting wealths, with mean init
x = x*init/mean(round(x))    # rescale so the rounded wealths average init
x = round(x)                 # wealths must be whole dollars

the range of the wealth remains very stable

as well as the density (again, it is a (symmetric) kernel-based estimate, with a multiplicative bias at 0, and some negative values)

If we plot the Lorenz curve, we can see that inequalities do not change here

In that case, it is well known that the Lorenz curve is u\mapsto u+(1-u)\log(1-u), and the Gini coefficient is exactly 1/2.
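
As a quick numerical check of that 1/2 (the Gini index being one minus twice the area under the Lorenz curve),

u = seq(0, 1, length = 1e5 + 1)
L = u + (1 - u)*log(1 - u)
L[length(L)] = 1   # the formula returns NaN at u = 1, but the limit is 1
1 - 2*mean(L)      # close to 0.5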

Random dollars for everyone !

https://freakonometrics.hypotheses.org/files/2020/01/counting_money.gif

Over the weekend, Philippe Rivière introduced me to an interesting problem,

Everyone in a room keeps giving dollars to random others.
You’ll never guess what happens next.

It was coming from a post published a few years ago on decisionsciencenews.com… This problem was mentioned in a recent post since it is related to an article published in the American Scientist in November 2019, Is Inequality Inevitable? (that was translated into French last week, for Pour la Science, in a section wrongly entitled Economics since it is only a physicist’s vision of an (old) economic problem) – see also Brewster Kahle’s post.

(for those really interested in the mathematics of inequalities, with a (mathematical) economic perspective, there are countless interesting articles… see at least Tony Atkinson’s book or several articles published in Econometrica – references are given in the slides of the course I gave a few years ago on that topic).

I wanted to try, on my own, because I did not understand most of the posts. My first thought was that the problem is ill-posed. First of all, what is this “giving dollars”? Is it a fixed amount, or a random one? Let us start by assuming that it is fixed. Now, if you know a little bit about gambling and ruin, you can guess that it is very likely that someone will go bankrupt (at least over a very, very long horizon)… what should we do with that person? Actually, those points were clarified in Jordan’s post

“Imagine a room full of 100 people with 100 dollars each. With every tick of the clock, every person with money gives a dollar to one randomly chosen other person. After some time progresses, how will the money be distributed?”

A well-posed problem states that only people with money can give (everyone can receive) and the amount of money given is fixed.

  • A first model (with possible bankruptcy)

First of all, assume that everyone has a fixed amount of money, say 100 (as discussed above), and that each one must give 1 to someone, picked randomly, or more precisely

“every person gives a dollar to one randomly chosen other person”

So, “the other people” for person i means sampling in \{1,2,\cdots,n\}\backslash\{i\}

n = 2000      # number of people
ns = 20000    # number of rounds
init = 100    # initial wealth
x = rep(init, n)
VX = x
VR = c(x[1], x[1])
for(s in 1:ns){
  r = function(i) sample((1:n)[-i], size = 1)  # receiver picked among the others
  other = Vectorize(r)(1:n)                    # one receiver per giver
  dx = table(other)                            # dollars received by each person
  dx = as.numeric(dx[as.character(1:n)])
  dx[is.na(dx)] = 0                            # people who received nothing
  x = x - rep(1, n) + dx                       # everyone gives 1, then receives
  VR = cbind(VR, range(x))                     # track the range of wealths
  if(s %% 200 == 0) VX = cbind(VX, x)          # snapshot every 200 rounds
}

Here, I store the range of the wealth of my 2000 people, and every 200 rounds, I also keep track of the wealths. The plot of the evolution of the range is the following,

As expected, some people will be ruined… and so far, I did nothing, they keep playing… An easy solution would have been to give them an initial endowment of 1000, and not 100. But that is only a temporary solution: over 20,000 rounds, there might be no bankruptcy, but over 200,000 there will be! Before moving to the reflected problem (where only people with money give a dollar), just look at the evolution of the distribution of wealths,

or the evolution of the cumulative distribution

We clearly have more variability as we play. Here, I cannot compute any inequality indices (the Lorenz curve is defined only for positive wealths, for instance).

I did not look at analytical results here. The only thing that I know for sure is that (if there are enough people sharing money) about one third of the people (actually 36.78%, i.e. e^{-1}) will give one dollar and receive nothing… that’s the law of small numbers (that result was mentioned in Jordan’s post).
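
A quick way to see (and check) that e^{-1}: the number of dollars a given person receives in one round is Binomial(n-1, 1/(n-1)), roughly Poisson(1), and a Poisson(1) variable is null with probability e^{-1}. A one-round simulation sketch,

exp(-1)   # 0.3678794
other = sapply(1:n, function(i) sample((1:n)[-i], size = 1))
mean(!(1:n %in% other))   # empirical fraction of people receiving nothing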

  • The reflected problem (with no bankruptcy)

Consider now the reflected problem

“Imagine a room full of people with the same amount of money. With every tick of the clock, every person with money gives a dollar to one randomly chosen other person. After some time progresses, how will the money be distributed?”

(I call that reflected because if someone hits the zero-barrier, it can only go up: that person gives nothing, and can possibly receive)

for(s in 1:ns){
  r = function(i) sample((1:n)[-i], size = 1)
  other = Vectorize(r)(which(x > 0))   # only people with money give
  dx = table(other)
  dx = as.numeric(dx[as.character(1:n)])
  dx[is.na(dx)] = 0
  x = x - (x > 0)*1 + dx               # give 1 only if the wealth is positive
  VR = cbind(VR, range(x))
  if(s %% 200 == 0) VX = cbind(VX, x)
}

Here the range is the following

We are bounded from below (it is not possible to have less than 0) and it seems that the extremely rich people are less rich than before. We can now look at the cumulative distribution function (since there is no density here, because of the mass at 0)

(to get a smooth function, I used a symmetric kernel estimate here, so numerically there are values below 0). Since wealths are positive, we can look at Lorenz curves
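
A minimal sketch of how those Lorenz curves can be drawn from the stored snapshots in VX (this helper is not from the original post),

lorenz = function(x){
  x = sort(x)
  plot(seq(0, 1, length = length(x) + 1), c(0, cumsum(x)/sum(x)), type = "l",
       xlab = "proportion of people", ylab = "proportion of wealth")
  abline(a = 0, b = 1, col = "grey")   # perfect-equality line
}
lorenz(VX[, ncol(VX)])   # Lorenz curve of the latest snapshot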

It seems that there is more and more inequality as we play that reallocation game. But here again, I will have to run more simulations (and actually a lot more*) to see if there is a non-degenerate limit with such a game. Here, the distribution of wealth after n rounds is a homogeneous Markov chain, taking values in \mathbb{N}_+, and using combinatorics, it should be possible to get the transition matrix…

* I did try (during the night), following the advice of Alex (@AlexSablay), and indeed, there is a limiting distribution, see here

  • When the contribution is a fixed part (e.g. 1%) of the wealth

An important issue previously was about additivity: “every person with money gives a dollar”. Inequality measures do not like additive operations, they like multiplicative operations (see Serge-Christophe Kolm’s discussion, for instance), or, in other words, changes should be relative, not absolute. What about the following question

“Imagine a room full of 100 people with the same amount of money. With every tick of the clock, every person gives a fixed percentage of his money to one randomly chosen other person. After some time progresses, how will the money be distributed?”

The code will be the following: as previously, we match givers and receivers, but here, we have to compute how much people give (here it is 1/100 of their money, at each round). At the very first round, this is strictly equivalent to the previous versions: everyone gives 1. The only difference is that, at the second round, those who got nothing at the first one are required to give “only” 99¢.

frac = 1/100
for(s in 1:ns){
  r = function(i) sample((1:n)[-i], size = 1)
  other = Vectorize(r)(1:n)
  df = data.frame(dep = 1:n, arr = other, mont = x*frac)  # giver, receiver, amount
  A = aggregate(df$mont, by = list(df$arr), FUN = sum)    # total received by person
  dx = A$x
  names(dx) = as.character(A$Group.1)
  dx = as.numeric(dx[as.character(1:n)])
  dx[is.na(dx)] = 0
  x = x*(1 - frac) + dx        # give a fraction of the wealth, then receive
  VR = cbind(VR, range(x))
  if(s %% 200 == 0) VX = cbind(VX, x)
}

Here, it looks like we have some sort of convergence… at least, no one gets less than 75, or more than 125… The distribution can be visualized below

or via the cumulative distribution function

But to be honest, I don’t know what that distribution is…

To conclude, we can also try something (slightly) different: what if we start with non-identical wealths? Instead of everyone having a wealth of $100, what if it was uniformly distributed between $0 and $200?

x = seq(0, 2*init, length = n)   # wealths uniformly spread between 0 and 200

It looks like we have convergence towards the same distribution, with clearly less inequality than when we started… Here is the cumulative distribution (which started with the uniform distribution)

Again, if someone knows what that limiting distribution is, I’d be glad to know!

On Cochran Theorem (and Orthogonal Projections)

Cochran’s Theorem – from The distribution of quadratic forms in a normal system, with applications to the analysis of covariance, published in 1934 – is probably the most important one in a regression course. It is an application of a nice result on quadratic forms of Gaussian vectors. More precisely, we can prove that if \boldsymbol{Y}\sim\mathcal{N}(\boldsymbol{0},\mathbb{I}_d) is a random vector with d \mathcal{N}(0,1) variables, then (i) if A is a (square) idempotent matrix, \boldsymbol{Y}^\top A\boldsymbol{Y}\sim\chi^2_r where r is the rank of matrix A, and (ii) conversely, if \boldsymbol{Y}^\top A\boldsymbol{Y}\sim\chi^2_r, then A is an idempotent matrix of rank r. And just in case, A being an idempotent matrix means that A^2=A, and a lot of results can be derived (for instance on the eigenvalues). The proof of that result (at least the (i) part) is nice: we diagonalize matrix A, so that A=P\Delta P^\top, with P orthonormal. Since A is an idempotent matrix, observe that A^2=P\Delta P^\top P\Delta P^\top=P\Delta^2 P^\top, where \Delta is some diagonal matrix such that \Delta^2=\Delta, so the terms on the diagonal of \Delta are either 0’s or 1’s. And because the rank of A (and \Delta) is r, there should be r 1’s and d-r 0’s. Now write \boldsymbol{Y}^\top A\boldsymbol{Y}=\boldsymbol{Y}^\top P\Delta P^\top\boldsymbol{Y}=\boldsymbol{Z}^\top \Delta\boldsymbol{Z} where \boldsymbol{Z}=P^\top\boldsymbol{Y}, which satisfies \boldsymbol{Z}\sim\mathcal{N}(\boldsymbol{0},P^\top P), i.e. \boldsymbol{Z}\sim\mathcal{N}(\boldsymbol{0},\mathbb{I}_d). Thus \boldsymbol{Z}^\top \Delta\boldsymbol{Z}=\sum_{i:\Delta_{i,i}=1}Z_i^2\sim\chi^2_r. Nice, isn’t it? And there is more (that will actually be strongly connected to Cochran’s theorem). Let A=A_1+\dots+A_k; then the two following statements are equivalent: (i) A is idempotent and \text{rank}(A)=\text{rank}(A_1)+\dots+\text{rank}(A_k), (ii) the A_i’s are idempotent and A_iA_j=0 for all i\neq j.
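
We can illustrate the (i) part numerically. Below is a small sketch (not from the original post) where A is the orthogonal projection matrix onto a 2-dimensional subspace of \mathbb{R}^5, hence idempotent with rank 2, and the empirical moments of \boldsymbol{Y}^\top A\boldsymbol{Y} match those of a \chi^2_2 distribution,

set.seed(123)
d = 5
V = matrix(rnorm(d*2), d, 2)            # two random directions in R^5
A = V %*% solve(t(V) %*% V) %*% t(V)    # projection matrix, A %*% A equals A
Q = replicate(1e4, {y = rnorm(d); c(t(y) %*% A %*% y)})
c(mean(Q), var(Q))                      # close to (2, 4), the chi^2_2 moments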

Now, let us talk about projections. Let \boldsymbol{y} be a vector in \mathbb{R}^n. Its projection on the space \mathcal V(\boldsymbol{v}_1,\dots,\boldsymbol{v}_p) (generated by those p vectors) is the vector \hat{\boldsymbol{y}}=\boldsymbol{V} \hat{\boldsymbol{a}} that minimizes \|\boldsymbol{y} -\boldsymbol{V} \boldsymbol{a}\| (in \boldsymbol{a}), where \boldsymbol{V} is the matrix whose columns are \boldsymbol{v}_1,\dots,\boldsymbol{v}_p. The solution is \hat{\boldsymbol{a}}=( \boldsymbol{V}^\top \boldsymbol{V})^{-1} \boldsymbol{V}^\top \boldsymbol{y} \text{ and } \hat{\boldsymbol{y}} = \boldsymbol{V} \hat{\boldsymbol{a}}.
Matrix P=\boldsymbol{V} ( \boldsymbol{V}^\top \boldsymbol{V})^{-1} \boldsymbol{V}^\top is the orthogonal projection on \mathcal V(\boldsymbol{v}_1,\dots,\boldsymbol{v}_p) and \hat{\boldsymbol{y}} = P\boldsymbol{y}.

Now we can recall Cochran’s theorem. Let \boldsymbol{Y}\sim\mathcal{N}(\boldsymbol{\mu},\sigma^2\mathbb{I}_d) for some \sigma>0 and \boldsymbol{\mu}. Consider orthogonal subspaces F_1,\dots,F_m, with dimensions d_i. Let P_{F_i} be the orthogonal projection matrix on F_i; then (i) the vectors P_{F_1}\boldsymbol{Y},\dots,P_{F_m}\boldsymbol{Y} are independent, with respective distributions \mathcal{N}(P_{F_i}\boldsymbol{\mu},\sigma^2\mathbb{I}_{d_i}), and (ii) the random variables \|P_{F_i}(\boldsymbol{Y}-\boldsymbol{\mu})\|^2/\sigma^2 are independent and \chi^2_{d_i} distributed.

We can try to visualize those results. For instance, the orthogonal projection of a random vector has a Gaussian distribution. Consider a two-dimensional Gaussian vector

library(mnormt)
r = .7
s1 = 1
s2 = 1
Sig = matrix(c(s1^2,r*s1*s2,r*s1*s2,s2^2),2,2)
Sig
Y = rmnorm(n = 1000,mean=c(0,0),varcov = Sig)
plot(Y,cex=.6)
vu = seq(-4,4,length=101)
vz = outer(vu,vu,function (x,y) dmnorm(cbind(x,y),
mean=c(0,0), varcov = Sig))
contour(vu,vu,vz,add=TRUE,col='blue')
abline(a=0,b=2,col="red")

Consider now the projection of points \boldsymbol{y}=(y_1,y_2) on the straight line with direction vector \overrightarrow{\boldsymbol{u}} of slope a (say a=2). To get the projected point \boldsymbol{x}=(x_1,x_2), recall that x_2=ax_1 and \overrightarrow{\boldsymbol{xy}}\perp\overrightarrow{\boldsymbol{u}}. Hence, the following code will give us the orthogonal projections

p = function(a){
  x0 = (Y[,1] + a*Y[,2])/(1 + a^2)  # first coordinate of the projection on y = ax
  y0 = a*x0                         # the projected point lies on the line
  cbind(x0, y0)
}

with

P = p(2)
for(i in 1:20) segments(Y[i,1],Y[i,2],P[i,1],P[i,2],lwd=4,col="red")
points(P[,1],P[,2],col="red",cex=.7)

Now, if we look at the distribution of points on that line, we get… a Gaussian distribution, as expected,

z = sqrt(P[,1]^2+P[,2]^2)*c(-1,+1)[(P[,1]>0)*1+1]  # signed coordinate along the line
vu = seq(-6,6,length=601)
vv = dnorm(vu,mean(z),sd(z))
hist(z,probability = TRUE,breaks = seq(-4,4,by=.25))
lines(vu,vv,col="red")

Of course, we can use the matrix representation to get the projection on \overrightarrow{\boldsymbol{u}}, or on a normalized version of that vector actually

a=2
U = c(1,a)/sqrt(a^2+1)
U
[1] 0.4472136 0.8944272
matP = U %*% solve(t(U) %*% U) %*% t(U)
matP %*% Y[1,]
[,1]
[1,] -0.1120555
[2,] -0.2241110
P[1,]
x0 y0
-0.1120555 -0.2241110

(which is consistent with our manual computation). Now, in Cochran’s theorem, we start with independent random variables,

Y = rmnorm(n = 1000,mean=c(0,0),varcov = diag(c(1,1)))

Then we consider the projection on \overrightarrow{\boldsymbol{u}} and \overrightarrow{\boldsymbol{v}}=\overrightarrow{\boldsymbol{u}}^\perp

U = c(1,a)/sqrt(a^2+1)
matP1 = U %*% solve(t(U) %*% U) %*% t(U)
P1 = Y %*% matP1
z1 = sqrt(P1[,1]^2+P1[,2]^2)*c(-1,+1)[(P1[,1]>0)*1+1]
V = c(a,-1)/sqrt(a^2+1)
matP2 = V %*% solve(t(V) %*% V) %*% t(V)
P2 = Y %*% matP2
z2 = sqrt(P2[,1]^2+P2[,2]^2)*c(-1,+1)[(P2[,1]>0)*1+1]

We can plot those two projections

plot(z1,z2)

and observe that the two are, indeed, independent Gaussian variables. And (of course) their squared norms are \chi^2_{1} distributed.
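
To (informally) check that last claim, a quick sketch,

mean(z1^2)                       # close to 1, the mean of a chi^2_1
var(z1^2)                        # close to 2, its variance
ks.test(z1^2, "pchisq", df = 1)  # no rejection expected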

On the conjugate function

In the MAT7381 course (graduate course on regression models), we will talk about optimization, and a classical tool is the so-called conjugate. Given a function f:\mathbb{R}^p\to\mathbb{R}, its conjugate is the function f^{\star}:\mathbb{R}^p\to\mathbb{R} such that f^{\star}(\boldsymbol{y})=\max_{\boldsymbol{x}}\lbrace\boldsymbol{x}^\top\boldsymbol{y}-f(\boldsymbol{x})\rbrace. So, long story short, f^{\star}(\boldsymbol{y}) is the maximum gap between the linear function \boldsymbol{x}^\top\boldsymbol{y} and f(\boldsymbol{x}).

Just to visualize, consider a simple parabolic function (in dimension 1), f(x)=x^2/2; then f^{\star}(\color{blue}{2}) is the maximum gap between the line x\mapsto\color{blue}{2}x and the function f(x).

x = seq(-100, 100, length = 6001)   # grid for the numerical maximization
f = function(x) x^2/2
vf = Vectorize(f)(x)
fstar = function(y) max(y*x - vf)   # numerical version of the conjugate
vfstar = Vectorize(fstar)(x)

We can see it on the figure below.

viz = function(x0=1, YL=NA){
  idx = which(abs(x) <= 3)
  par(mfrow = c(1,2))
  plot(x[idx], vf[idx], type="l", xlab="", ylab="", col="blue", lwd=2)
  abline(h=0, col="grey")
  abline(v=0, col="grey")
  idx2 = which(x0*x >= vf)
  polygon(c(x[idx2], rev(x[idx2])), c(vf[idx2], rev(x0*x[idx2])), col=rgb(0,1,0,.3), border=NA)
  abline(a=0, b=x0, col="red")
  i = which.max(x0*x - vf)
  segments(x[i], x0*x[i], x[i], f(x[i]), lwd=3, col="red")
  if(is.na(YL)) YL = range(vfstar[idx])
  plot(x[idx], vfstar[idx], type="l", xlab="", ylab="", col="red", lwd=1, ylim=YL)
  abline(h=0, col="grey")
  abline(v=0, col="grey")
  segments(x0, 0, x0, fstar(x0), lwd=3, col="red")
  points(x0, fstar(x0), pch=19, col="red")
}
viz(1)
viz(1)

or

viz(1.5)

In that case, we can actually compute f^{\star}, since f^{\star}(y)=\max_{x}\lbrace xy-f(x)\rbrace=\max_{x}\lbrace xy-x^2/2\rbrace. The first order condition is here x^{\star}=y, and thus f^{\star}(y)=x^{\star}y-(x^{\star})^2/2=y^2-y^2/2=y^2/2. And actually, that can be related to two results. The first one is to observe that f(\boldsymbol{x})=\|\boldsymbol{x}\|_2^2/2, and in that case f^{\star}(\boldsymbol{y})=\|\boldsymbol{y}\|_2^2/2, from the following general result: if f(\boldsymbol{x})=\|\boldsymbol{x}\|_p^p/p with p>1, where \|\cdot\|_p denotes the standard \ell_p norm, then f^{\star}(\boldsymbol{y})=\|\boldsymbol{y}\|_q^q/q where \frac{1}{p}+\frac{1}{q}=1. The second one is the conjugate of a quadratic function: if f(\boldsymbol{x})=\boldsymbol{x}^{\top}\boldsymbol{Q}\boldsymbol{x}/2 for some positive definite matrix \boldsymbol{Q}, then f^{\star}(\boldsymbol{y})=\boldsymbol{y}^{\top}\boldsymbol{Q}^{-1}\boldsymbol{y}/2. In our case, it was a univariate problem with \boldsymbol{Q}=1.
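
And we can check that y^2/2 expression numerically, with the fstar function defined above (the grid contains 1.5, so the maximum is attained exactly),

fstar(1.5)  # 1.125
1.5^2/2     # also 1.125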

For the conjugate of the \ell_p norm, we can use the following code to visualize it

p = 3
f = function(x) abs(x)^p/p
vf = Vectorize(f)(x)
fstar = function(y) max(y*x-vf)
vfstar = Vectorize(fstar)(x)
viz(1.5)

or

p = 1.1
f = function(x) abs(x)^p/p
vf = Vectorize(f)(x)
fstar = function(y) max(y*x-vf)
vfstar = Vectorize(fstar)(x)
viz(1, YL=c(0,10))

Actually, in that case, we can almost visualize that if f(x)=|x|, then \displaystyle{f^{\star}\left(y\right)={\begin{cases}0,&\left|y\right|\leq 1\\\infty ,&\left|y\right|>1.\end{cases}}}
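
We can (almost) check that numerically with the same grid; a small sketch, keeping in mind that the grid is finite, so the infinite branch only shows up as a value growing with the grid,

f = function(x) abs(x)
vf = Vectorize(f)(x)
fstar = function(y) max(y*x - vf)
fstar(.5)   # 0, since |y| is smaller than 1
fstar(2)    # 100 here, i.e. (|y|-1)*100 on this grid, diverging as the grid grows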

To conclude, another popular case: f(x)=\exp(x), for which {\displaystyle f^{\star}\left(y\right)={\begin{cases}y\log(y)-y,&y>0\\0,&y=0\\\infty ,&y<0.\end{cases}}} We can visualize that case below

f = function(x) exp(x)
vf = Vectorize(f)(x)
fstar = function(y) max(y*x-vf)
vfstar = Vectorize(fstar)(x)
viz(1,YL=c(-3,3))