Additional thoughts about ‘Lorenz curves’ to compare models

A few months ago, I mentioned a graph of so-called Lorenz curves used to compare regression models, see e.g. Progressive's slides (thanks Guillaume for the reference).

The idea is simple. Consider some model for the pure premium (in insurance, the quantity we like to model), i.e. the conditional expected value m(x) = E[Y | X = x].

On some dataset, we have our predictions, say the ŷ_i's, as well as the observed quantities, the y_i's. The curve is obtained simply:

  • sort the observations by decreasing predicted value, so that ŷ_(1) ≥ ŷ_(2) ≥ … ≥ ŷ_(n)

  • based on that ordering (from high risks to low risks, according to our predictions), we plot the Lorenz curve, i.e. the cumulative proportion of observed losses against the cumulative proportion of policies, as in the sketch below
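To make the construction concrete, here is a minimal sketch (the function name lorenz_curve is mine, it does not appear elsewhere in this post): it takes the observed quantities and the predictions used to order them, and returns the points of the curve,

lorenz_curve = function(y, score){
  ys = y[order(score, decreasing=TRUE)]   # sort observations by decreasing prediction
  c(0, cumsum(ys))/sum(ys)                # cumulative share of the total
}
# e.g., with random scores, the curve should stay close to the diagonal
y = rexp(20)
plot((0:20)/20, lorenz_curve(y, runif(20)), type="l")
segments(0,0,1,1,lty=2)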

That looks nice, but as mentioned in my previous post, I have two issues regarding that graph. The first one is the 'upper bound': I do not understand this "perfect model". The second one is the (pseudo) 'lower bound', the "average model". For the latter, at least, I understand how to get it, but I do not understand what it means if we are (sometimes) below that line.

In order to understand a bit more, let us consider the following example.

  • we have some covariates X_i, drawn from a standard Gaussian distribution
  • given those covariates, we have parameters Θ_i = -1 + X_i + ε_i, where ε_i is a Gaussian random effect with standard deviation τ
  • given those parameters, we have observations Y_i = exp(Θ_i + s·η_i), where η_i is a standard Gaussian noise

Let us generate such a dataset,

n = 10
s = 1       # standard deviation of the noise
tau = 1     # standard deviation of the random effect
X = rnorm(n)
Theta = -1+X+rnorm(n)*tau
Y = exp(Theta+rnorm(n)*s)
base = data.frame(X,Theta,Y)

and compare different strategies to get Lorenz curves (a combined plot of the four orderings follows the list):

  • the random case: observations are ordered randomly, which is what we get when all predictions are identical
   Ys = rev(base[sample(1:n),"Y"])
   L  = c(0,cumsum(Ys))/sum(Ys)
  • the partial-information model: we fit a model using our covariates, and we sort observations based on the predictions of that model
   reg = lm(log(Y)~X,data=base)
   base$m = predict(reg)
   Ys = rev(base[order(base$m),"Y"])
   L  = c(0,cumsum(Ys))/sum(Ys)
  • a second partial-information model: assume that we know that the prediction should be an increasing function of the covariate; in that case, we can use the covariate directly to sort our sample
   Ys = rev(base[order(base$X),"Y"])
   L  = c(0,cumsum(Ys))/sum(Ys)
  • the perfect model: here we observe Θ_i. That is clearly the best we can do here (we cannot observe the noise)
   Ys = rev(base[order(base$Theta),"Y"])
   L  = c(0,cumsum(Ys))/sum(Ys)
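Before looking at confidence bands, here is a quick combined sketch (my own, reusing the lorenz_curve helper above and the data-generating mechanism of the post) drawing the four curves on a single simulated sample,

n = 100
s = 1
tau = 1
X = rnorm(n)
Theta = -1+X+rnorm(n)*tau
Y = exp(Theta+rnorm(n)*s)
base = data.frame(X,Theta,Y)
reg = lm(log(Y)~X,data=base)
u = (0:n)/n
plot(u, lorenz_curve(base$Y, rnorm(n)), type="l", col="grey", xlab="", ylab="")   # random ordering
lines(u, lorenz_curve(base$Y, predict(reg)), col="blue")                          # fitted model
lines(u, lorenz_curve(base$Y, base$X), col="red")                                 # covariate
lines(u, lorenz_curve(base$Y, base$Theta), col="purple")                          # 'perfect' ordering
segments(0,0,1,1,lty=2)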

The empirical curve will be a function of the number of observations n, the variance τ² of the random effect (which cannot be observed), and the variance s² of the noise. In order to visualize what is going on, let us generate several samples (say 1,000), and plot some confidence intervals (90% and 50%), as well as the average curve, obtained over those simulated samples. For instance, in the random case, use

L1=function(n=10,s=1,ns=1e3,tau=.2){
  CY=matrix(NA,ns,n+1)
  for(i in 1:ns){
    X=rnorm(n)
    Theta=-1+X+rnorm(n)*tau
    S=exp(Theta+rnorm(n)*s)
    m=exp(Theta+.5*var(log(S)))
    base=data.frame(X,Theta,S,m)
    Y=rev(base[sample(1:n),"S"])       # random ordering of the observations
    CY[i,]=c(0,cumsum(Y))/sum(Y)
  }
  Qinf1=apply(CY,2,function(x) quantile(x,.05))
  Qsup1=apply(CY,2,function(x) quantile(x,.95))
  Qinf2=apply(CY,2,function(x) quantile(x,.25))
  Qsup2=apply(CY,2,function(x) quantile(x,.75))
  Qm=apply(CY,2,mean)
  return(data.frame(inf10=Qinf1,inf25=Qinf2,moy=Qm,sup75=Qsup2,sup90=Qsup1))
}

Then use

viz=function(B){
  n=nrow(B)-1
  u=(0:n)/n
  plot(0:1,0:1,col="white",xlab="",ylab="")
  polygon(c(u,rev(u)),c(B$inf10,rev(B$sup90)),col=rgb(0,0,1,.2),border=NA)
  polygon(c(u,rev(u)),c(B$inf25,rev(B$sup75)),col=rgb(0,0,1,.4),border=NA)
  lines(u,B$moy,col="blue",lwd=2)
  segments(0,0,1,1)}

e.g.

viz(L1(s=1,n=100,tau=1))

This is what we were expecting: we are around the diagonal and, on average, we get the diagonal.
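As a quick numerical check (my addition), the average curve returned by L1() is essentially the diagonal: by symmetry of the random ordering, the expected cumulative share after i observations is exactly i/n,

B = L1(s=1, n=100, tau=1)
max(abs(B$moy - (0:100)/100))   # small, only Monte Carlo noise remains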

If we now consider a partial-information model (here, sorting the observations according to the covariate),

L2=function(n=10,s=1,ns=1e3,tau=.2){
  CY=matrix(NA,ns,n+1)
  for(i in 1:ns){
    X=rnorm(n)
    Theta=-1+X+rnorm(n)*tau
    S=exp(Theta+rnorm(n)*s)
    m=exp(Theta+.5*var(log(S)))
    base=data.frame(X,Theta,S,m)
    Y=rev(base[order(base$X),"S"])       # sort using the covariate
    CY[i,]=c(0,cumsum(Y))/sum(Y)
  }
  Qinf1=apply(CY,2,function(x) quantile(x,.05))
  Qsup1=apply(CY,2,function(x) quantile(x,.95))
  Qinf2=apply(CY,2,function(x) quantile(x,.25))
  Qsup2=apply(CY,2,function(x) quantile(x,.75))
  Qm=apply(CY,2,mean)
  return(data.frame(inf10=Qinf1,inf25=Qinf2,moy=Qm,sup75=Qsup2,sup90=Qsup1))
}
 
viz(L2(s=1,n=100,tau=1))

We have here a curve above the first diagonal, as in the Figure mentioned in the introduction of this post. Note that if we fit a regression and sort using its predictions, instead of using the covariate directly, we get a rather similar graph (a quick check follows the figure),

L2b=function(n=10,s=1,ns=1e3,tau=.2){
  CY=matrix(NA,ns,n+1)
  for(i in 1:ns){
    X=rnorm(n)
    Theta=-1+X+rnorm(n)*tau
    S=exp(Theta+rnorm(n)*s)
    m=exp(Theta+.5*var(log(S)))
    base=data.frame(X,Theta,S,m)
    reg=lm(log(S)~X,data=base)
    base$m=predict(reg)
    Y=rev(base[order(base$m),"S"])       # sort using the fitted values
    CY[i,]=c(0,cumsum(Y))/sum(Y)
  }
  Qinf1=apply(CY,2,function(x) quantile(x,.05))
  Qsup1=apply(CY,2,function(x) quantile(x,.95))
  Qinf2=apply(CY,2,function(x) quantile(x,.25))
  Qsup2=apply(CY,2,function(x) quantile(x,.75))
  Qm=apply(CY,2,mean)
  return(data.frame(inf10=Qinf1,inf25=Qinf2,moy=Qm,sup75=Qsup2,sup90=Qsup1))
}
 
viz(L2b(s=1,n=100,tau=1))
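The similarity is not a coincidence: with a single covariate, the fitted values of the log-linear regression are a + b·X, an increasing function of X whenever the estimated slope b is positive, so both orderings are identical. A small check (my own sketch, not from the post),

X = rnorm(100)
Theta = -1+X+rnorm(100)
S = exp(Theta+rnorm(100))
base = data.frame(X,S)
reg = lm(log(S)~X, data=base)
identical(order(predict(reg)), order(base$X))   # TRUE whenever coef(reg)["X"] > 0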

And finally, our perfect model would be

L3=function(n=10,s=1,ns=1e3,tau=.2){
  CY=matrix(NA,ns,n+1)
  for(i in 1:ns){
    X=rnorm(n)
    Theta=-1+X+rnorm(n)*tau
    S=exp(Theta+rnorm(n)*s)
    m=exp(Theta+.5*var(log(S)))
    base=data.frame(X,Theta,S,m)
    Y=rev(base[order(base$m),"S"])       # m is an increasing function of Theta: same ordering as the true Theta
    CY[i,]=c(0,cumsum(Y))/sum(Y)
  }
  Qinf1=apply(CY,2,function(x) quantile(x,.05))
  Qsup1=apply(CY,2,function(x) quantile(x,.95))
  Qinf2=apply(CY,2,function(x) quantile(x,.25))
  Qsup2=apply(CY,2,function(x) quantile(x,.75))
  Qm=apply(CY,2,mean)
  return(data.frame(inf10=Qinf1,inf25=Qinf2,moy=Qm,sup75=Qsup2,sup90=Qsup1))
}
 
viz(L3(s=1,n=100,tau=1))

The curve is here in the upper-left corner, again, as in the introductory Figure.
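As a side note (a refactoring of my own, not from the post), the four functions above only differ in the ordering used, so they can be collapsed into a single simulator that takes the scoring rule as an argument,

Lgen = function(n=10, s=1, ns=1e3, tau=.2,
                score=function(base) runif(nrow(base))){   # default: random ordering
  CY = matrix(NA, ns, n+1)
  for(i in 1:ns){
    X = rnorm(n)
    Theta = -1+X+rnorm(n)*tau
    S = exp(Theta+rnorm(n)*s)
    base = data.frame(X,Theta,S)
    Ys = rev(base[order(score(base)),"S"])
    CY[i,] = c(0,cumsum(Ys))/sum(Ys)
  }
  data.frame(inf10=apply(CY,2,quantile,.05),
             inf25=apply(CY,2,quantile,.25),
             moy  =apply(CY,2,mean),
             sup75=apply(CY,2,quantile,.75),
             sup90=apply(CY,2,quantile,.95))
}
# e.g. the 'perfect model' case
# viz(Lgen(s=1, n=100, tau=1, score=function(b) b$Theta))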

But if we start playing with those functions, we can get weird results, e.g.

viz(L3(s=5,n=100,tau=1))   # much larger noise variance

Here, the variance of the noise is much larger. This might happen in insurance when dealing with liability claims, with large variances and heavy tails. In that case, observe that the perfect model can even be below the diagonal, which makes me more and more suspicious regarding the use of such graphical tools in insurance to assess whether a model is good or not: the perfect model (where the true values of the Θ_i's are used to sort the observations) might seem worse than a random one (where identical predictions are used for all policies).
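To quantify this (my own addition, reusing the lorenz_curve helper sketched earlier), one can compute a signed Gini-type index, twice the area between the curve and the diagonal: with a very large noise variance it can be close to zero, or even negative, on a given sample,

gini_index = function(L){
  n = length(L)-1
  u = (0:n)/n
  auc = sum(diff(u)*(head(L,-1)+tail(L,-1))/2)   # trapezoidal area under the curve
  2*(auc-.5)                                     # negative if the curve is mostly below the diagonal
}
# one heavy-tailed sample, sorted using the true Theta
n = 100; tau = 1; s = 5
X = rnorm(n); Theta = -1+X+rnorm(n)*tau; Y = exp(Theta+rnorm(n)*s)
gini_index(lorenz_curve(Y, Theta))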



6 thoughts on “Additional thoughts about ‘Lorenz curves’ to compare models”

  1. I am using this approach for risk segmentation in insurance.
    The Gini index is useful to identify the insureds who will most likely not file a claim, but also those for whom there will be a significant (and negative, for the insurer) difference between premium and loss. The first ones are a profit for the insurer; the second ones are sure losses.
    There is a crucial aspect to consider, mentioned by Frees: how much information does the insured have that the insurer does not (adverse selection)?
    Based on this:
    1) the Gini index is an approximation of the insurer's profit;
    2) more risk segmentation means more profit, but insureds with lower risks could move towards more "advantageous" insurers.
    The Gini index makes it possible to compare different pricing methods (e.g., different types of regressions: boosting, LASSO, etc.).
    The mini-max criterion is still not clear to me; I hope someone can explain it to me.

  2. Hi there,

    Interesting blog.
    I’m looking into this for model comparison in insurance ratemaking, but I still have some trouble explaining these curves clearly.
    What’s your view on this type of plot nowadays? Do you think they can be used for model selection in regression and, if not, do you have other suggestions or interesting reads I could consider? By the way, the link “Progressive’s slides” just takes you to a homepage somewhere, not the actual slides.

    Thanks a lot for any advice!

  3. In all cases I’ve used or seen other people use GINI/Lorenz curves: a model is assessed relative to another model and both are compared to the line of equality, not to the ‘perfect pricing’ line.

    It is interesting to know that data with large variations could cause the perfect model to fall below the line of equality; however, I don’t know of any goodness-of-fit test that is 100% immune to this.

  4. This looks a lot like a Lift Curve. The best way to judge your results is to plot your second graph and then overlay your curve: it needs to be outside of the diagonal’s CI. (In my office, we call it “outside of the banana”.) Or maybe I’m misunderstanding your graphs.

    1. that’s possible… the only proper definitions of lift curves I can find are for classifiers. If you have a nice reference about lift curves for regression models (or regression trees, random forests, etc.), I am interested!
