This afternoon, André sent me an interesting graph about the use of Lorenz curves in the context of insurance pricing (and modeling).
It is some sort of Lorenz curve, upside-down, with the proportion of the population on the x-axis and the proportion of the losses on the y-axis. The important point is that the population is sorted according to their risk, i.e. their premium. The code to generate such a curve is actually quite simple,
# L(u): share of total losses coming from the proportion u of the policies
# with the largest values of varx (columns of the global data frame 'base')
L <- function(u,varx="premium",vary="losses"){
  base=base[order(base[,varx],decreasing=TRUE),]
  base$cum=(1:nrow(base))/nrow(base)
  return(sum(base[base$cum<=u,vary])/sum(base[,vary]))
}
vu=seq(0,1,by=.01)
vv=Vectorize(function(u) L(u))(vu)
My concern was more about two labels on the figure, with “perfect pricing” on the top-left and “average pricing” on the first diagonal. What could that possibly mean? Is there even such a thing as a “perfect pricing”? In order to understand what we plot here, let us generate some dataset and fit some models, including things that might be seen as the “perfect model”: the price based on the parameters used to generate the data, and the model used to generate the data, fitted on the data.
But first of all, let us generate some data,
> n = 1e5
> X1 = runif(n)
> X2 = 1+rexp(n)
> s = -3+X1-X2
> p = exp(s)/(1+exp(s))
> m = exp(X2)
> v = 2
> Y = (runif(n)<p)*rgamma(n,shape=m^2/v,scale=v/m)
> base <- data.frame(Y,X1,X2)
There are here 100,000 observations, two covariates, a logistic model for the occurrence of a claim, and, when there is a claim, a Gamma model for the loss size, with a log link function.
My first “perfect pricing” model is the one obtained using the probability used to generate the occurrence of the claims, and the expected value of the loss size,
> base$p0 = p*m
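Note, in passing (this quick check is not in the original computation), that p0 is the pure premium, since E[Y|X1,X2] = P(Y>0|X1,X2) x E[Y|Y>0,X1,X2] = p*m, so its average should be close to the average simulated loss,

# sanity check: by the law of total expectation, mean(p0) should be close to mean(Y)
c(mean(base$Y), mean(base$p0))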
The second “perfect model” is the one we can get by fitting the model we used to generate the dataset,
> sbase = subset(base,Y>0)
> reg_freq = glm((Y>0)~.,data=base,family=binomial)
> reg_cost = glm(Y~X2,data=sbase,family=Gamma(link="log"))
> Freq = predict(reg_freq,newdata=base,type="response")
> Cost = predict(reg_cost,newdata=base,type="response")
> Premium = Freq*Cost
> base$p1 = Premium
Here we have
> head(base)
  Y          X1       X2         p1         p0
1 0 0.007109038 1.739082 0.04802496 0.04970442
2 0 0.014693911 1.548622 0.04849476 0.04998725
3 0 0.683403423 1.965448 0.09353131 0.09726497
4 0 0.929720222 2.273194 0.11897077 0.12453264
5 0 0.275401199 1.699431 0.06266035 0.06479587
6 0 0.811859695 2.026768 0.10611473 0.11049269
To visualize the two Lorenz curves, use
> vu = seq(0,1,by=.01)
> vv = Vectorize(function(u) L(u,"p1","Y"))(vu)
> plot(vu,vv,type="l",col="red")
> vv = Vectorize(function(u) L(u,"p0","Y"))(vu)
> lines(vu,vv,type="l",col="blue")
It looks like our “perfect” curves are much closer to the diagonal than the one we had on the figure above. If we compute some sort of Gini index, we get less than 20%,
> sum(vv)/100*2-1
[1] 0.1821662
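As a side check (this small helper is not in the original post), the same kind of index can be computed as twice the area under the curve minus one, using the trapezoidal rule on the (vu, vv) grid; on such a fine grid it should be very close to the rough sum above,

# Gini-type index: 2 * (area under the curve) - 1, via the trapezoidal rule
gini_trapz <- function(u, v){
  2 * sum(diff(u) * (head(v, -1) + tail(v, -1)) / 2) - 1
}
gini_trapz(vu, vv)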
Before trying to see if we can improve it, maybe we can try to see what an “average pricing” could mean. If the connection with the Lorenz curve is valid, then it should be the “egalitarian” version, i.e. no segmentation.
> base$p3 = mean(base$Y)
> base = base[sample(1:nrow(base)),]
Here we shuffle the rows because there will be ties in the premium when we sort it. The Lorenz curve is then
> vv = Vectorize(function(u) L(u,"p3","Y"))(vu)
> sum(vv)/100*2-1
[1] 0.04438427
We are not far from the diagonal, with a Gini index of almost 5%. Let us get back to the possible “perfect pricing”. Actually, from a modelling point of view (and not an actuarial one, I believe), the “perfect pricing” might be an a posteriori pricing: everyone pays a premium equal to their actual loss.
> base$p4 = base$Y
> vv = Vectorize(function(u) L(u,"p4","Y"))(vu)
> lines(vu,vv,type="l",lwd=2)
Indeed, using that technique, we can reach the upper corner of the figure. But from an economic perspective, it cannot be seen as “perfect” since there is no longer any risk pooling. And from a statistician's perspective, it might be perfect, but clearly with zero predictive power! It can be seen as a heavily over-fitted model, that works perfectly on the training sample, but will have no predictive power at all on a validation sample.
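To illustrate that last point (this experiment is not in the original post), we can simulate a second, independent year of losses from the same covariates, and use the observed first-year losses as the premium: the index computed against the new losses should be much lower than the in-sample one, which is close to its upper bound,

# simulate a second, independent year of losses from the covariates in 'base'
base$Y2 <- with(base, {
  p2 <- exp(-3+X1-X2)/(1+exp(-3+X1-X2))   # same frequency model as above
  m2 <- exp(X2)                           # same expected severity as above
  (runif(nrow(base)) < p2) * rgamma(nrow(base), shape = m2^2/v, scale = v/m2)
})
# in-sample vs. out-of-sample index for the a posteriori premium p4 = Y
vv_in  <- Vectorize(function(u) L(u,"p4","Y" ))(vu)
vv_out <- Vectorize(function(u) L(u,"p4","Y2"))(vu)
c(sum(vv_in)/100*2-1, sum(vv_out)/100*2-1)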
Let us try something a bit more realistic, like some machine learning algorithm, for instance a boosting one (since many people doing data science claim that boosting is usually a great technique, with great predictive performance).
> library(dismo)
> reg = gbm.step(data=df, gbm.x = 2:3, gbm.y = 1,
+   family = "gaussian", tree.complexity = 5,
+   learning.rate = 0.001, bag.fraction = 0.5)
GBM STEP - version 2.9
Performing cross-validation optimisation of a boosted regression tree model
for Y and using a family of gaussian
Using 400 observations and 2 predictors
creating 10 initial models of 50 trees
folds are unstratified
total mean deviance = 0.2194
tolerance is fixed at 2e-04
ntrees    resid. dev.
50        0.2145
now adding trees...
100       0.2094
150       0.2048
200       0.2006
250       0.1969
(etc)
2050      0.1693
2100      0.1695
fitting final gbm model with a fixed number of 1700 trees for Y
mean total deviance = 0.219
mean residual deviance = 0.139
estimated cv deviance = 0.169 ; se = 0.014
training data correlation = 0.628
cv correlation = 0.495 ; se = 0.057
elapsed time - 0.1 minutes
> base$p5 = predict(reg,newdata=df,n.trees=1700)
If we visualize the Lorenz curve, we get
> vv = Vectorize(function(u) L(u,"p5","Y"))(vu)
> lines(vu,vv,type="l",lwd=2)
Here it looks like the “performance” of the boosting algorithm is the same as using an “average pricing”.
I still have some trouble analyzing those curves. I know that some insurance companies use them intensively, but the only article I could find online is a paper by Jed Frees with Glenn Meyers and Dave Cummins, Insurance Ratemaking and a Gini Index (recently published in the Journal of Risk and Insurance). If someone has more experience with those curves, and about the possible upper bound with a “realistic” model, that could be interpreted as “perfect”, I would be glad to hear about it…
Also, I realize now the “pricing” labeling is very confusing. That's because Gini curves are generally used for classification plans, where the actual price is sort of irrelevant; only the differences (or relativities) of prices across risk groups matter. Essentially the Gini answers the question “How well is the classification segregating good risks from bad?” It does not say anything about the overall adequacy of rates.
My bad if that was mentioned in the post and I didn't get it.
P.S. Great blog!
“Perfect Pricing” refers to the hypothetical ideal model using all predictive variables. Because of the natural variation in the data you will never predict losses perfectly, which is why it's not the same as the premium=loss line (a similar concept to r^2, where you can have r^2 = 1.00 even though the line doesn't hit every data point).
Also, I've always seen the Lorenz curve graphed below the line of equality. Not that it makes a difference.
There’s some more info on Gini Curves on the CAS site:
http://www.casact.org/search/?q=gini
Thanks Brandon for the feedback.
I am only discovering those graphs, but so far, I could not find any discussion about them, as if the interpretation were obviously intuitive. About the 'perfect pricing', I still have a lot of questions, for instance about “all predictive variables”: should it be “all available predictive variables” or “all predictive variables”? Risk can be related to some risk factor Z, but we only have access to some covariates X=(X1,…,Xk) that are a 'good' proxy for the risk. I guess we can compute, running all possible models, the best possible model using available covariates (i.e. getting m = argmin E[L(Y,m(X))] as in machine learning techniques, e.g. m(x)=E[Y|X=x] in the context of an L2 loss function). But can we do better than that if we could get additional covariates?
My understanding is that what we observe is m(Z)+noise. We would love to get m(Z) but we can only get m(X). And we get the red and the blue lines. Including the noise, we have the curve in the top-left area: we have a model that is clearly overfitting, since ex-post pricing will never give a predictive model. But I have the feeling that, on real data, there is a difference between the 'perfect pricing' and the one we could get from any algorithm based on available data. This 'perfect pricing' should be between the black curve (too noisy) and the red/blue curves (only based on available covariates).
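Just to illustrate that gap on the simulated data of the post (a rough sketch, nothing more): pretend that only X2 is observed, X1 playing the role of the inaccessible risk factor, and refit the pricing model,

# price with X2 only, as if X1 (the rest of the risk factor Z) were unavailable
reg_freq_x2 <- glm((Y>0) ~ X2, data = base, family = binomial)
reg_cost_x2 <- glm(Y ~ X2, data = subset(base, Y>0), family = Gamma(link="log"))
base$p_x2 <- predict(reg_freq_x2, newdata = base, type = "response") *
             predict(reg_cost_x2, newdata = base, type = "response")
vv_x2 <- Vectorize(function(u) L(u,"p_x2","Y"))(vu)
sum(vv_x2)/100*2-1

The index should drop well below the 18% obtained with both covariates, since here the X2-only premium is almost flat (the claim frequency decreases with X2 while the severity increases with it).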
Thanks for the link. I found it hard to find references on that topic, since some people mention 'gain curves', others 'lift curves' or 'Lorenz curves' (and indeed, some invert the x and the y axes, and curves are below the first diagonal in some graphs). I will look at that carefully, and probably write some more posts on that topic. I really want to challenge those graphs, to see what's going on. I'll keep you posted!
Looking at the straight line for the average pricing suggests the noisy claim sizes are not part of the chart. We can order the population by risk (assuming we know how to do it perfectly) and then plot the cumulative premiums against the cumulative proportion of the population.
A constant premium equal to the average loss per person should give a straight line. Other implementable premium vectors will yield curves above it (as the premiums for riskier people will be higher).
If any premium vector were implementable, the expected loss per person would be a good choice and would yield the top curve.
Setting X1=10*runif(n) in the code gives a curve similar to the chart.
Indeed, if I change the distribution of the first variable, I get a Lorenz curve more in the upper corner, but in that case the 'a posteriori' premium is exactly at the same level as our pricing. Again, I find that graph puzzling, and difficult to interpret (I cannot include it in the comment, so I just add it as a link)
> n=1e5
> X1=runif(n)*10
> X2=1+rexp(n)
> s=-3+X1-X2
> p=exp(s)/(1+exp(s))
> m=exp(X2)
> v=2
> Y <- (runif(n)<p)*
rgamma(n,shape=m^2/v,scale=v/m)
> base <- data.frame(Y,X1,X2)
> sbase=subset(base,Y>0)
> reg_freq=glm((Y>0)~.,data=base,
family=binomial)
> reg_cost=glm(Y~X2,data=sbase,
family=Gamma(link="log"))
> Freq=predict(reg_freq,newdata=base,
type="response")
> Cost=predict(reg_cost,newdata=base,
type="response")
> Premium=Freq*Cost
> base$p1=Premium
> base$p0=p*m
> vu=seq(0,1,by=.01)
> vv1=Vectorize(function(u) L2(u,"p1"))(vu)
> plot(vu,vv1,type="l",col="red")
> base$p3=mean(base$Y)
> vv=Vectorize(function(u) L2(u,"p3"))(vu)
> lines(vu,vv,type="l",col="blue")
> base$p4=base$Y
> base=base[sample(1:nrow(base)),]
> vv=Vectorize(function(u) L2(u,"p4"))(vu)
> lines(vu,vv,type="l")
why isn’t the blue line straight?
here is the L2 function I used:
L2 <- function(u,varx="premium"){
base=base[order(base[,varx],decreasing=TRUE),]
base$cum=(1:nrow(base))/nrow(base)
return(sum(base[base$cum<=u,varx])/
sum(base[,varx]))}
Yes, because on your x-axis you use the share of the premium; actually this is what I did in the first place, and I also have the feeling that this graph makes sense: a good model is on the diagonal. That means that 20% of the losses are actually paid by the insured paying 20% of the premiums. I completely agree with you.
But it looks like people prefer to use the one I use in this post, where on the x-axis we plot the proportion of the insured. So 20% does not mean the riskiest insured paying 20% of the total earned premium, but 20% of the insured.
But again, I agree with you, it would make much more sense: the diagonal would be the ‘perfect model’ (or at least a fair one), while we can still compute the ‘non-segmented’ one, where every policyholder is paying the same premium.
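For completeness, here is a quick sketch of that variant (not in the original exchange; the L3 function below is just an ad hoc helper), with the cumulative share of premium on the x-axis and the cumulative share of losses on the y-axis, policies being sorted by decreasing premium,

# variant of the curve: x = cumulative share of premium, y = cumulative share
# of losses, policies sorted by decreasing premium (here using p1 and Y)
L3 <- function(base, varx = "p1", vary = "Y"){
  base <- base[order(base[, varx], decreasing = TRUE), ]
  list(x = cumsum(base[, varx]) / sum(base[, varx]),
       y = cumsum(base[, vary]) / sum(base[, vary]))
}
crv <- L3(base)
plot(crv$x, crv$y, type = "l", col = "red")
abline(0, 1, lty = 2)  # a 'fair' pricing should stay close to this diagonal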