Tag Archives: credit

Is there discrimination against the poor?

(Laurence Barry and I wrote a short article, in French, on discrimination against the poor)

In 2013, Martin Hirsch (former director of Emmaüs and of Assistance publique – Hôpitaux de Paris) stated that “it’s getting expensive to be poor”. This reality was confirmed by a recent study in France by the Banque Postale*, which showed that, on average, a poor household has to pay 1,500 euros more each year to access the same goods and services as a better-off household, introducing a “double poverty penalty**”. Unfortunately, insurance is no exception: the use of credit scores in many countries reinforces what could be a form of discrimination against the poor. In the United States, where the practice is more common than elsewhere, a commission of inquiry recently tried to explain the link between credit score and claims frequency: is it because, as some insurers argue, people with lower scores are also less careful than others? Or is it because, having fewer financial means than others, they more readily ask to be compensated for the losses incurred? And, in this second case, are we not also making them pay twice for their condition?

From excellence to wealth as a virtue

On what criteria do we admire people? For the Greeks, excellence, or arête (ἀρετή), was a major virtue. This excellence went beyond moral excellence: in the Greco-Roman world, the term evoked a form of nobility, recognizable by the beauty, strength, courage, or intelligence of the person. Now this excellence had little to do with wealth: thus Herodotus is astonished that the winners of the Olympic games were content with an olive wreath and a “glorious renown”, competing for excellence (περὶ ἀρετῆς) rather than for money. In the Greek ethical vision, especially among the Stoics, a “good life” does not depend on material wealth – a precept pushed to its height by Diogenes who, seeing a child drinking from his hands at the fountain, threw away the bowl that was his only crockery, telling himself that it was yet another useless possession.

Greek society is nevertheless a deeply hierarchical society, even if it is organized around values other than material wealth. We can then ask ourselves at what point in Western history wealth became the measure of all things. One thinks then of Max Weber’s theory: the ethics of Protestantism pushes for work and earthly success as a revelation of a divine election to come: the rich of this world would be the chosen of the next. In the same way Adam Smith, taking a critical look at the birth of capitalism in the society of his time, titles a chapter of The Theory of Moral Sentiments (1759) “Of the corruption of our moral feelings occasioned by that disposition to admire the rich and great, and to despise or neglect the poor and lowly.”

Today, the cult of wealth seems never to have been so strong, and material success is almost elevated to the rank of virtue. Poverty, on the other hand, has become a stigma that is hard to get rid of; but history shows us that this is not natural.

From “good” to “bad” poor

Indeed, the poor have not always been “bad”. As Fulconis & Kikuchi (2017) remind us, the Church largely contributed to disseminating the image of the “good poor”, as it appears in the Gospels: “happy are you poor, the kingdom of God is yours”; or “God could have made all men rich, but he wanted there to be poor people in this world, so that the rich would have an opportunity to redeem their sins”. Beyond this, the poor person is seen as an image of Christ, Jesus having said “whatever you do to the least of these, you do to me”. Helping the poor, doing a work of mercy, is a means of salvation.

For Saint Thomas Aquinas, charity is thus essential to correct social inequalities by redistributing wealth through almsgiving***. In the Middle Ages, merchants were seen as useful, even virtuous, since they allowed wealth to circulate within the community. Priests played the role of social assistants, helping the sick, the elderly and the disabled. The hospices and xenodochia of the Middle Ages (ξενοδοχεῖον, literally a “place for strangers”, from ξένος) are the symbol of this care of the poor. And quite often, poverty was not limited to material capital, but extended to social and cultural capital, to use a more contemporary terminology.

Towards the end of the Middle Ages, the figure of the “bad poor”, the parasitic and dangerous vagabond, appeared. In line with Weber, Todeschini (2021) insists on the increasing value attached to work and social “usefulness”. Brant (1494) was among the first to denounce these welfare recipients: “some become beggars at an age when, young and strong and in full health, one could work: why bother”. For Fulconis & Kikuchi (2017), this mistrust was reinforced by the great pandemic of the Black Death. Colombi (2020) returns to this turning point, at the end of the Middle Ages, when, in the cities, the bourgeois closed off their districts with chains to prevent “the poor and foreigners” from settling there. The hygienist theories of the end of the 19th century added the final touch: if fevers and diseases were caused by insalubrity and poor living conditions, then keeping the poor out protected everyone else from disease.

Poor… by choice?

In the words of Mollat (2006), “the poor are those who, permanently or temporarily, find themselves in a situation of weakness, dependence, humiliation, characterized by the deprivation of means, variable according to the times and the societies, of power and social consideration”. Recently, Cortina (2022) proposed the term “aporophobia”, or “pauvrophobia”, to describe the whole set of prejudices that exist towards the poor. The unemployed are said to be welfare recipients, and lazy, as Lamy (2022) reports; there is also the famous “where there is a will there is a way” (found in contemporary expressions such as “those who don’t want to do anything, those who don’t want to work” or “I’ll cross the street and find you a job”). And, as is often the case, these prejudices, which stigmatize a group, “the poor”, lead to fear or hatred, generating an important cleavage and, finally, a form of discrimination. Cortina’s (2022) “pauvrophobia” is a discrimination against social precariousness, which may be almost more significant than “usual” forms of discrimination, such as racism or xenophobia. Cortina ironically notes that rich foreigners are rarely rejected.

But these prejudices also turn into accusations. Szalavitz (2017) thus bluntly asks the question, “Why do we think poor people are poor because of their own bad choices?”. The “actor-observer” bias provides one element of an answer: we tend to attribute our own choices to the circumstances that constrain us, but to attribute the choices of others to their behavior. In other words, others are poor because they made bad choices, but if I am poor, it is because of an unfair system. This bias also holds for the rich: winners often tend to believe that they got where they are by their own hard work, and that they therefore deserve what they have.

Social science studies show, however, that the poor are rarely poor by choice, and increasing inequality and geographic segregation do not help. The lack of empathy then leads to more polarization, more rejection and, in a vicious circle, even less empathy.

Links between wealth and risk(s)

To discriminate is to distinguish (exclude or prefer) a person because of his or her “personal characteristics”. Can we then speak of discrimination against the poor? Is poverty (like gender or skin color) a personal characteristic? In Quebec, “social condition” (which explicitly includes poverty) is one of the protected grounds, and such discrimination is therefore prohibited. This is not the case in France. As Barry & Charpentier (2021) remind us, when actuaries calculate a premium, discrimination directly linked to risk, provided that the variable is not protected, is generally seen as legitimate. However, it is well known that wealth or social status has a lot to do with risk, whatever the risk. At the global level, Denis Hatzfeld reminds us that “earthquakes are much more deadly in poor countries than in developed countries, which have gradually learned to protect themselves from them.” Similarly, Le Hir (2010) states that “a schoolboy is 400 times more likely to die in an earthquake in Kathmandu than in Tokyo”.

This is true for most risks. In France, executives account for 3% of deaths due to road accidents and blue-collar workers for 15%, while each group represents nearly 20% of the working population, according to ONISR (2022). Blanpain (2018) points out that the gap in life expectancy at birth between the most affluent and the most modest men is 13 years. Recently, Allain (2022) noted that the most modest French people, at comparable age and sex, have almost three times more diabetes, twice as much liver or pancreatic disease, and 1.6 times more chronic respiratory disease than the average. Cambois, Laborde and Robine (2008) similarly noted that blue-collar workers experience many more years of disability, within a shorter life span on average.

The use of credit scores in insurance

In North America, companies such as Experian, Equifax and TransUnion keep records of the borrowing and repayment activities of all individuals with bank accounts. FICO (Fair Isaac Corporation) offers a formula to convert these records into a score, the credit score. This score is a function of debt and available credit, income and its variations, and history of incidents, bankruptcies or simple delinquencies. It is often seen as an assessment of a person’s creditworthiness, or the likelihood that he or she will repay debts. It is by nature closely related to income (Crowe 2022), making the credit score a robust proxy for wealth. Fourcade and Healy (2013) show that, as a good credit score has become a necessary condition for obtaining credit and maintaining purchasing power, this system has come to create an impenetrable wall between advantaged and disadvantaged classes. In a sense, a bad credit score becomes a self-fulfilling prophecy: people with a bad score (and therefore considered high risk by banks) become dependent on short-term alternatives. This increases the costs of future financing, thus the probability of default (François 2021) but also the probability of not finding a job (this score can be requested by employers****). “Using credit scores to punish the poor exacerbates existing socioeconomic inequalities,” Wang (2018) thus aptly asserts.

As an example, Table 1 compares a few parameters based on people’s credit scores, including the rate obtained for a $150,000 loan over a 30-year horizon, and the average premium charged for car insurance (for a 30-year-old driver, driving 20,000 km per year, in the city).

Table 1: Actual price of a $150,000 loan and the amount of a car insurance premium (for comparable coverage and risk profile), based on credit score (ranging from 300 to 850). Source: InCharge Debt Solutions.
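
To get a sense of what such a rate differential represents, recall that for a loan of principal P, monthly rate r and n monthly payments, the constant monthly payment is P·r/(1-(1+r)^(-n)). Here is a minimal sketch in R; the 3.5% and 5% annual rates below are purely illustrative assumptions, not the values of Table 1,

payment = function(P, annual, years){
  r = annual/12                # monthly rate
  n = 12*years                 # number of monthly payments
  P*r/(1-(1+r)^(-n))           # constant monthly payment
}
payment(150000, .035, 30)      # hypothetical rate for a high credit score
payment(150000, .050, 30)      # hypothetical rate for a low credit score
# extra cost over the 360 payments, due to the rate differential
360*(payment(150000, .050, 30) - payment(150000, .035, 30))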

Kiviat (2019) has extensively studied the use of credit scores in the pricing of auto insurance in the United States. The US regulator has indeed looked at the proven link between bad credit scores and auto risk. What explanation can be given for this correlation? Could the score be an indicator of poverty, rather than a proxy for the driver’s prudence, as insurers claim? For if social condition is not a protected variable, as it is in Quebec, it is still largely associated with skin color, which is a prohibited variable. Discriminating on the basis of credit score could therefore amount to prohibited racial discrimination. By examining the debates around these issues, Kiviat highlights the ethical complexity of using facially neutral variables. And if, as noted above, poor living conditions increase risk in general, it is worth asking whether insurance is not helping to inflict on the poor the double penalty that Martin Hirsch spoke of.

References

Allain, S. (2022). Les maladies chroniques touchent plus souvent les personnes modestes et réduisent davantage leur espérance de vie. DREES, 1243.
Blanpain, N. (2018). L’espérance de vie par niveau de vie : chez les hommes, 13 ans d’écart entre les plus aisés et les plus modestes. Insee Première, 1687.
Bourdelais, P. (2001). Les hygiénistes : enjeux, modèles et pratiques. Éditions Belin.
Brant, S. (1494). La Nef des fous (Das Narrenschiff).
Cambois, E., Laborde, C. & Robine, J.M. (2008). La “double peine” des ouvriers : plus d’années d’incapacité au sein d’une vie plus courte. Population & Sociétés, 441.
Caplovitz, D. (1963). The Poor Pay More: Consumer Practices of Low-Income Families. Free Press.
Colombi, D. (2020). Où va l’argent des pauvres. Payot.
Cortina, A. (2022). Aporophobia: Why We Reject the Poor Instead of Helping Them. Princeton University Press.
Crowe, A. (2022). The Relationship Between Income and Credit Score. Credit Sesame Personal Finance and Credit Survey.
Fourcade, M. & Healy, K. (2013). Classification situations: Life-chances in the neoliberal era. Accounting, Organizations and Society, 38(8), 559-572.
François, P. (2021). Catégorisation, individualisation. Retour sur les scores de crédit. Chaire PARI, WP #24.
Fulconis, M. & Kikuchi, C. (2017). Vu du Moyen Âge : du « bon pauvre » au « mauvais pauvre ». The Conversation.
Grossetête, M. (2012). Accidents de la route et inégalités sociales. Les morts, les médias et l’État. Éditions du Croquant.
Kiviat, B. (2019). The moral limits of predictive practices: The case of credit-based insurance scores. American Sociological Review, 84(6), 1134-1158.
Lamy, T. (2022). Assistés, paresseux… pour 50% des Français, les chômeurs sont responsables de leur situation. Capital, December 2022.
Lauer, J. (2017). Creditworthy: A History of Consumer Surveillance and Financial Identity in America. Columbia University Press.
Le Hir, P. (2010). Catastrophes et pauvreté, la double peine. Le Monde, 22 January 2010.
Merton, R. (1968). The Matthew effect in science. Science, 159(3810).
Mollat, M. (2006). Les pauvres au Moyen Âge (Vol. 11). Éditions Complexe.
ONISR (2022). La sécurité routière en France, bilan de l’accidentalité de l’année 2021. https://bit.ly/3QmuR8H
Pratchett, T. (1993). Men at Arms, 15th book of the Discworld series. Victor Gollancz.
Smith, A. (1759). La Théorie des sentiments moraux. Presses Universitaires de France, Quadrige.
Szalavitz, M. (2017). Why do we think poor people are poor because of their own bad choices. The Guardian, 5 July 2017.
Todeschini, G. (2021). Moyen Âge. La pauvreté a-t-elle un sens ? L’Histoire, February 2021.
Wang, J. (2018). Carceral Capitalism. MIT Press.
Weber, M. (1990 [1904]). L’éthique protestante et l’esprit du capitalisme. Pocket.

* Study entitled “Study of the double poverty penalty in France”, published at the end of 2022 by Action Tank Entreprise & Pauvreté, Boston Consulting Group and the Banque Postale.

** This phenomenon, widely studied in the 1960s (see Caplovitz (1963)), is known in economics as the “boots theory” (popularized by Pratchett’s novel (1993)): “take boots, for example. A really good pair of leather boots costs fifty dollars. But an affordable pair of boots, which were sort of OK for a season or two and then leaked like hell when the cardboard gave out, cost about ten dollars (…). But the thing was that good boots lasted for years and years.”

*** The notion of redistribution can be contrasted with the “Matthew effect” as defined by Merton (1968). Inspired by a passage from the Gospel according to St. Matthew (which he reverses), he states that “to him who has shall be given, and he shall have plenty; but to him who has not, even that which he has shall be taken away.”

**** This rating practice is actually not new and dates back to the late 19th century: Josh Lauer (2017) shows that as early as 1870, well before big data or even credit cards, US banks employed assessors to produce financial-strength reports on individuals. For Lauer, it is a gigantic surveillance system that was set up at the beginning of the 20th century, leading to the algorithmic credit scores as we know them today (see also François (2021)).

Classification on the German Credit Database

In our data science course, this morning, we used random forests to improve predictions on the German Credit Dataset. The dataset is

> url="http://freakonometrics.free.fr/german_credit.csv"
> credit=read.csv(url, header = TRUE, sep = ",")

Almost all variables are treated as numeric but, actually, most of them are factors,

> str(credit)
'data.frame':	1000 obs. of  21 variables:
 $ Creditability   : int  1 1 1 1 1 1 1 1 1 1 ...
 $ Account.Balance : int  1 1 2 1 1 1 1 1 4 2 ...
 $ Duration        : int  18 9 12 12 12 10 8  ...
 $ Purpose         : int  2 0 9 0 0 0 0 0 3 3 ...

(etc.). Let us convert the categorical variables to factors,

> F=c(1,2,4,5,7,8,9,10,11,12,13,15,16,17,18,19,20)
> for(i in F) credit[,i]=as.factor(credit[,i])

Let us now create our training/calibration and validation/testing datasets, with proportions 2/3 and 1/3,

> i_test=sample(1:nrow(credit),size=333)
> i_calibration=(1:nrow(credit))[-i_test]

The first model we can fit is a logistic regression, on selected covariates

> LogisticModel <- glm(Creditability ~ Account.Balance + Payment.Status.of.Previous.Credit + Purpose + 
+     Length.of.current.employment + 
+     Sex...Marital.Status, family=binomial, 
+     data = credit[i_calibration,])

Based on that model, it is possible to draw the ROC curve and to compute the AUC (on the validation dataset),

> fitLog <- predict(LogisticModel,type="response",
+                   newdata=credit[i_test,])
> library(ROCR)
> pred = prediction( fitLog, credit$Creditability[i_test])
> perf <- performance(pred, "tpr", "fpr")
> plot(perf)
> AUCLog1=performance(pred, measure = "auc")@y.values[[1]]
> cat("AUC: ",AUCLog1,"\n")
AUC:  0.7340997

An alternative is to consider a logistic regression on all explanatory variables

> LogisticModel <- glm(Creditability ~ ., 
+  family=binomial, 
+  data = credit[i_calibration,])

We might overfit here, and we should observe that on the ROC curve,

> fitLog <- predict(LogisticModel,type="response",
+                   newdata=credit[i_test,])
> pred = prediction( fitLog, credit$Creditability[i_test])
> perf <- performance(pred, "tpr", "fpr")
> plot(perf)
> AUCLog2=performance(pred, measure = "auc")@y.values[[1]]
> cat("AUC: ",AUCLog2,"\n")
AUC:  0.7609792

There is a slight improvement here, compared with the previous model, where only five explanatory variables were considered.

Consider now a classification tree (on all covariates),

> library(rpart)
> ArbreModel <- rpart(Creditability ~ ., 
+  data = credit[i_calibration,])

We can visualize the tree using

> library(rpart.plot)
> prp(ArbreModel,type=2,extra=1)

The ROC curve for that model is

> fitArbre <- predict(ArbreModel,
+                     newdata=credit[i_test,],
+                     type="prob")[,2]
> pred = prediction( fitArbre, credit$Creditability[i_test])
> perf <- performance(pred, "tpr", "fpr")
> plot(perf)
> AUCArbre=performance(pred, measure = "auc")@y.values[[1]]
> cat("AUC: ",AUCArbre,"\n")
AUC:  0.7100323

As expected, a single tree has a lower performance than the logistic regression. A natural idea is then to grow several trees using some bootstrap procedure, and then to aggregate those predictions.

> library(randomForest)
> RF <- randomForest(Creditability ~ .,
+ data = credit[i_calibration,])
> fitForet <- predict(RF,
+                     newdata=credit[i_test,],
+                     type="prob")[,2]
> pred = prediction( fitForet, credit$Creditability[i_test])
> perf <- performance(pred, "tpr", "fpr")
> plot(perf)
> AUCRF=performance(pred, measure = "auc")@y.values[[1]]
> cat("AUC: ",AUCRF,"\n")
AUC:  0.7682367

Here this model is (slightly) better than the logistic regression. Actually, if we create many training/validation samples, and compare the AUC, we can observe that – on average – random forests perform better than logistic regressions,

> AUC=function(i){
+   set.seed(i)
+   i_test=sample(1:nrow(credit),size=333)
+   i_calibration=(1:nrow(credit))[-i_test]
+   # logistic regression on all covariates
+   LogisticModel <- glm(Creditability ~ ., 
+    family=binomial, 
+    data = credit[i_calibration,])
+   fitLog <- predict(LogisticModel,type="response",
+                     newdata=credit[i_test,])
+   pred = prediction( fitLog, credit$Creditability[i_test])
+   AUCLog2=performance(pred, measure = "auc")@y.values[[1]] 
+   # random forest on all covariates
+   RF <- randomForest(Creditability ~ .,
+   data = credit[i_calibration,])
+   fitForet <- predict(RF,
+                       newdata=credit[i_test,],
+                       type="prob")[,2]
+   pred = prediction( fitForet, credit$Creditability[i_test])
+   AUCRF=performance(pred, measure = "auc")@y.values[[1]]
+   return(c(AUCLog2,AUCRF))
+ }
> A=Vectorize(AUC)(1:200)
> plot(t(A))
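
To summarize those 200 replications, we can for instance look at the average AUC of each model, and compare the two empirical distributions,

> apply(A,1,mean)   # average AUC: logistic regression, then random forest
> boxplot(t(A),names=c("logistic","random forest"))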

Correlations, dimension, and risk measure

Yesterday, while I was attending the IFM2 conference, at HEC Montreal, I heard a nice talk about credit risk, and a comparison between contagion (or at least default correlation), for corporate and retail companies (in the US). And it was mentioned that default correlation was much lower for retail companies than it could be for corporate risk. In a discussion that followed those slides, it was mentioned that banks in the US should actually have been working more with those small firms, since contagion risk was much lower.

A problem here is that the link between correlation, risk and dimension is rather complicated:

  • corporate means a small number of firms, high correlation (and possibly large individual losses)
  • retail means a large number of firms (perhaps even an extremely large one), lower correlation (and small individual losses)

A simple model for defaults is based on the assumption that we deal with an exchangeable portfolio (as in a previous post). With the following code, given an (individual) default probability, a default correlation, and a number of firms, it is possible to calculate the probability of having more than a given number of defaults.

proba=function(s,a,m,n){
  # P(S=s) in an exchangeable portfolio of n firms, with a Beta(a,b) latent factor
  # (b is chosen so that the mean of the latent factor is the default probability m)
  b=a/m-a
  choose(n,s)*integrate(function(t){t^s*(1-t)^(n-s)*
    dbeta(t,a,b)},lower=0,upper=1,subdivisions=1000,
    stop.on.error=FALSE)$value}

CDF=function(x=10,r=.4,m=.1,n=50){
  # P(S<=x), given the default correlation r and default probability m
  a=m*(1-r)/r
  V=rep(NA,n+1)
  for(i in 0:n) V[i+1]=proba(i,a,m,n)
  V=V/sum(V)
  return(sum(V[1:(x+1)]))}
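
For instance, with an individual default probability of 10% and a default correlation of 50%, the probability of observing more than 4 defaults among 20 firms is obtained as

1-CDF(x=4,r=.5,m=.1,n=20)   # P(more than 4 defaults out of 20)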

It is possible to calculate, for a large range of correlations, the probability of having more than 20% defaults in the portfolio (in order to compare things that are comparable).

A=seq(.01,.99,by=.01)
VQ=matrix(NA,length(A),2)
for(i in 1:length(A)){
  VQ[i,1]=1-CDF(r=A[i],x=4,n=20)
  VQ[i,2]=1-CDF(r=A[i],x=200,n=1000)}

With 20 firms (corporate), we look at the probability of having more than 4 defaults, while with 1,000 firms (retail) we look at more than 200 defaults. As mentioned in the previous post, the relationship between correlation and quantiles of sums is not simple, and it might not even be monotone. The dotted line is the probability of having more than 4 defaults when the default correlation is 50% (around 10%). The plain line is the probability of having more than 200 defaults, as a function of the correlation,

plot(A,VQ[,2],type="l",col="red",ylim=c(0,.22))
abline(h=VQ[50,1],lty=2,col="red")

In that case, with a correlation of only 10% among retail firms, the probability of having more than 20% defaults is the same as for the corporate portfolio with a 50% correlation… One should remember that in portfolio analysis, the link between correlation, dimension and risk measures is a sensitive issue…

Exchangeability, credit risk and risk measures

Exchangeability is an extremely useful concept, since (most of the time) analytical expressions can be derived. But it can also be used to observe some unexpected behaviors, that we will discuss later on with a more general setting. For instance, in an old post, I discussed connections between correlation and risk measures (using simulations to illustrate, but in the context of exchangeable risks, calculations can be performed more accurately). Consider again the standard credit risk problem, where the quantity of interest is the number of defaults in a portfolio. Consider a homogeneous portfolio of exchangeable risks. The quantity of interest is here the distribution of the number of defaults,

$$S_n = X_1+\cdots+X_n,$$

or perhaps the quantile function of that sum (since the Value-at-Risk is the standard risk measure). We have seen yesterday that, given the latent factor, $X_i\mid\Theta=\theta\sim\mathcal{B}(\theta)$ (either the company defaults, or not), so that

$$\mathbb{P}(S_n=s\mid\Theta=\theta)=\binom{n}{s}\theta^s(1-\theta)^{n-s},$$

i.e. we can derive the (unconditional) distribution of the sum,

$$\mathbb{P}(S_n=s)=\int_0^1\binom{n}{s}\theta^s(1-\theta)^{n-s}\,dF(\theta),$$

so that the probability function of the sum is, assuming that $\Theta\sim\mathcal{B}eta(a,b)$,

$$\mathbb{P}(S_n=s)=\binom{n}{s}\frac{B(a+s,b+n-s)}{B(a,b)}.$$

Thus, the following code can be used to calculate the quantile function

> proba=function(s,a,m,n){
+ b=a/m-a
+ choose(n,s)*integrate(function(t){t^s*(1-t)^(n-s)*
+ dbeta(t,a,b)},lower=0,upper=1,subdivisions=1000,
+ stop.on.error =  FALSE)$value
+ }
> QUANTILE=function(p=.99,a=2,m=.1,n=500){
+ V=rep(NA,n+1)
+ for(i in 0:n){
+ V[i+1]=proba(i,a,m,n)}
+ V=V/sum(V)
+ return(min(which(cumsum(V)>p))) }
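
As an illustration, the 99% quantile (Value-at-Risk) of the number of defaults, in a portfolio of 500 exchangeable risks with a 10% marginal default probability and a=2, is simply

> QUANTILE(p=.99,a=2,m=.1,n=500)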

Now observe that since the variates are exchangeable, it is possible to calculate explicitly the correlation of defaults. Here

$$\mathbb{E}[X_iX_j]=\mathbb{E}\big[\mathbb{E}[X_iX_j\mid\Theta]\big]=\mathbb{E}[\Theta^2],$$

i.e.

$$\text{Cov}(X_i,X_j)=\mathbb{E}[\Theta^2]-\mathbb{E}[\Theta]^2=\text{Var}(\Theta).$$

Thus, the correlation between two default indicators is then

$$\text{corr}(X_i,X_j)=\frac{\text{Cov}(X_i,X_j)}{\sqrt{\text{Var}(X_i)\,\text{Var}(X_j)}}=\frac{\text{Var}(\Theta)}{\mathbb{E}[\Theta](1-\mathbb{E}[\Theta])}.$$

Under the assumption that the latent factor is beta distributed,

$$\Theta\sim\mathcal{B}eta(a,b),$$

we get

$$\text{corr}(X_i,X_j)=\frac{1}{a+b+1}.$$
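
That closed-form expression is easy to check by simulation (a small sanity check, with assumed parameters a=2 and b=18, so that the mean default probability is 10%),

> a=2; b=18                # Beta parameters, mean a/(a+b)=.1
> Theta=rbeta(1e6,a,b)     # latent default probabilities
> X1=(runif(1e6)<Theta)*1  # two conditionally independent default indicators
> X2=(runif(1e6)<Theta)*1
> cor(X1,X2)               # should be close to 1/(a+b+1), i.e. 1/21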

Thus, as a function of the parameter of the beta distribution (we consider beta distributions with the same mean, i.e. the same marginal distributions, so we have only one parameter left, which directly drives the correlation of the default indicators), it is possible to plot the quantile function,

> PICTURE=function(P){
+ A=seq(.01,2,by=.01)
+ VQ=matrix(NA,length(A),5)
+ for(i in 1:length(A)){
+ VQ[i,1]=QUANTILE(a=A[i],p=.9,m=P)
+ VQ[i,2]=QUANTILE(a=A[i],p=.95,m=P)
+ VQ[i,3]=QUANTILE(a=A[i],p=.975,m=P)
+ VQ[i,4]=QUANTILE(a=A[i],p=.99,m=P)
+ VQ[i,5]=QUANTILE(a=A[i],p=.995,m=P)
+ }
+ plot(A,VQ[,5],type="s",col="red",ylim=
+ c(0,max(VQ)),xlab="",ylab="")
+ lines(A,VQ[,4],type="s",col="blue")
+ lines(A,VQ[,3],type="s",col="black")
+ lines(A,VQ[,2],type="s",col="blue",lty=2)
+ lines(A,VQ[,1],type="s",col="red",lty=2)
+ lines(A,rep(500*P,length(A)),col="grey")
+ legend("topright",c("quantile 99.5%","quantile 99%",
+ "quantile 97.5%","quantile 95%","quantile 90%","mean"),
+ col=c("red","blue","black","blue","red","grey"),
+ lty=c(1,1,1,2,2,1),bty="n")
+}

e.g. with a (marginal) default probability of 15%,

> PICTURE(.15)

On this graph, we observe that the stronger the correlation (the more to the left), the higher the quantile… Note that the same graph can be plotted with the correlation on the X-axis,


Which is quite intuitive, somehow. But if the marginal probability of default decreases, increasing the correlation might decrease the risk (i.e. the quantile function),

> PICTURE(.05)

(with the modified code to visualize the quantile as a function of the underlying default correlation) or even worse,

> PICTURE(.0075)

And it becomes all the more counterintuitive as the default probability decreases! So in the case of a portfolio of not-very-risky bond issuers (with high ratings), assuming a very strong correlation will lower the risk-based capital!

de Finetti’s theorem and exchangeability

This week, we will start to work on multivariate models, and non-independence. The first idea to discuss non-independence will be to use the concept of exchangeability. A sequence of random variables $(X_1,X_2,\dots)$ is said to be exchangeable if, for all $n$,

$$(X_1,\dots,X_n)\stackrel{\mathcal{L}}{=}(X_{\sigma(1)},\dots,X_{\sigma(n)})$$

for any permutation $\sigma$ of $\{1,\dots,n\}$. A standard example is the Gaussian case, where $(X_1,\dots,X_n)$ is a Gaussian vector, with

$$\mathbb{E}[X_i]=\mu,\quad\text{Var}(X_i)=\sigma^2\quad\text{and}\quad\text{Cov}(X_i,X_j)=\rho\sigma^2,\ i\neq j.$$

Since $\text{Var}(X_1+\cdots+X_n)\geq 0$, a necessary condition is that

$$n\sigma^2\big[1+(n-1)\rho\big]\geq 0,$$

i.e.

$$\rho\geq-\frac{1}{n-1}.$$

Since this inequality should hold for all $n$, it comes that necessarily $\rho\geq 0$.
de Finetti (1931): Let $(X_1,X_2,\dots)$ be a sequence of random variables with values in $\{0,1\}$. The sequence is exchangeable if and only if there exists a distribution function $F$ on $[0,1]$ such that

$$\mathbb{P}(X_1=x_1,\dots,X_n=x_n)=\int_0^1\theta^{s_n}(1-\theta)^{n-s_n}\,dF(\theta),$$

where $s_n=x_1+\cdots+x_n$. Note that $F$ is the distribution function of the random variable

$$\Theta=\lim_{n\to\infty}\frac{1}{n}\sum_{i=1}^nX_i.$$

A nice proof of that result can be found in Heath & Sudderth (1995) – see also Schervish (1995), Chow & Teicher (1997) or Durrett (2010), and probably in several Bayesian books, because that result has a strong interpretation in Bayesian inference (as far as I understood, see e.g. Jaynes (1982)).
From the exchangeability condition, for any permutation $\sigma$ of $\{1,\dots,k\}$,

$$\mathbb{P}(X_1=x_1,\dots,X_k=x_k)=\mathbb{P}(X_{\sigma(1)}=x_1,\dots,X_{\sigma(k)}=x_k),$$

that can be inverted as

$$\mathbb{P}(X_1=x_1,\dots,X_k=x_k)=\mathbb{P}(X_1=x_{\sigma(1)},\dots,X_k=x_{\sigma(k)}).$$

The idea is then to extend the size of the vector, i.e. for all $n\geq k$, define

$$S_n=X_1+\cdots+X_n,$$

so that, if we condition on $S_n$,

$$\mathbb{P}(X_1=x_1,\dots,X_k=x_k)=\sum_s\mathbb{P}(X_1=x_1,\dots,X_k=x_k\mid S_n=s)\,\mathbb{P}(S_n=s),$$

but since, given the sum of the components of $(X_1,\dots,X_n)$, all possible rearrangements of the ones among the $n$ elements are equally likely, the conditional probability is hypergeometric. The first idea is to work on that hypergeometric term, and to invoke a theorem of approximation of the hypergeometric distribution $\mathcal{H}(n,s,k)$ by a binomial distribution $\mathcal{B}(k,s/n)$, when $n$ becomes large. Then

$$\mathbb{P}(X_1=x_1,\dots,X_k=x_k\mid S_n=s)\approx\left(\frac{s}{n}\right)^{s_k}\left(1-\frac{s}{n}\right)^{k-s_k},\quad\text{where } s_k=x_1+\cdots+x_k.$$

Let $\Theta_n=S_n/n$ and let $F_n$ denote the cumulative distribution function of $\Theta_n$. The idea is then to write the sum as an integral, with respect to that distribution,

$$\mathbb{P}(X_1=x_1,\dots,X_k=x_k)\approx\int_0^1\theta^{s_k}(1-\theta)^{k-s_k}\,dF_n(\theta).$$

The theorem is then obtained since $F_n\to F$, i.e.

$$\mathbb{P}(X_1=x_1,\dots,X_k=x_k)=\int_0^1\theta^{s_k}(1-\theta)^{k-s_k}\,dF(\theta).$$

In the case of non-binary sequences, there is an extension of the previous result.
Hewitt & Savage (1955): Let $(X_1,X_2,\dots)$ be a sequence of random variables with values in $\mathcal{X}$. The sequence is exchangeable if and only if there exists a measure $\pi$ on $\mathcal{P}(\mathcal{X})$, the set of probability measures on $\mathcal{X}$, such that

$$\mathbb{P}(X_1\in A_1,\dots,X_n\in A_n)=\int_{\mathcal{P}(\mathcal{X})}\mu(A_1)\cdots\mu(A_n)\,\pi(d\mu),$$

where $\pi$ is the distribution of the limit of the empirical measure

$$\mu_n=\frac{1}{n}\sum_{i=1}^n\delta_{X_i}.$$

For instance, in the Gaussian case mentioned earlier, conditionally on a latent variable $M\sim\mathcal{N}(\mu,\rho\sigma^2)$, the $X_i$’s are conditionally independent, with distribution $\mathcal{N}(M,(1-\rho)\sigma^2)$. The proof can be found in Kingman (1978) and is based on martingale arguments.
Note that in the Gaussian case, $X_i=M+\varepsilon_i$, where the $\varepsilon_i$’s are i.i.d. random variables. To go further on exchangeability and related topics, see Aldous (1985) (see also here).
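
As a quick numerical illustration of that representation (a small sketch, with an arbitrary correlation of 30%), we can simulate exchangeable Gaussian vectors and check the pairwise correlation,

> rho=.3; n=5; nsim=1e5
> Z=rnorm(nsim)   # common latent factor
> X=sqrt(rho)*Z+sqrt(1-rho)*matrix(rnorm(nsim*n),nsim,n)
> cor(X)[1,2]     # should be close to rho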
This construction can be used in credit risk, to model defaults in a homogeneous portfolio, see e.g. Frey (2001).

 

Assuming a Beta distribution for the latent factor, we can derive the probability distribution of the sum

$$S_n=X_1+\cdots+X_n.$$

Since, given the latent factor, $X_i\mid\Theta=\theta\sim\mathcal{B}(\theta)$ (either the company defaults, or not),

$$S_n\mid\Theta=\theta\sim\mathcal{B}(n,\theta),$$

i.e.

$$\mathbb{P}(S_n=s\mid\Theta=\theta)=\binom{n}{s}\theta^s(1-\theta)^{n-s}.$$

Thus, we can derive the (unconditional) distribution of the sum,

$$\mathbb{P}(S_n=s)=\int_0^1\binom{n}{s}\theta^s(1-\theta)^{n-s}\,dF(\theta),$$

i.e., when $\Theta\sim\mathcal{B}eta(a,b)$,

$$\mathbb{P}(S_n=s)=\binom{n}{s}\frac{B(a+s,b+n-s)}{B(a,b)}.$$

> proba=function(s,a,m,n){
+ b=a/m-a
+ choose(n,s)*integrate(function(t){t^s*(1-t)^(n-s)*
+ dbeta(t,a,b)},lower=0,upper=1)$value
+ }

Based on that function, it is possible to plot the probability distribution over $\{0,1,\dots,n\}$. In the upper corner is plotted the density of the Beta distribution.

> a=2
> m=.2
> n=10
> V=rep(NA,n+1)
> for(i in 0:n) V[i+1]=proba(i,a,m,n)
> barplot(V,names.arg=0:10)


Those two theorems are extremely close.

De Finetti’s theorem: a random sequence $(X_i)$ of $\{0,1\}$-valued random variables is exchangeable if and only if the $X_i$’s are conditionally independent, conditionally on some random variable $\Theta$.

Hewitt-Savage’s theorem: a random sequence $(X_i)$ is exchangeable if and only if the $X_i$’s are conditionally independent, conditionally on some sigma-algebra $\mathcal{F}$.

Olshen (1974) proposed an interesting discussion of those theorems; see also the entry in the Encyclopedia of Statistical Sciences.


The subtle difference between those two theorems is also discussed in Freedman (1965).


Poor Americans

During our vacation in the Aubrac with friends, Christian had bought Libé and, the next morning, I mechanically started leafing through it (while sipping my coffee). I came across the following paragraph, which kept my attention for several days…

The author is not just anyone: Kenneth Rogoff (here), a leading specialist of the American economy. Let us reread the sentence, to better understand what he is saying: for “25% of homeowners in the United States” […] “the value of their house would be lower than their mortgage”1. Allow me to rewrite the sentence as follows: “for a quarter of American homeowners who have not finished paying off their mortgage, selling their house would not allow them to repay their loan” (that is, in any case, how I understand it).
This little sentence could be interesting. In any case, it seems important in the argument that Americans are far too indebted2. But in what way is 25% really exceptional, incredible, or even worrying? What would be the acceptable, or normal, percentage that we should expect?
Having no statistics on the subject, let us do some calculations.

  • a bit of discounting arithmetic for loans

Intuitively, if a buyer borrows with a small down payment, and over a long period, the loan will cost a lot, possibly more than the house. At least at the beginning. Because over time, the value of the loan decreases, while the price of the house usually increases.
Consider a house of value 1 (at purchase, to simplify things, and to reason in percentages, for the down payment for instance). We have an initial capital $\alpha$ (the down payment), we take out a loan over a period $T$, and we assume that the loan rate is $r$ and that the inflation rate is $i$ (the value of the house can increase over time, but can also decrease if $i$ is negative). At date $t$, on the asset side, the owner has the house, with value $(1+i)^t$ (what he would get if he resold it, ignoring the associated costs); on the liability side, he still owes the bank an amount $(T-t)\,\kappa$, where $\kappa$ is the constant annual repayment, i.e. the solution of

$$\sum_{j=0}^{T-1}\frac{\kappa}{(1+r)^{j}}=1-\alpha.$$

To do things properly, we should include the notary fees (say 7% of the value of the house), denoted $\delta$ here,

$$\sum_{j=0}^{T-1}\frac{\kappa}{(1+r)^{j}}=1-\alpha+\delta.$$

The value of the house is lower than the value of the loan if

$$(1+i)^t\,(1-\delta)<(T-t)\,\kappa$$

(the notary fees being paid at purchase, as mentioned above, but also deducted in case of resale3). The calculation is easily done in R,

valeur = function(t,T,a,r=.05,i=0,delta=.07){
  k=(1-a+delta)/sum(1/(1+r)^(0:(T-1)))  # constant annual repayment
  s=(1+i)^t                             # value of the house at date t
  v=(T-t)*k                             # amount still owed to the bank
  return(c(s*(1-delta),v))}             # net resale value, and value of the loan

For example, if $t$ is zero, we compare the value of the loan with the value of the house at the time of purchase. For someone with a 25% down payment, taking a loan with 20 installments (over 20 years) and starting repayment on the day of signing, the value of the loan (on a borrowed amount of 0.75) is about 1.2533, if the loan rate is around 5%. That is more than the (gross) value of the house (here 1), and even much more than what reselling the house would bring, namely 0.93, which would not allow the loan to be repaid…

> valeur(0,20,.25,.05,0)
[1] 0.9300 1.2533

Over the same duration, seen after the 5th installment (i.e. after 25% of the installments), still with an initial down payment of 25%, the value of the loan still owed to the bank is around 0.94, that is, more or less the resale value of the house if there is no inflation (or no loss of value of the property).

> valeur(5,20,.25,.05,0)
[1] 0.9300000 0.9399846

In other words, in a world with zero inflation, with cohorts of buyers constant over time, who would take out 20-year loans with a down payment of 25% of the value of the house, 25% of borrowers have, on average, a loan to repay that exceeds the resale value of their house, just as the article says. This proportion increases

  • when borrowing rates increase
  • when the duration of the loans increases
  • when the initial down payment decreases

But let us try to visualize all this,

  • visualizing the values of the loan, and of the house

dessin=function(T=20,a=.333,r=.05,i=.02,delta=.07){
S=V=rep(NA,T)
for(j in 1:T){
S[j]=valeur(j-1,T,a,r,i,delta)[1]  # value of the house
V[j]=valeur(j-1,T,a,r,i,delta)[2]} # value of the loan
YL=range(S,V)
plot(1:T-.5,V,type="b",col="red",ylim=YL)
lines(1:T-.5,S,col="blue",type="b")
}
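
The graphs below can then be reproduced with calls such as the following (the exact parameter values behind the original figures are assumptions),

> dessin(T=20,a=1/3,r=.05,i=.02)  # baseline: 5% rate, one-third down payment, 2% inflation
> dessin(T=20,a=1/3,r=.03,i=.02)  # lower borrowing rate (3%)
> dessin(T=20,a=1/3,r=.07,i=.02)  # higher borrowing rate (7%)
> dessin(T=20,a=.5,r=.05,i=.02)   # 50% down payment
> dessin(T=20,a=1/3,r=.05,i=0)    # no inflation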

As seen in the figure below, the proportion of buyers for whom the value of the loan exceeds the value of the house is around 20% (even with a non-negligible down payment, here one third, and non-zero inflation, here 2%). This can be seen in the graph below, with the value of the house in blue and the value of the loan in red,

We can also vary the different parameters, such as the borrowing rate, with a decrease (from 5% to 3%),

or an increase (from 5% to 7%),

We can also change the initial down payment (raising it to 50%),

Finally, we can remove inflation, and assume that the price of the house does not really increase…

The moral? 25% does indeed seem high, too high (for a healthy economy). But let us not fool ourselves: a reasonable (or viable) percentage would seem to be closer to 15% than to 0%.

  • from mortgage loans to car loans

That said, 25% would be a relatively low percentage if we looked not at mortgage loans but at car loans. Compared with the previous situation, we are in a case where rates are high, and where the value of the asset keeps depreciating. On the other hand, the duration is often shorter. A 10% deflation rate is perhaps not the best possible model of the vehicle’s loss of value but, as a first approximation, it should do…
Graphically, we get
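
(a hypothetical call: for instance a 5-year loan, no down payment, no transaction fees, a 7% rate, and a 10% annual loss of value)

> dessin(T=5,a=0,r=.07,i=-.10,delta=0)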

In short, in the case of car loans (where the buyer finances the entire purchase on credit), in a normal situation between 70% and 80% of buyers of cars on credit are in a position where reselling their car would not repay their loan… Shouldn’t we worry about that as well? Isn’t buying on credit a good whose value keeps falling dangerous?

1 at the beginning of the summer, talking with couples of friends, two of whom had just obtained teaching positions at the other end of France (and had to resell their house), I was surprised to see that when they spoke of “not losing money on the resale”, they valued the house at the initial price, to which they added the notary fees, but forgot the cost of the loan.
2 I will limit myself here to discussing this 25% figure, not to deciding whether it is serious that reselling the house would not repay the loan.
3 I prefer to take these fees into account because otherwise, as I already mentioned here, buying a house always seems like a winning operation.