GLMs: link vs. distribution

Usually, when I give a course on GLMs, I try to insist on the fact that the link function is probably more important than the distribution. In order to illustrate, consider the following dataset, with 5 observations

x = c(1,2,3,4,5)
y = c(1,2,4,2,6)
base = data.frame(x,y)

Then consider several models, with various distributions, and either an identity link (in which case \mathbb{E}[Y|\mathbf{X}=\mathbf{x}]=\mathbf{x}^T\boldsymbol{\beta}) or a log link (so that \mathbb{E}[Y|\mathbf{X}=\mathbf{x}]=e^{\mathbf{x}^T\boldsymbol{\beta}})

regNId = glm(y~x,family=gaussian(link="identity"),data=base)
regNlog = glm(y~x,family=gaussian(link="log"),data=base)
regPId = glm(y~x,family=poisson(link="identity"),data=base)
regPlog = glm(y~x,family=poisson(link="log"),data=base)
regGId = glm(y~x,family=Gamma(link="identity"),data=base)
regGlog = glm(y~x,family=Gamma(link="log"),data=base)
regIGId = glm(y~x,family=inverse.gaussian(link="identity"),data=base)
regIGlog = glm(y~x,family=inverse.gaussian(link="log"),data=base)

One can also consider some Tweedie distribution, to be even more general

library(statmod)
regTwId = glm(y~x,family=tweedie(var.power=1.5,link.power=1),data=base)
regTwlog = glm(y~x,family=tweedie(var.power=1.5,link.power=0),data=base)

Consider the predictions obtained in the first case, with the identity link function

library(RColorBrewer)
darkcols = brewer.pal(8, "Dark2")
plot(x,y,pch=19)
abline(regNId,col=darkcols[1])
abline(regPId,col=darkcols[2])
abline(regGId,col=darkcols[3])
abline(regIGId,col=darkcols[4])
abline(regTwId,lty=2)

The predictions are very very close, aren't they? In the case of the exponential prediction (log link), we obtain

plot(x,y,pch=19)
u=seq(.8,5.2,by=.01)
lines(u,predict(regNlog,newdata=data.frame(x=u),type="response"),col=darkcols[1])
lines(u,predict(regPlog,newdata=data.frame(x=u),type="response"),col=darkcols[2])
lines(u,predict(regGlog,newdata=data.frame(x=u),type="response"),col=darkcols[3])
lines(u,predict(regIGlog,newdata=data.frame(x=u),type="response"),col=darkcols[4])
lines(u,predict(regTwlog,newdata=data.frame(x=u),type="response"),lty=2)

We can actually look closer. For instance, in the identity-link case, consider the slope obtained with a Tweedie model (which actually includes all the parametric families mentioned here)

pente=function(gamma) summary(glm(y~x,family=tweedie(var.power=gamma,link.power=1),data=base))$coefficients[2,1:2]
Vgamma = seq(-.5,3.5,by=.05)
Vpente = Vectorize(pente)(Vgamma)
plot(Vgamma,Vpente[1,],type="l",lwd=3,ylim=c(.965,1.03),xlab="power",ylab="slope")

The slope here is always very very close to one! Even more so if we add a confidence interval

plot(Vgamma,Vpente[1,])
lines(Vgamma,Vpente[1,]+1.96*Vpente[2,],lty=2)
lines(Vgamma,Vpente[1,]-1.96*Vpente[2,],lty=2)

Heuristically, for the Gamma regression, or the Inverse Gaussian one, since the variance is a power of the mean, if the prediction is small (here, on the left), the variance should be small. So, on the left of the graph, the error should be smaller when the power of the variance function is higher. And that is indeed what we observe here

erreur=function(gamma) predict(glm(y~x,family=tweedie(var.power=gamma,link.power=1),data=base),newdata=data.frame(x=1),type="response")-y[x==1] 
Verreur = Vectorize(erreur)(Vgamma)
plot(Vgamma,Verreur,type="l",lwd=3,ylim=c(-.1,.04),xlab="power",ylab="error")
abline(h=0,lty=2)

Of course, we can do the same with the exponential models

pente=function(gamma) summary(glm(y~x,family=tweedie(var.power=gamma,link.power=0),data=base))$coefficients[2,1:2]
Vpente = Vectorize(pente)(Vgamma)
plot(Vgamma,Vpente[1,],type="l",lwd=3)

or, if we add the confidence bands, we obtain

plot(Vgamma,Vpente[1,],ylim=c(0,.8),type="l",lwd=3,xlab="power",ylab="slope")
lines(Vgamma,Vpente[1,]+1.96*Vpente[2,],lty=2)
lines(Vgamma,Vpente[1,]-1.96*Vpente[2,],lty=2)

So here also, the “slope” is rather similar… And if we look at the error we make on the left part of the graph, we obtain

erreur=function(gamma) predict(glm(y~x,family=tweedie(var.power=gamma,link.power=0),data=base),newdata=data.frame(x=1),type="response")-y[x==1] 
Verreur = Vectorize(erreur)(Vgamma)
plot(Vgamma,Verreur,type="l",lwd=3,ylim=c(.001,.32),xlab="power",ylab="error")

So my point is that the distribution is usually not the most important part of GLMs, even if chapters of books on GLMs are organized by distribution… But as mentioned in another post, if you consider a nonlinear transformation, as with GAMs, the story is more complicated…
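
To give a hint of what happens with a nonlinear transformation, here is a minimal sketch (using the mgcv package; with only five observations the basis dimension of the smooth has to be kept very small, so k=3 below is purely illustrative) comparing the fitted curves of two GAMs with the same log link but different families,

library(mgcv)
# two GAMs on the same toy dataset, same (log) link, different families
gamP = gam(y~s(x,k=3),family=poisson(link="log"),data=base)
gamG = gam(y~s(x,k=3),family=Gamma(link="log"),data=base)
u = seq(.8,5.2,by=.01)
plot(x,y,pch=19)
lines(u,predict(gamP,newdata=data.frame(x=u),type="response"),col=darkcols[2])
lines(u,predict(gamG,newdata=data.frame(x=u),type="response"),col=darkcols[3])

Once the predictor is nonlinear, the estimated smooth (and the amount of smoothing selected) depends on the assumed variance function, so the choice of the family starts to matter more than in the linear case.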

Bailey (1963) and Poisson regression on two factors

Consider the following dataset, from A Theory of Extramarital Affairs, by Ray Fair, published in 1978 in the Journal of Political Economy, with 563 observations and nine variables: eight covariates, and the variable of interest, the number of extramarital affairs over a year,

base = read.table("http://freakonometrics.free.fr/baseaffairs.txt",header=TRUE)
str(base)
'data.frame':	563 obs. of  9 variables:
 $ SEX         : int  1 0 0 1 1 0 0 1 0 1 ...
 $ AGE         : num  37 27 32 57 22 32 22 57 32 22 ...
 $ YEARMARRIAGE: num  10 4 15 15 0.75 1.5 0.75 15 15 1.5 ...
 $ CHILDREN    : int  0 0 1 1 0 0 0 1 1 0 ...
 $ RELIGIOUS   : int  3 4 1 5 2 2 2 2 4 4 ...
 $ EDUCATION   : int  18 14 12 18 17 17 12 14 16 14 ...
 $ OCCUPATION  : int  7 6 1 6 6 5 1 4 1 4 ...
 $ SATISFACTION: int  4 4 4 5 3 5 3 4 2 5 ...
 $ Y           : int  0 0 0 0 0 0 0 0 0 0 ...

Let us focus on two categorical covariates, related to the importance of religion and to the occupation

df=data.frame(y=base$Y,
              religion=as.factor(base$RELIGIOUS),
              occupation=as.factor(base$OCCUPATION),
              expo = 1)
(E=xtabs(expo~religion+occupation,data=df))
        occupation
religion  1  2  3  4  5  6  7
       1  4  1  8  4 16  9  0
       2 23  3 11 17 56 36  6
       3 29  1 10 12 39 25  2
       4 38  7 12 21 59 44  2
       5 13  1  3 10 19 19  3
(N=xtabs(y~religion+occupation,data=df))
        occupation
religion  1  2  3  4  5  6  7
       1  4  1 13  3 13  7  0
       2  1  1 13 10 25 43 10
       3 15  0 12 11 34 35  1
       4 24  1  3 15 11  9 10
       5  6  0  0  6 11  7  0

The two tables above are the exposure (number of observations) and the number of extramarital affairs, here as contingency tables. Without any covariate, one can assume that N_{i,j}\sim\mathcal{P}(\lambda\cdot E_{i,j}), where \lambda would be

sum(N)/sum(E)
[1] 0.6305506

The idea of the margin method is to assume that N_{i,j}=E_{i,j}\cdot\lambda_{i,j}, where \lambda_{i,j}=A_i\cdot B_j. Bailey (1963) added two series of constraints: per row, \sum_j N_{i,j}=\sum_j E_{i,j}\cdot A_i\cdot B_j for any i, and similarly, per column, \sum_i N_{i,j}=\sum_i E_{i,j}\cdot A_i\cdot B_j for any j. From the first series of constraints, write A_i=\frac{\sum_j N_{i,j}}{\sum_j E_{i,j}\cdot B_j}, and use the second series to write B_j=\frac{\sum_i N_{i,j}}{\sum_i E_{i,j}\cdot A_i}. Because we need the A_i's to compute the B_j's, and conversely, it is natural to use an iterative procedure to solve the system. Observe that the solution is not unique: the A_i's and B_j's are only identified up to a multiplicative constant.

Consider here some starting values for the A_i's and the B_j's

A=rep(1,length(levels(df$religion)))
B=rep(1,length(levels(df$occupation)))*sum(N)/sum(E)
A
[1] 1 1 1 1 1
B
[1] 0.6305506 0.6305506 0.6305506 0.6305506 0.6305506 0.6305506 0.6305506

The predicted number of extramarital affairs would be \hat N_{i,j}=E_{i,j}\cdot\hat A_i\cdot \hat B_j

E * A%*%t(B)
        occupation
religion          1          2          3          4          5          6          7
       1  2.5222025  0.6305506  5.0444050  2.5222025 10.0888099  5.6749556  0.0000000
       2 14.5026643  1.8916519  6.9360568 10.7193606 35.3108348 22.6998224  3.7833037
       3 18.2859680  0.6305506  6.3055062  7.5666075 24.5914742 15.7637655  1.2611012
       4 23.9609236  4.4138544  7.5666075 13.2415631 37.2024867 27.7442274  1.2611012
       5  8.1971581  0.6305506  1.8916519  6.3055062 11.9804618 11.9804618  1.8916519
sum(B*E[1,])
[1] 26.48313
sum(B*E[2,])
[1] 95.84369
apply(t(B*t(E)),1,sum)
        1         2         3         4         5 
 26.48313  95.84369  74.40497 115.39076  42.87744 
sum(A*E[,1])
[1] 107
sum(A*E[,2])
[1] 13
apply(A*E,2,sum)
  1   2   3   4   5   6   7 
107  13  44  64 189 133  13

From the expressions above, observe that one can easily write the A_i's as functions of the B_j's, and the B_j's as functions of the A_i's

A=apply(N,1,sum)/apply(t(B*t(E)),1,sum)
B=apply(N,2,sum)/apply(A*E,2,sum)

Let us iterate one thousand times

for(i in 1:1000){
  A=apply(N,1,sum)/apply(t(B*t(E)),1,sum)
  B=apply(N,2,sum)/apply(A*E,2,sum)
}

We obtain here

A
        1         2         3         4         5 
1.5404346 1.0447195 1.4825650 0.6553159 0.6634763 
B
        1         2         3         4         5         6         7 
0.4685515 0.2629769 0.8454435 0.7245310 0.4889697 0.7770553 1.6753750 
E * A%*%t(B)
        occupation
religion          1          2          3          4          5          6          7
       1  2.8870914  0.4050987 10.4188024  4.4643702 12.0516123 10.7730250  0.0000000
       2 11.2586111  0.8242113  9.7157637 12.8678376 28.6068235 29.2249717 10.5017811
       3 20.1450811  0.3898804 12.5342484 12.8899708 28.2722423 28.8008726  4.9677044
       4 11.6678702  1.2063307  6.6483904  9.9707299 18.9053460 22.4055332  2.1957997
       5  4.0413463  0.1744790  1.6827951  4.8070914  6.1639760  9.7955975  3.3347148

That is our prediction, per category, of the number of affairs. Observe that here, sums per row are equal to observed numbers,

apply(N,1,sum)
  1   2   3   4   5 
 41 103 108  73  30 
apply(E * A%*%t(B),1,sum)
  1   2   3   4   5 
 41 103 108  73  30

as well as the sums per column

apply(N,2,sum)
  1   2   3   4   5   6   7 
 50   3  41  45  94 101  21 
apply(E * A%*%t(B),2,sum)
  1   2   3   4   5   6   7 
 50   3  41  45  94 101  21

Now, why should I mention that here, in the section on Poisson regression in our course? Because actually, this is exactly what we get if we run a Poisson regression on those two covariates

reg=glm(y~religion+occupation,data=df,family=poisson)
summary(reg)
Coefficients:
            Estimate Std. Error z value Pr(>|z|)    
(Intercept) -0.32604    0.21325  -1.529 0.126285    
religion2   -0.38832    0.18791  -2.066 0.038783 *  
religion3   -0.03829    0.18585  -0.206 0.836771    
religion4   -0.85470    0.19757  -4.326 1.52e-05 ***
religion5   -0.84233    0.24416  -3.450 0.000561 ***
occupation2 -0.57758    0.59549  -0.970 0.332083    
occupation3  0.59022    0.21349   2.765 0.005699 ** 
occupation4  0.43588    0.20603   2.116 0.034381 *  
occupation5  0.04265    0.17590   0.242 0.808399    
occupation6  0.50587    0.17360   2.914 0.003569 ** 
occupation7  1.27415    0.26298   4.845 1.27e-06 ***
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

First of all, observe that the total sum of predictions equals the total sum of observations

yp = predict(reg,type="response")
sum(yp)
[1] 355
sum(df$y)
[1] 355

But actually, the predicted number of affairs, for our 35 classes, is exactly what we got using Bailey’s technique

xtabs(yp~df$religion+df$occupation)
           df$occupation
df$religion          1          2          3          4          5          6          7
          1  2.8870914  0.4050987 10.4188024  4.4643702 12.0516123 10.7730250  0.0000000
          2 11.2586112  0.8242113  9.7157637 12.8678376 28.6068235 29.2249717 10.5017811
          3 20.1450813  0.3898804 12.5342484 12.8899708 28.2722424 28.8008726  4.9677044
          4 11.6678703  1.2063307  6.6483904  9.9707300 18.9053460 22.4055332  2.1957997
          5  4.0413464  0.1744790  1.6827951  4.8070914  6.1639761  9.7955975  3.3347148
E * A%*%t(B)
        occupation
religion          1          2          3          4          5          6          7
       1  2.8870914  0.4050987 10.4188024  4.4643702 12.0516123 10.7730250  0.0000000
       2 11.2586111  0.8242113  9.7157637 12.8678376 28.6068235 29.2249717 10.5017811
       3 20.1450811  0.3898804 12.5342484 12.8899708 28.2722423 28.8008726  4.9677044
       4 11.6678702  1.2063307  6.6483904  9.9707299 18.9053460 22.4055332  2.1957997
       5  4.0413463  0.1744790  1.6827951  4.8070914  6.1639760  9.7955975  3.3347148

To be more specific, up to a multiplicative constant, the two series of coefficients are equal here, e.g. for the A_i's

a=exp(coefficients(reg)[1]+c(0,coefficients(reg)[2:5]))
a/a[1]
          religion2 religion3 religion4 religion5 
1.0000000 0.6781979 0.9624329 0.4254098 0.4307072 
A/A[1]
        1         2         3         4         5 
1.0000000 0.6781979 0.9624329 0.4254098 0.4307072

but also for the B_j's

b=exp(coefficients(reg)[1]+c(0,coefficients(reg)[6:11]))
b/b[1]
            occupation2 occupation3 occupation4 occupation5 occupation6 occupation7 
  1.0000000   0.5612551   1.8043769   1.5463210   1.0435773   1.6584203   3.5756477 
B/B[1]
        1         2         3         4         5         6         7 
1.0000000 0.5612551 1.8043770 1.5463210 1.0435773 1.6584203 3.5756478

This will have major implications in non-life insurance models (for claims reserving).
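
This is, for instance, what lies behind the chain-ladder technique: on an incremental run-off triangle, a Poisson regression on the origin year and the development period (the same two-factor construction as above) gives the chain-ladder forecasts for the missing cells. A minimal sketch, on a made-up 3x3 triangle,

# toy incremental run-off triangle (made-up numbers), in long format
triangle = data.frame(
  origin = factor(rep(1:3,each=3)),
  dev    = factor(rep(1:3,times=3)),
  inc    = c(100,50,25, 110,60,NA, 120,NA,NA))
# Poisson regression on the two factors (the NA cells are simply dropped)
regCL = glm(inc~origin+dev,family=poisson,data=triangle)
# predicted increments for the future (lower-right) cells
predict(regCL,newdata=triangle[is.na(triangle$inc),],type="response")
# chain-ladder, for comparison: the development factors obtained from the
# cumulated triangle, (150+170)/(100+110) and 175/150, complete the triangle
# with exactly the same values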

Je code, donc je suis

On Wednesday, November 21, the 2018 edition of the HumanIA Colloquium will take place at the Agora Hydro-Québec of the Complexe des sciences Pierre-Dansereau at UQAM, starting at 9:30 am. As part of the "La France à l'UQAM" week, the Colloquium will be followed, in the afternoon, by a debate on the theme "Artificial intelligence: is error only human?". I will take part in a workshop during the morning, on the theme "je code, donc je suis" ("I code, therefore I am"). Here are a few links to feed the discussion,

Rencontres Mutualistes

On Monday and Tuesday, I will be in Beaune, in Burgundy, for the first Rencontres Mutualistes. I have been asked to speak at the opening of the second day, on the theme "segmentation and mutualisation".

The slides are now online. Since I have little time, I will go back over the main principles of ratemaking and the role of the actuary. I then thought that a discussion around the following graph could be interesting, in particular about the two bounds, the lower one ('average pricing') and the upper one ('perfect pricing')

We will conclude with a quick look back at the pricing games.

The “probability to win” is hard to estimate…

Real-time computation (or estimation) of the "probability to win" is difficult. We've seen that in soccer games, in elections… but actually, as a professor, I see that frequently when I grade my students.

Consider a classical multiple choice exam. After each question, imagine that you try to compute the probability that the student will pass. Consider here the case where we have 50 questions. Students pass when they have 25 correct answers, or more. Just for simulations, I will assume that students just flip a coin at each question… I have n students, and 50 questions

set.seed(1)
n=10
M=matrix(sample(0:1,size=n*50,replace=TRUE),50,n)

Let X_{i,j} denote the score of student i at question j. Let S_{i,j} denote the cumulative score, i.e. S_{i,j}=X_{i,1}+\cdots+X_{i,j}. At step j, I can get some sort of prediction of the final score, using \hat{T}_{i,j}=50\times S_{i,j}/j. Here is the code

SM=apply(M,2,cumsum)
NB=SM*50/(1:50)

We can actually plot it

plot(NB[,1],type="s",ylim=c(0,50))
abline(h=25,col="blue")
for(i in 2:n) lines(NB[,i],type="s",col="light blue")
lines(NB[,3],type="s",col="red")


But that's simply the prediction of the final score, at each step. That's not the computation of the probability to pass!

Let's try to see how we can do it… If after j questions, the student has 25 correct answers or more, i.e. S_{i,j}\geq 25, the probability should be 1, since he cannot fail anymore. Another simple case is the following: if after j questions, the number of points he can get, even with all correct answers until the end, is not sufficient, he will fail. That means that if S_{i,j}+(50-j)< 25, the probability should be 0. Otherwise, computing the probability of success is quite straightforward: it is the probability to obtain at least 25-S_{i,j} correct answers, out of the 50-j remaining questions, when the probability of success is estimated by S_{i,j}/j. We recognize the survival function of a binomial distribution. The code is then simply

PB=NB*NA   # NA matrix, with the same dimensions as NB
for(i in 1:50){
  for(j in 1:n){
    if(SM[i,j]>=25) PB[i,j]=1            # already passed
    if(SM[i,j]+(50-i)<25) PB[i,j]=0      # cannot reach 25 anymore
    if((SM[i,j]<25)&(SM[i,j]+(50-i)>=25))
      PB[i,j]=1-pbinom(24-SM[i,j],size=(50-i),prob=SM[i,j]/i)
  }}

So if we plot it, we get

plot(PB[,1],type="s",ylim=c(0,1))
abline(h=.5,col="blue")
for(i in 2:n) lines(PB[,i],type="s",col="light blue")
lines(PB[,3],type="s",col="red")

which is much more volatile than the previous curves we obtained! So yes, computing the "probability to win" is a complicated exercise! Don't blame those who try, and find it hard to do!

Of course, things are slightly different if my students don't flip a coin… this is what we obtain if half of the students are good (2/3 probability to get a question correct) and half are not (1/3 chance),
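
The only change needed in the simulation is the probability used to generate the answers; a possible sketch (with the 2/3 and 1/3 values mentioned above, the first half of the students being the good ones), after which the previous code can simply be re-run,

# half "good" students (2/3 chance per question), half "weak" ones (1/3 chance)
probs = rep(c(2/3,1/3),each=n/2)
M  = sapply(probs,function(p) sample(0:1,size=50,replace=TRUE,prob=c(1-p,p)))
SM = apply(M,2,cumsum)
NB = SM*50/(1:50)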

If we look at the probability to pass, we usually do not have to wait until the end (the 50 questions) to know who passed and who failed

PS: I guess a less volatile solution can be obtained with a Bayesian approach… if I find some spare time this week, I will try to code it…
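
A possible starting point for such a Bayesian version, just as a sketch: put a (somewhat arbitrary) Beta(2,2) prior on each student's probability of success, update it after each question, and compute the probability to pass from the posterior predictive distribution, by numerical integration,

# Beta(a,b) prior on the probability of success; after q questions and a score S,
# the posterior is Beta(a+S,b+q-S), and the probability to pass is the posterior
# predictive probability of getting at least 25-S of the 50-q remaining questions
a = b = 2
p_pass = function(S,q){
  if(S>=25) return(1)
  if(S+(50-q)<25) return(0)
  integrate(function(p) (1-pbinom(24-S,50-q,p))*dbeta(p,a+S,b+q-S),0,1)$value
}
PBb = matrix(NA,50,n)
for(i in 1:50) for(j in 1:n) PBb[i,j] = p_pass(SM[i,j],i)
plot(PBb[,1],type="s",ylim=c(0,1))
for(j in 2:n) lines(PBb[,j],type="s",col="light blue")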

Big Data and Artificial Intelligence

Next week, I will be in France for a few days. On Monday and Tuesday, I will be in Beaune, in Burgundy, at the first "Rencontres Mutualistes" (I will upload the slides of my talk soon). And on Wednesday, I will be in Paris, at ESCP Europe Business School. I will be giving a two-hour lecture on "Big Data and Artificial Intelligence", to use some buzzwords, as asked. More honestly, it will be on (new) data and (new) algorithms for predictive modeling. Slides are now online.

Mapping cities

a French version of this article is online at http://variances.eu/

Issue 53 of Insee Analyses Ile-de-France provides an analysis of “a social mosaic specific to Paris“, with the map in Figure 1.

Figure 1: INSEE, Insee Analyses 53, 2017

This map is a priori familiar to many people, in the sense that we quickly recognize the city represented, we know how to quickly find different elements, and we know how to read the information presented, almost instinctively. In urban history, the way we saw cities, and how we represented them on maps, has often been the basis of urban planning. Changing the representation has made it possible to change the structure of cities. We will take up here the two major historical turning points mentioned in Söderström (1996), based on two recent works: the representation of Rome at the beginning of the Renaissance, and the first ichnographic plans, described in Maier (2015), and the "social" or "health" maps of London of Victorian civil servants, described in Vaughan (2018). In particular, the latter are the ancestors of zoning maps, which are widely used in urban planning, but also correspond to the majority of maps produced by statisticians and economists (the INSEE map is an example). And some maps from the last century are in no way inferior to the maps produced today, in the era of big data.

Rome, Leon Battista Alberti and Leonardo Bufalini, and the immutable mobiles

Choay (1980) emphasizes the fundamental role in the history of urban planning of Alberti's De Re Aedificatoria (presented in manuscript form to Pope Nicholas V in 1452, but published only in 1485). Alberti's treatise is indeed the first text to consider construction (Alberti prefers the term "construction" – ædificatoria – to cover both architecture and urban planning) as an autonomous domain to which the rational method must be applied. The history of representation sees a turning point with the Renaissance, with figurative forms to represent urban space. We leave the medieval aesthetic with the rediscovery of perspective, which produces a rationalization of what can be seen, even if it often induces a partial vision of the object. In his treatise, Leon Battista Alberti proposes a scientific method governing the art of building the house, but also the entire city. But it is in Descriptio urbis Romae, probably written at the same time, that he deepened the idea of urban planning, taking the particular example of Rome.

In his book, Alberti does not propose any map of Rome, but a list of instructions to be followed to create one, with the coordinate tables of several important elements of the city, natural, but also artificial. The list includes the ramparts, the river (the Tiber), the city gates, and more than thirty public buildings, including the Capitol, which for Alberti is the reference point of the urban plan. He proposed to represent the city by using a disc divided into 48 portions, and by using the distance to the Capitol (in addition to a compass) to place any building. All calculations are detailed in Ludi Matematici Descriptio, using triangulation techniques. In 1450, Alberti invented the geometric plan, corresponding to what we would today call the plan of a city, even if the circular shape may surprise at first sight (see Figure 2), and does not correspond to the ichnographic plan that we all use today (obtained by horizontal and geometric projection onto a plane).

Figure 2: reconstruction of Alberti's map in Descriptio urbis Romae, by Luigi Vagnetti in Lo studio di Roma negli scritti albertiani (1974). Source: Maier (2015).

His plan corresponds to the emergence of a new, very geometric mode of representation. But it was not until Leonardo Bufalini's plan of 1531 that the first ichnographic plan arrived (it would be unfair to forget the plan of Imola drawn in 1503 by Leonardo da Vinci). While Alberti's plan indicated the coordinates of buildings, Bufalini decided to incorporate the ground plan of the buildings into his city plan.

Figure 3: Bufalini's map of Rome, 1551, British Library, London. Source: Maier (2015).

But if Alberti's plan has had such an impact, it is also because it came at the time when Pope Nicholas V launched a plan to rebuild Rome, covering an entire district, from Castel Sant'Angelo to the Vatican. This is probably the first urban planning project on this scale, proposing to use the urban form as an instrument of social engineering. Alberti's representation helped this project, with a scientific vision of the map, no longer depending on the artist's skills, nor needing to be inscribed in a narrative that would give it meaning. This urban map is self-sufficient, containing the terms of its own meaning. In Latour's (1989) terminology, these representations that can be detached from the place (or object) they represent, "while remaining immutable so that they can be moved in all directions without further distortion, loss or corruption", correspond to immutable mobiles. Alberti's map is one of the first examples of these immutable mobiles. It juxtaposes the natural and the human construction, the profane and the sacred, placing measurement and position as the only values.

These plans see the urban space as a whole, without a single point of view, unlike the more classical maps (for the time) of Jacopo Filippo Foresti, for example (see Figure 4). It is possible to place oneself at Foresti's viewpoint and see his map; Alberti's map exists only as an abstract object.

Figure 4: view of Rome by Jacopo Filippo Foresti, 1490. Source: Maier (2015).

If Leonardo Bufalini's map revolutionized urban mapping, and if the ichnographic plan is the dominant representation today, these maps long remained marginal, because they were reserved for military or administrative purposes. Foresti's kind of map has not completely disappeared: it can be found in tourist maps, for example, which are not very concerned about proportions, simply seeking to stage monuments or to indicate itineraries. We then contrast an often local, horizontal vision (on a human scale) with a vision sometimes called zenithal, which proposes to conceive objects in abstract terms. It is the latter that makes it possible to represent the city as a set of different neighbourhoods, with different levels of wealth for example, which, in Victorian times, resulted in the geometric plans of social statistics, making the city the subject of censuses, measurements and comparisons.

Also noteworthy is the 1748 map of Rome created by Giambattista Nolli. Previously, Leonardo Bufalini proposed to take the point of view of an eagle, flying over the city. Nolli established the now common practice of representing entire cities from above without a single focal point, each block being considered as if the cartographer were directly above it.

Figure 5: Giambattista Nolli's map of Rome, 1748. Source: Sylvain Mottet.

London, Thomas More and Charles Booth, and the zoning maps

At the end of the 19th century (from 1870 onwards) Germany saw the first "social maps", born in the context of an increasingly dense urban population, high social tensions and deteriorating health conditions. German planners proposed an innovative vision of the city as a living organism that needed to be made to function more efficiently. In 1876, Reinhard Baumeister in Stadterweiterungen in technischer, baupolizeilicher und wirtschaftlicher Beziehung and especially Josef Stübben in Der Städtebau, in 1890, proposed the first urban planning manuals. Thus, towards the end of the first chapter, Baumeister proposes to use an urban expansion plan, a master plan to organize the future urban space. For him, it was a question of ensuring the stability and proper functioning of a city designed as a living organism to deal with the problems it faces: overcrowding in certain districts, traffic and hygiene problems, social unrest, etc. To do this, he suggests specializing the city's sectors in functional and social terms – what we will later call a "zoning plan" (or Bauzonenplan) – and ensuring the sustainability of this specialization. However, he warns against an overly rigid and inflexible master plan: urban development cannot be planned with too much precision, and it is therefore counterproductive to want to freeze it in a totally predetermined framework. His plan aims to provide the general guidelines necessary for the cohesion of the urban organization. In particular, he notes that the more guidelines there are, the more they will have to be the subject of local plans with a limited time horizon.

While the zoning plan was not originally conceived as part of the master plan, it quickly became the key document, its clearest and most effective part. The objective was to understand, at a glance, the whole city as part of an administrative project. It is not only a question of having an overall vision of the city (which the ichnographic plan already allowed) but also of using colour codes that facilitate the total regulation of this city. In particular, this zoning plan made it possible to predict, several years or even decades in advance, what the morphological and functional characteristics of a given area would be. It also allowed investors to anticipate the future of an area and guarantee a certain return on their investments.

This vision proposed by Baumeister thus made it possible to see, for example, that the most bourgeois areas were often located in the west of cities. This position is simply because those areas were often healthier: the smoke and smog produced by cities are dispersed in the upper layers of the atmosphere, and when the wind comes from the west (which happens most often in most European cities) the smoke and smog are transported eastwards and towards the lower layers of the atmosphere. From this observation, it becomes natural to build factories in the east and houses in the west. Baumeister's work was not only theoretical: he worked on the development of the city of Frankfurt in 1891, then Berlin, Cologne, Essen, etc. In Frankfurt, he thus proposed the idea of concentric zones, which was later taken up by many economists. Figure 6 shows this form of city, in an article published in 1925 by Ernest Burgess (who would later become one of the founders of the Chicago school). At the beginning of the First World War, all German cities had a zoning plan. And in the following years, it was the United States that adopted the concept, with New York in 1916, and more than 500 cities by 1926. In that year, zoning was officially institutionalized, with the approval of the Supreme Court. In 1933, the Athens Charter recognized zoning as the main and central task of urban planning.

Figure 6: the concentric city, Burgess (1925). Source: Vaughan (2018)

But in parallel with these German developments, where civil servants imagined the instruments of contemporary urban planning, social planning in England took place in a context of strong social tensions. The impoverishment of a large part of the population, the many very precarious housing units, the disastrous sanitary conditions and the increase in crime in large cities made urban development management an extremely sensitive and political subject. It is not surprising to see the work of Patrick Geddes published in Edinburgh: a biologist by training (the city seen as a living organism) and an anarchist activist, he thought of images and cartography as central tools in the fight against poverty. He developed and advocated the use of statistics and mapping in land use planning and urban development, probably more than anyone else at that time. But history will remember Charles Booth's work in London from 1886 onwards.

Charles Booth, who began as a merchant and shipowner, devoted himself fully to the first social surveys at the end of the 19th century, based on a precise taxonomy of social categories. He was the first to produce social maps covering the entire urban space. His investigations focused first on the East End, London's most deprived neighbourhood, before spreading throughout the city over more than 17 years. His objective was to provide a scientific study of the living conditions of the London population, in order to go beyond the usual images of deprived neighbourhoods. As he said in 1902, his objective was to establish "the numerical relation which poverty, misery and depravity bear to regular earnings and comparative comfort, and to describe the general conditions under which each class lives".

Booth's approach was based on the creation of a statistical classification of social categories, ranging from A (the lower class) to H (the upper middle class). He therefore created, on the basis of the notes taken in the field by the inspectors, a taxonomy that distinguishes the different segments of the social spectrum. He estimated the number of "poor" (classes A-D) at 300,000 people in the East End and 1,300,000 for the city as a whole, almost a third of the total population at the time. The impact of these figures on the public was enormous, and was reinforced by the poverty maps that were included in the volumes of results dealing first with the East End and then, a few years later, with the entire city, as illustrated in Figure 7.

Figure 7: Charles Booth's Map Descriptive of London Poverty, 1898. Source: Vaughan (2018). See also https://booth.lse.ac.uk/map/

The map makes it possible to move from a social logic to a spatial logic: a particular class is translated into cartographic terms, becomes a building, a block of houses, a street, an entire urban area. The social map therefore made it possible to think of the city in terms of homogeneous spatial units. This reasoning is essential for urban planning: it could not have developed with a discourse complex enough to distinguish between the different inhabitants of a same building. This social vision of mapping, with its focus on slums and poor neighbourhoods, should be seen alongside a public health objective.

That said, thinking about urban development in terms of health interventions to heal society from its ills is not new. In 1516, Thomas More founded one of the main forms of urban planning theory, starting with a diagnosis of the disease and then proposing a definitive solution through a total restructuring of the urban form. During the 18th century, the translation of this principle consisted in isolating particular intervention areas (characterized by their insalubrity) and removing them, sweeping away the urban past. The solution adopted at the end of the 19th century was rather to work from what already existed, and to find the most effective solutions to manage the probable future changes in the urban context.

At the end of the 19th century, we also moved from “descriptive statistics” to “prescriptive statistics”, to use Ogien’s terms (2013). We no longer simply evaluate the number of smallpox patients, we begin to make the choice to vaccinate (or not) a specific population, and therefore to set up a mandatory preventive intervention (at the time the vaccine still killed about 1 person out of 300).

Adolphe Quetelet's homme moyen (average man) launched moral statistics, where the average becomes the norm. Diseases also began to be linked to population density, poor ventilation and humidity. "Dirty, unhealthy, infectious, corrupt or simply stinking are the categories that make it possible to think what we now call pollution", in the words of Fureix and Jarrige (2015). We then move from the social map to the "moral map", a city thought up by hygienists. Moral geography, which until then had been the subject of partial and unsystematized observations, finds in the map a (graphical) space that synthesizes and organizes it. The social map gave the globalizing vision necessary for the existence of urban planning, and for the precise location of the sites necessary for the targeted and rational functioning of its therapeutic action. One has in mind Dr. John Snow's 1854 map of the cholera epidemic, presented (and updated) in Figure 8. At the time, the dominant theory was the theory of miasmas, claiming that diseases such as plague or cholera spread in the form of bad air. In 1854, with the help of the Reverend Henry Whitehead, and by interviewing local residents, he established the geographical distribution of cases, and identified the source of the epidemic: a public water pump on Broad Street. While microbial research had not scientifically established the danger of the water pump, the cartographic study of the spread of the epidemic was sufficient to convince the authorities to close it.

Figure 8: John Snow, On the Mode of Communication of Cholera, 1855. Source: https://tabsoft.co/2y82nbf

However, as Vaughan (2018) points out, similar works can be found throughout England at the same time, such as Edwin Chadwick's Sanitary Map of the Town of Leeds, shown in Figure 9. On this map, Chadwick identifies two groups of dwellings: working-class houses and shops, and workhouses and artisans' houses. The coloured dots, indicating contagious diseases, only seem to proliferate in poor neighbourhoods. In particular, the map showed that the patients did not live in contiguous areas, but were scattered across the map, while remaining concentrated in poor neighbourhoods.

Figure 9: Edwin Chadwick, Sanitary Map of the Town of Leeds, 1842. Source: Vaughan (2018) and https://bit.ly/2zL3pM8

The maps had considerable public health impacts, and the zoning, formalized by Charles Booth, was the basis for spatial statistics, as it developed throughout the 20th century.

If the cartography of the city is now complex and rich, it should be noted that economists took a long time to move beyond the "linear city" model, introduced in Hotelling (1929), which has been refined over time, as shown in Figure 10, pitting the residential part (RD – residential district) against the business centre (BD – business district). But that's another story…

Figure 10: the different forms of the linear city. Source: Fujita & Thisse (1997).

References:

Booth, Charles (1902). Life and Labour in London. 17 volumes.

Burgess, Ernest (1925). The Growth of the City: An Introduction to a Research Project.

Choay, Françoise (1980). La règle et le modèle, Paris, Seuil.

Fujita, Masahisa and Thisse, Jacques-François (1997). Économie géographique: problèmes anciens et nouvelles perspectives. Annales d'Économie et de Statistique, 45, 37-87.

Fureix, Emmanuel and Jarrige, François (2015). La modernité désenchantée : relire l'histoire du XIXe siècle français, Paris, La Découverte.

Hotelling, Harold (1929). Stability in Competition. The Economic Journal, 39, 41-57.

Latour, Bruno (1989). La science en action. Paris, La Découverte.

Maier, Jessica (2015). Rome, measured and imagined. The University of Chicago Press.

Ogien, Albert (2013). Désacraliser le chiffre dans l'évaluation du secteur public, Versailles, Éditions Quæ.

Söderström, Ola (1996). Paper cities: visual thinking in urban planning. Ecumene, 3, 249-281.

Vaughan, Laura (2018) Mapping Society: The Spatial Dimensions of Social Cartography. UCL Press.

Explanatory variable in an interval

Following a question asked this morning in class, here is a quick post to explain how to extract the lower and upper bounds when we observe intervals, in R. Let us start by generating some data,

n=200
set.seed(123)
X=rnorm(n)
Y=2+X+rnorm(n,sd = .3)

Suppose now that we no longer observe the true variable x but only a class (we create eight classes, each containing one eighth of the observations)

Q=quantile(x = X,(0:8)/8)
Q[1]=Q[1]-.00001
Xcut=cut(X,breaks = Q)
B=data.frame(Y=Y,X=Xcut)

For instance, for the first value, we have

as.character(Xcut[1])
[1] "(-0.626,-0.348]"

To extract information about those bounds, we can use the following small function, which returns the lower bound, the upper bound, and the midpoint of the interval

extraire = function(x){
  ax=as.character(x)
  lower1 = as.numeric( sub("\\((.+),.*", "\\1", ax) )
  lower2 = as.numeric( sub("\\[(.+),.*", "\\1", ax) )
  upper1 = as.numeric( sub("[^,]*,([^]]*)\\]", "\\1", ax) )
  upper2 = as.numeric( sub("[^,]*,([^]]*)\\)", "\\1", ax) )
  lower = c(lower1,lower2)
  lower=lower[!is.na(lower)]
  upper = c(upper1,upper2)
  upper=upper[!is.na(upper)]
  mid   = (lower+upper)/2
  return(c(lower=lower,mid=mid,upper=upper))
}

We can check it on our first observation

extraire(Xcut[1])
 lower    mid  upper 
-0.626 -0.487 -0.348

Just to see, we can create three additional variables in our dataset (with those three pieces of information)

B2=Vectorize(function(i) extraire(Xcut[i]))(1:n)
B$lower=B2[1,]
B$mid  =B2[2,]
B$upper=B2[3,]

and we can compare four regressions: (i) regressing on our 8 classes, i.e. our 8 factor levels, (ii) regressing on the lower bound of the interval, (iii) on the midpoint of the interval, and (iv) on the upper bound

regF=lm(Y~X,data=B)
regL=lm(Y~lower,data=B)
regM=lm(Y~mid,data=B)
regU=lm(Y~upper,data=B)

We can compare the predictions of our four models

plot(B$Y,predict(regF),ylim=c(0,4))
points(B$Y,predict(regM),col="red")
points(B$Y,predict(regU),col="blue")
points(B$Y,predict(regL),col="purple")
abline(a=0,b=1,lty=2)

To go further, we can also compare the AIC of our models,

AIC(regF)
[1] 204.5653
AIC(regM)
[1] 201.1201
AIC(regL)
[1] 266.5246
AIC(regU)
[1] 255.0687

While using the lower or the upper bound is not conclusive here, note that using the midpoint of the interval gives slightly better results than using the 8 factors.