After the Bernoulli, binomial and multinomial distributions, we move on to logistic regression. The slides are online (slides 2) and so is the video (slides 2)
Slides 1 – the binomial distribution
Alright, classes resume after a little more than two weeks of interruption… But I am easing back in, returning to some reminders I gave in the first lecture, which we will actually need very soon! We go back over the Bernoulli, binomial and multinomial distributions, and we will talk about inference and tests. The slides are online (slides 1) and so is the video (slides 1)
There is a typo on slides 7 and 8, but I did not interrupt the video and tried to carry on as if nothing had happened. For the proportion \widehat{p}, we only have the asymptotic Gaussian distribution (or, in finite samples, a transformation of the binomial distribution, but no Student distribution; sorry for the copy-and-paste from old slides that dealt with estimating a mean \overline{x} for observations assumed to be Gaussian).
I try to limit the number of errors, but these distance-learning classes take a lot of my time, and I do not have the heart to redo the videos for now. My apologies.
Classes resume (upcoming)
On Monday, the winter-session classes resume… remotely. I will put online a series of short videos to finish the course, on GLMs. I have posted a first video (slides 0) announcing the outline. I made the slides quickly (a change from the lectures I used to give on the board) and I record in a single take, without editing, so as to get the course online quickly. The pdf of the slides is also online (slides 0). I will try to upload the videos as I go along. I am not particularly proud of the result, but if I am going to spend time clowning around in front of the camera, it might as well benefit as many people as possible.
Oh, and there are probably typos, or even errors, in the slides… comments are open if you have any suggestions!
On the quality of a classifier
Let us take advantage of the quarantine to put online a post on the ROC curve, the receiver operating characteristic. Consider a small dataset with n=10 observations, two continuous variables, x_1 and x_2, and the binary variable of interest y\in\{0,1\}. We can plot the points in the (x_1,x_2) plane, using a different colour for y\in\{0,1\}.
x1 = c(.4,.55,.65,.9,.1,.35,.5,.15,.2,.85)
x2 = c(.85,.95,.8,.87,.5,.55,.5,.2,.1,.3)
y = c(1,1,1,1,1,0,0,1,0,0)
df = data.frame(x1=x1,x2=x2,y=as.factor(y))
plot(x1,x2,col=c("red","blue")[1+y],pch=19,cex=1.5)
We can then fit a logistic regression, so that \mathbb{P}(Y=1|x_1,x_2)=\frac{e^{\beta_0+\beta_1x_1+\beta_2x_2}}{1+e^{\beta_0+\beta_1x_1+\beta_2x_2}}. We can visualize the set of points (x_1,x_2) for which \mathbb{P}(Y=0|x_1,x_2)=\mathbb{P}(Y=1|x_1,x_2) (a point there is as likely to be red as blue), i.e. \beta_0+\beta_1x_1+\beta_2x_2=0, which corresponds to a straight line,
reg = glm(y~x1+x2,data=df,family=binomial(link = "logit"))
b = coefficients(reg)
abline(a=-b[1]/b[3],b=-b[2]/b[3])
We can then plot y_i against the score, i.e. the estimate of \mathbb{P}(Y=1|x_{1,i},x_{2,i}),
Y = df$y
S = predict(reg,type="response")
plot(S,y,xlab="probabilité prédite",ylab="y")
We then choose a threshold (for example 50\%): if the probability that Y takes the value 1 exceeds the threshold, we predict 1 (and otherwise 0). On the figure above, we then have four kinds of points: those to the left of the threshold (predicted 0), which are correctly classified if they are at the bottom, and misclassified if they are at the top; those to the right of the threshold (predicted 1), which are correctly classified if they are at the top, and misclassified if they are at the bottom [in the code below, the symbol > denotes the “greater than” operator, which unfortunately does not render well in this editor]
seuil = .5
Yhat = (S>seuil)*1
plot(S,y,xlab="probabilité prédite",ylab="y",pch=19,
     col=c("red","blue")[1+(y==Yhat)])
abline(v=seuil,lty=2)
The colours reflect correct or incorrect classification: the red points correspond to classification errors. We can recover all of this in the contingency table below, which is the standard table of a hypothesis test
table(Yhat,Y)
    Y
Yhat 0 1
   0 3 1
   1 1 5
What will interest us here are two particular quantities: the false positive rate and the true positive rate,
FP=sum((Yhat==1)*(Y==0))/sum(Y==0)
TP=sum((Yhat==1)*(Y==1))/sum(Y==1)
We obtained this table for a given threshold (here 50\%), but we can look at what happens when the threshold changes, as in the animation below, where we plot, on the right, the true positive rate (on the y axis) as a function of the false positive rate (on the x axis).
The set of these points gives the ROC curve.
roc.curve=function(s,print=FALSE){
  Ps=(S>s)*1
  FP=sum((Ps==1)*(Y==0))/sum(Y==0)
  TP=sum((Ps==1)*(Y==1))/sum(Y==1)
  if(print==TRUE){
    print(table(Observed=Y,Predicted=Ps))
  }
  vect=c(FP,TP)
  names(vect)=c("FPR","TPR")
  return(vect)}
u = seq(0,1,length=251)
V = Vectorize(roc.curve)(u)
plot(t(V),type="s",xlab="Faux Positifs",ylab="Vrais Positifs")
segments(0,0,1,1,col="light blue")
We can check that the point we obtained with a 50\% threshold is indeed on the curve
table(Yhat,Y)
    Y
Yhat 0 1
   0 3 1
   1 1 5
(FP = sum((Yhat)*(Y==0))/sum(Y==0))
[1] 0.25
(TP = sum((Yhat==1)*(Y==1))/sum(Y==1))
[1] 0.8333333
abline(v=FP,lty=2,col="blue")
abline(h=TP,lty=2,col="blue")
points(FP,TP,pch=19,cex=1.5)
Of course, there are (many) R packages that produce this curve,
library(ROCR)
pred = prediction(S,Y)
plot(performance(pred,"tpr","fpr"))
An interesting quantity is the area under the curve (or AUC), which we can compute here by hand (the ROC curve is a simple step function)
p1 = roc.curve(1/3)
p2 = roc.curve(.7)
p2[1]*p2[2]+(p1[1]-p2[1])*p1[2]+(1-p1[1])
[1] 0.875
but which we can also obtain automatically
auc.perf = performance(pred, measure = "auc")
auc.perf@y.values[[1]]
[1] 0.875
Now, let us try another classifier: still a logistic regression, but on a factor obtained by splitting the second variable in two, \boldsymbol{1}_{[s,\infty)}(x_2)
reg = glm(y~I(x2>.525),data=df,family=binomial(link = "logit"))
abline(h=.525)
The horizontal line is no longer the line along which a point is as likely to be red as blue; it is the line that splits the variable x_2. Here, we predict only two values: a 40\% chance of being blue below the line, and an 80\% chance of being blue above it. If we plot the observations y_i against the predicted probabilities, we obtain
Y = df$y
S = predict(reg,type="response")
plot(S,y,xlab="probabilité prédite",ylab="y",xlim=0:1)
With a 50\% threshold, we obtain the following contingency table (with 3 errors, against 2 previously)
seuil = .5
Yhat = (S>seuil)*1
table(Yhat,Y)
    Y
Yhat 0 1
   0 3 2
   1 1 4
If we plot the ROC curve, we obtain
roc.curve=function(s,print=FALSE){
  Ps=(S>s)*1
  FP=sum((Ps==1)*(Y==0))/sum(Y==0)
  TP=sum((Ps==1)*(Y==1))/sum(Y==1)
  if(print==TRUE){
    print(table(Observed=Y,Predicted=Ps))
  }
  vect=c(FP,TP)
  names(vect)=c("FPR","TPR")
  return(vect)}
u = seq(0,1,length=251)
V = Vectorize(roc.curve)(u)
plot(t(V),type="l",xlab="Faux Positifs",ylab="Vrais Positifs")
segments(0,0,1,1,col="light blue")
This time, the curve is no longer piecewise constant, but piecewise linear, and continuous… The interpretation is a bit more subtle: this time, we have two regions of the space, and within each region we do not really know how to discriminate (the predicted probability is flat there, unlike with the previous regression). In other words, within such a region we have a constant probability, for example 40\% (at the bottom): when we have to make a prediction for an individual in that region, we assign the values \{0,1\} with respective probabilities \{60\%,40\%\}. When the probability is constant, we speak of a random classifier… The blue diagonal on the figure above is precisely a random classifier… It is what we would get by predicting at random
pred = prediction(S,Y)
plot(performance(pred,"tpr","fpr"))
The point shown is obtained with a 50\% threshold (or, in fact, any value between 40\% and 80\%). We can again compute the area under the curve, this time using trapezoids (or triangles)
p1 = roc.curve(.5)
p1[1]*p1[2]/2+(1-p1[1])*p1[2]+(1-p1[1])*(1-p1[2])/2
[1] 0.7083333
auc.perf = performance(pred, measure = "auc")
auc.perf@y.values[[1]]
[1] 0.7083333
Probabilistic Foundations of Econometrics, part 3
This post is the third one of our series on the history and foundations of econometric and machine learning models. Part 2 is online here.
Exponential family and linear models
The Gaussian linear model is a special case of a large family of linear models, obtained when the conditional distribution of Y (given the covariates) belongs to the exponential family f(y_i|\theta_i,\phi)=\exp\left(\frac{y_i\theta_i-b(\theta_i)}{a(\phi)}+c(y_i,\phi)\right) with \theta_i=\psi(\mathbf{x}_i^T \beta). Functions a, b and c are specified according to the type of exponential law (studied extensively in statistics since Darmois (1935), as Brown (1986) reminds us), and \psi is a one-to-one mapping that the user must specify. The log-likelihood then has a simple expression \log\mathcal{L}(\mathbf{\theta},\phi|\mathbf{y}) =\frac{\sum_{i=1}^ny_i\theta_i-\sum_{i=1}^nb(\theta_i)}{a(\phi)}+\sum_{i=1}^n c(y_i,\phi) and the first-order condition is then written \frac{\partial \log \mathcal{L}(\mathbf{\theta},\phi|\mathbf{y})}{\partial \mathbf{\beta}} = \mathbf{X}^T\mathbf{W}^{-1}[\mathbf{y}-\widehat{\mathbf{y}}]=\mathbf{0} using Müller’s (2011) notations, where \mathbf{W} is a weight matrix (which depends on \beta). Given the link between \theta and the expectation of Y, instead of specifying the function \psi(\cdot), we will tend to specify the link function g(\cdot) defined by \widehat{y}=m(\mathbf{x})=\mathbb{E}[Y|\mathbf{X}=\mathbf{x}]=g^{-1} (\mathbf{x}^T \beta). For the Gaussian linear regression we consider an identity link, while for the Poisson regression the natural link (called canonical) is the logarithmic link. Here, as \mathbf{W} depends on \beta (with \mathbf{W}=\text{diag}(\nabla g(\widehat{\mathbf{y}})\text{Var}[\mathbf{y}])), there is generally no explicit formula for the maximum likelihood estimator. But an iterative algorithm makes it possible to obtain a numerical approximation. By setting \mathbf{z}=g(\widehat{\mathbf{y}})+(\mathbf{y}-\widehat{\mathbf{y}})\cdot\nabla g(\widehat{\mathbf{y}}), corresponding to the error term of a first-order Taylor expansion of g, we obtain an algorithm of the form \widehat{\beta}_{k+1}=[\mathbf{X}^T \mathbf{W}_k^{-1} \mathbf{X}]^{-1} \mathbf{X}^T \mathbf{W}_k^{-1} \mathbf{z}_k. By iterating, we define \widehat{\beta}=\widehat{\beta}_{\infty}, and we can show that – with some additional technical assumptions (detailed in Müller (2011)) – this estimator is asymptotically Gaussian, with \sqrt{n}(\widehat{\beta} -\beta)\overset{\mathcal{L}}{\rightarrow} \mathcal{N}(\mathbf{0},I(\beta)^{-1}) where numerically I(\beta)=\varphi\cdot[\mathbf{X}^T \mathbf{W}_\infty^{-1} \mathbf{X}].
From a numerical point of view, the computer will solve the first-order condition, and actually, the law of Y does not really intervene. For example, one can estimate a “Poisson regression” even when observations are not integers (but they need to be positive). In other words, the law of Y is only an interpretation here, and the algorithm could be introduced in a different way (as we will see later on), without necessarily having an underlying probabilistic model.
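To make the iterative scheme above concrete, here is a minimal sketch of that Fisher-scoring / iteratively reweighted least-squares algorithm, for a Poisson regression with log link (this is illustrative code written for this post, on made-up simulated data, not code from the original article).

# minimal IRLS sketch for a Poisson regression with log link (illustration only)
set.seed(1)
n = 100
X = cbind(1, runif(n))                  # design matrix with an intercept
y = rpois(n, exp(1 + 2*X[,2]))          # simulated counts
beta = rep(0, ncol(X))                  # starting value
for(k in 1:25){
  eta  = X %*% beta                     # linear predictor
  mu   = as.vector(exp(eta))            # fitted values g^{-1}(eta)
  Winv = diag(mu)                       # working weights: for the log link, g'(mu)^2 Var[Y] = 1/mu
  z    = eta + (y - mu)/mu              # working response z = g(mu) + (y - mu) g'(mu)
  beta = solve(t(X) %*% Winv %*% X, t(X) %*% Winv %*% z)
}
cbind(irls = as.vector(beta), glm = coef(glm(y ~ X[,2], family = poisson(link = "log"))))

The last line compares the fixed point of the iteration with the glm() output; the two should coincide up to numerical precision.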
Logistic Regression
Logistic regression is the generalized linear model obtained with a Bernoulli distribution and a link function which is the quantile function of the logistic distribution (the canonical link in the sense of the exponential family). Taking into account the form of the Bernoulli distribution, econometrics proposes a model for y_i\in\{0,1\}, in which the logarithm of the odds follows a linear model: \log\left(\frac{\mathbb{P}[Y=1\vert \mathbf{X}=\mathbf{x}]}{\mathbb{P}[Y\neq 1\vert \mathbf{X}=\mathbf{x}]}\right)=\beta_0+\mathbf{x}^T\beta or \mathbb{E}[Y|\mathbf{X}=\mathbf{x}]=\mathbb{P}[Y=1|\mathbf{X}=\mathbf{x}]=\frac{e^{\beta_0+\mathbf{x}^T\beta}}{1+ e^{\beta_0+\mathbf{x}^T\beta}}=H(\beta_0+\mathbf{x}^T\beta) where H(\cdot)=\exp(\cdot)/(1+\exp(\cdot)) is the cumulative distribution function of the logistic distribution. The estimation of (\beta_0,\beta) is performed by maximizing the likelihood: \mathcal{L}=\prod_{i=1}^n \left(\frac{e^{\mathbf{x}_i^T\mathbf{\beta}}}{1+e^{\mathbf{x}_i^T\mathbf{\beta}}}\right)^{y_i}\left(\frac{1}{1+e^{\mathbf{x}_i^T\mathbf{\beta}}}\right)^{1-y_i} It is said to be a linear model because the iso-probability curves are the parallel hyperplanes \beta_0+\mathbf{x}^T\beta=c. Rather than this model, popularized by Berkson (1944), some will prefer the probit model (see Berkson, 1951), introduced by Bliss (1934). In this model: \mathbb{E}[Y|\mathbf{X}=\mathbf{x}]=\mathbb{P}[Y=1|\mathbf{X}=\mathbf{x}]=\Phi (\beta_0+\mathbf{x}^T\beta)
where \Phi denotes the cumulative distribution function of the standard normal distribution. This model has the advantage of having a direct link with the Gaussian linear model, since y_i=\mathbf{1}(y_i^\star>0) with y_i^\star=\beta_0+\mathbf{x}^T \beta+\varepsilon_i where the residuals are Gaussian, \mathcal{N}(0,\sigma^2). An alternative is to have centered residuals of unit variance, and to consider a latent model of the form y_i=\mathbf{1}(y_i^\star>\xi) (where \xi will be fixed). As we can see, these techniques are fundamentally linked to an underlying stochastic model. In the body of the article, we present several alternative techniques – from the learning literature – for this classification problem (with two classes, here 0 and 1).
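As a quick illustration of the link between the logit and probit specifications, here is a small sketch on simulated data (written for this post; the names and numbers are arbitrary): the two sets of fitted probabilities are almost identical, and the coefficients differ by a roughly constant scale factor.

# logit vs probit fits on simulated data (illustration only)
set.seed(123)
n = 200
x = rnorm(n)
ystar = 0.5 + 1.2*x + rlogis(n)        # latent variable with logistic noise
y = as.numeric(ystar > 0)              # observed binary outcome
reg_logit  = glm(y ~ x, family = binomial(link = "logit"))
reg_probit = glm(y ~ x, family = binomial(link = "probit"))
plot(predict(reg_logit, type = "response"),
     predict(reg_probit, type = "response"),
     xlab = "logit fit", ylab = "probit fit")
abline(0, 1, lty = 2)
coef(reg_logit) / coef(reg_probit)     # ratio of coefficients, roughly constant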
Regression in high dimension
As we mentioned earlier, the first-order condition \mathbf{X}^T (\mathbf{X}\widehat{\beta}-\mathbf{y})=\mathbf{0} is solved numerically by performing a QR decomposition, at a cost of O(np^2) operations (where p is the rank of \mathbf{X}^T \mathbf{X}). Numerically, this calculation can be long (either because p is large or because n is large), and a simpler strategy may be to sub-sample. Let n_s\ll n, and consider a sub-sample of size n_s from \{1,\cdots,n\}. Then \widehat{\beta}_s=(\mathbf{X}_s^T \mathbf{X}_s )^{-1} \mathbf{X}_s^T\mathbf{y}_s is a good approximation of \beta, as shown by Dhillon et al. (2014). However, this algorithm is dangerous if some points have a high leverage (i.e. L_i=\mathbf{x}_i(\mathbf{X}^T\mathbf{X})^{-1}\mathbf{x}_i^T). Tropp (2011) proposes to transform the data (in a linear way), but a more popular approach is to do non-uniform sub-sampling, with a probability related to the influence of observations (defined by I_i=\widehat{\varepsilon}_iL_i/(1-L_i)^2, which unfortunately can only be computed once the model has been estimated).
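Here is a small sketch of those two sub-sampling strategies (simulated data, illustrative names, and without the reweighting refinements used in practice): a uniform sub-sample estimator \widehat{\beta}_s, and a non-uniform one where the sampling probabilities are proportional to the leverages L_i.

# uniform vs leverage-based sub-sampling for least squares (illustration only)
set.seed(42)
n = 10000; p = 5
X = matrix(rnorm(n*p), n, p)
y = X %*% (1:p) + rnorm(n)
ns = 500                                          # sub-sample size, ns << n
id_u   = sample(1:n, ns)                          # uniform sub-sample
b_unif = lm.fit(X[id_u, ], y[id_u])$coefficients
L      = rowSums((X %*% solve(crossprod(X))) * X) # leverages L_i = x_i (X'X)^{-1} x_i'
id_l   = sample(1:n, ns, prob = L)                # non-uniform sub-sample
b_lev  = lm.fit(X[id_l, ], y[id_l])$coefficients
cbind(full = lm.fit(X, y)$coefficients, uniform = b_unif, leverage = b_lev)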
In general, we talk about massive data when the data table, of size n\times p, does not fit in the RAM of the computer. This situation is often encountered in statistical learning nowadays, very often with p\ll n. This is why, in practice, many libraries of machine-learning algorithms use iterative methods to solve the first-order condition. When the parametric model to be calibrated is indeed convex and semi-differentiable, it is possible to use, for example, the stochastic gradient descent method, as suggested by Bottou (2010). The latter avoids computing, at each iteration, the gradient over every observation of the training set. Rather than making an average descent at each iteration, we start by drawing (without replacement) an observation \mathbf{x}_i among the n available. The model parameters are then corrected so that the prediction made from \mathbf{x}_i is as close as possible to the true value y_i. The method is then repeated until all the data have been reviewed. In this algorithm there are therefore as many iterations as there are observations. Unlike the gradient descent algorithm (or Newton’s method), at each iteration only one gradient vector is calculated (and no longer n). However, it is sometimes necessary to run this algorithm several times to improve the convergence of the model parameters. If the objective is, for example, to minimize a loss function \ell between the estimator m_\beta (\mathbf{x}) and y (like the quadratic loss function, as in the Gaussian linear regression), the algorithm can be summarized as follows:
- Step 0: Shuffle the data
- Iteration step: For t=1,\cdots, n, draw i\in\{1,\cdots,n\} without replacement, and set \beta^{t+1} = \beta^{t} - \gamma_t\frac{ \partial{\ell(y_i,m_{\beta^t}(X_i)) } }{ \partial{ \beta}}
This algorithm can be repeated several times as a whole (several passes over the data), depending on the user’s needs. The advantage of this method is that at each iteration it is not necessary to calculate the gradient over all observations (no more sum over the whole sample). It is therefore suitable for large databases. This algorithm relies on convergence in probability towards a neighborhood of the optimum (and not the optimum itself).
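Here is a minimal sketch of this stochastic gradient descent, for the quadratic loss and a linear model m_\beta(\mathbf{x})=\mathbf{x}^T\beta, on simulated data (code written for this post; the constant learning rate \gamma and the number of passes are arbitrary choices).

# stochastic gradient descent for the quadratic loss (illustration only)
set.seed(7)
n = 1000
X = cbind(1, rnorm(n))                   # design matrix with an intercept
y = X %*% c(2, -1) + rnorm(n)            # simulated response
beta  = c(0, 0)                          # starting value
gamma = 0.01                             # learning rate gamma_t, kept constant here
for(pass in 1:20){                       # several passes over the shuffled data
  for(i in sample(1:n)){                 # draw observations without replacement
    grad = -2 * X[i, ] * as.numeric(y[i] - sum(X[i, ] * beta))  # gradient of (y_i - x_i'beta)^2
    beta = beta - gamma * grad
  }
}
rbind(sgd = beta, ols = lm.fit(X, y)$coefficients)

The last line compares the SGD iterate with the ordinary least-squares solution; the two should be close, not identical.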
(references will be given in the very last post of that series) To be continued…
Exotic link functions for GLMs
In my previous post on GLMs, I discussed power link functions. But there are many more link functions that can be used:
- The square root link (for the Poisson model)
Consider some random variable Y with mean \mu and variance \sigma^2. Using a Taylor expansion, g(Y)\sim g(\mu)+(Y-\mu)g'(\mu)+\frac{1}{2}(Y-\mu)^2g''(\mu), we can write \mathbb{E}[g(Y)]\sim g(\mu)+\frac{\sigma^2}{2}g''(\mu) and \text{Var}[g(Y)]\sim [g'(\mu)]^2\sigma^2
Assume that Y\sim\mathcal{P}(\lambda), and consider a square root transformation, g(y)=\sqrt{y}. Then the second equality becomes \text{Var}[\sqrt{Y}]\sim \left[\frac{1}{2\sqrt{\mathbb{E}[Y]}}\right]^2\text{Var}[Y]=\frac{1}{4}
So, somehow, with a square-root transformation, we have variance stability, which might be interpreted as some homoscedasticity.
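A quick simulation (added here for illustration) confirms this variance-stabilising property: whatever the value of \lambda (as long as it is not too small), the empirical variance of \sqrt{Y} stays close to 1/4.

# empirical variance of sqrt(Y) for Y ~ Poisson(lambda), for several lambdas
set.seed(1)
sapply(c(2, 5, 10, 50, 100), function(l) var(sqrt(rpois(1e5, l))))
# all values should be close to 0.25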
- The complementary log-log function for the Bernoulli model
Assume that the true variable of interest is a Poisson one, N|\mathbf{X}=\mathbf{x}\sim\mathcal{P}(\lambda_{\mathbf{x}}) where \lambda_{\mathbf{x}}=\exp[\mathbf{x}^T\mathbf{\beta}]. Thus, \mathbb{P}[N=0|\mathbf{X}=\mathbf{x}]=\exp[-\lambda_{\mathbf{x}}]=\exp[-(\exp[\mathbf{x}^T\mathbf{\beta}])] while \mathbb{P}[N>0|\mathbf{X}=\mathbf{x}]=1-\exp[-(\exp[\mathbf{x}^T\mathbf{\beta}])]=H(\mathbf{x}^T\mathbf{\beta}) where H(s)=1-\exp[-\exp(s)]. Let Y=\mathbf{1}(N>0). The previous model seems like a Bernoulli regression with H as link function, \mathbb{P}[Y=1|\mathbf{X}=\mathbf{x}]=H(\mathbf{x}^T\mathbf{\beta})
So, assume now that instead of observing N we observe Y=\boldsymbol{1}(N>0). In that case, running a Bernoulli regression with a complementary log-log link function would be the same (?) as first running a Poisson regression on the original data, and then using it on our binary variable, zero vs. non-zero. Let us generate some data, and see what’s going on. Let us compare e^{-\lambda_{\mathbf{x}}} and the probability p_{\mathbf{x}} of observing a zero, obtained here from a standard Bernoulli (probit) regression on the indicator \{Y=0\}
n=563
set.seed(1)
base=data.frame(X1=rnorm(n),X2=rnorm(n))
lambda=base$X1+base$X2
base$Y=rpois(n,exp(lambda))
regPois = glm(Y~.,data=base,family=poisson(link="log"))
lambda = predict(regPois,type="response")
regBinom = glm((Y==0)~.,data=base,family=binomial(link="probit"))
prob = predict(regBinom, type="response")
plot(prob,exp(-lambda),xlim=0:1,ylim=0:1)
abline(a=0,b=1,lty=2,col="red")
What if p_{\mathbf{x}} was obtained from a Bernoulli regression, with a cloglog link function ?
regBinom = glm((Y>0)~.,data=base,family=binomial(link="cloglog"))
prob = predict(regBinom, type="response")
plot(prob,1-exp(-lambda),xlim=0:1,ylim=0:1)
abline(a=0,b=1,lty=2,col="red")
It looks like the fit is very good here ! Now, what if we have real data, like the dataset from A Theory of Extramarital Affairs, by Ray Fair, published in 1978 in the Journal of Political Economy (with 563 observations, and nine variables)
base = read.table("http://freakonometrics.free.fr/baseaffairs.txt",header=TRUE)
str(base)
x=base$SEX
base$SEX="M"
base$SEX[x=="0"]="F"
x=base$CHILDREN
base$CHILDREN="YES"
base$CHILDREN[x==0]="NO"
regPois = glm(Y~.,data=base,family=poisson(link="log"))
lambda = predict(regPois,type="response")
regBinom = glm((Y==0)~.,data=base,family=binomial(link="probit"))
prob = predict(regBinom, type="response")
plot(prob,exp(-lambda),xlim=0:1,ylim=0:1)
abline(a=0,b=1,lty=2,col="red")
In that case the two models are very different. And actually, the same is true with the second comparison, based on the cloglog link
regBinom = glm((Y>0)~.,data=base,family=binomial(link="cloglog"))
prob = predict(regBinom, type="response")
plot(prob,1-exp(-lambda),xlim=0:1,ylim=0:1)
abline(a=0,b=1,lty=2,col="red")
How can we interpret that ? Could it be because the Poisson model is not good ? Actually, if we run a zero-inflated model here,
library(pscl)
regZIP = zeroinfl(Y ~ . | ., data = base)
summary(regZIP)

Count model coefficients (poisson with log link):
             Estimate Std. Error z value Pr(>|z|)
(Intercept) -0.002274   0.048413  -0.047    0.963
X1           1.019814   0.026186  38.945   <2e-16 ***
X2           1.004814   0.024172  41.570   <2e-16 ***

Zero-inflation model coefficients (binomial with logit link):
            Estimate Std. Error z value Pr(>|z|)
(Intercept) -4.90190    2.07846  -2.358   0.0184 *
X1          -2.00227    0.86897  -2.304   0.0212 *
X2          -0.01545    0.96121  -0.016   0.9872
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Hence, we reject here the Poisson distribution assumption, because of the inflation of zeros… It looks like the cloglog link can be used to check if the Poisson distribution is a good model, or not…
GLMs: link vs. distribution
Usually, when I give a course on GLMs, I try to insist on the fact that the link function is probably more important than the distribution. In order to illustrate, consider the following dataset, with 5 observations
x = c(1,2,3,4,5)
y = c(1,2,4,2,6)
base = data.frame(x,y)
Then consider several models, with various distributions, and either an identity link (in which case \mathbb{E}[Y|\mathbf{X}=\mathbf{x}]=\mathbf{x}^T\mathbf{\beta}) or a log link function (so that \mathbb{E}[Y|\mathbf{X}=\mathbf{x}]=e^{\mathbf{x}^T\mathbf{\beta}})
regNId = glm(y~x,family=gaussian(link="identity"),data=base)
regNlog = glm(y~x,family=gaussian(link="log"),data=base)
regPId = glm(y~x,family=poisson(link="identity"),data=base)
regPlog = glm(y~x,family=poisson(link="log"),data=base)
regGId = glm(y~x,family=Gamma(link="identity"),data=base)
regGlog = glm(y~x,family=Gamma(link="log"),data=base)
regIGId = glm(y~x,family=inverse.gaussian(link="identity"),data=base)
regIGlog = glm(y~x,family=inverse.gaussian(link="log"),data=base)
One can also consider some Tweedie distribution, to be even more general
library(statmod)
regTwId = glm(y~x,family=tweedie(var.power=1.5,link.power=1),data=base)
regTwlog = glm(y~x,family=tweedie(var.power=1.5,link.power=0),data=base)
Consider the prediction obtained in the first case, with the linear link function
library(RColorBrewer)
darkcols = brewer.pal(8, "Dark2")
plot(x,y,pch=19)
abline(regNId,col=darkcols[1])
abline(regPId,col=darkcols[2])
abline(regGId,col=darkcols[3])
abline(regIGId,col=darkcols[4])
abline(regTwId,lty=2)
The predictions are very very close, aren’t they ? In the case of the exponential prediction, we obtain
plot(x,y,pch=19)
u=seq(.8,5.2,by=.01)
lines(u,predict(regNlog,newdata=data.frame(x=u),type="response"),col=darkcols[1])
lines(u,predict(regPlog,newdata=data.frame(x=u),type="response"),col=darkcols[2])
lines(u,predict(regGlog,newdata=data.frame(x=u),type="response"),col=darkcols[3])
lines(u,predict(regIGlog,newdata=data.frame(x=u),type="response"),col=darkcols[4])
lines(u,predict(regTwlog,newdata=data.frame(x=u),type="response"),lty=2)
We can actually look closer. For instance, in the linear case, consider the slope obtained with a Tweedie model (which actually includes all the parametric families mentioned here)
pente=function(gamma) summary(glm(y~x,family=tweedie(var.power=gamma,link.power=1),data=base))$coefficients[2,1:2]
Vgamma = seq(-.5,3.5,by=.05)
Vpente = Vectorize(pente)(Vgamma)
plot(Vgamma,Vpente[1,],type="l",lwd=3,ylim=c(.965,1.03),xlab="power",ylab="slope")
The slope here is always very very close to one ! Even more if we add a confidence interval
plot(Vgamma,Vpente[1,])
lines(Vgamma,Vpente[1,]+1.96*Vpente[2,],lty=2)
lines(Vgamma,Vpente[1,]-1.96*Vpente[2,],lty=2)
Heuristically, for the Gamma regression, or the Inverse Gaussian one, because the variance is a power of the prediction, if the prediction is small (here on the left), the variance should be small. So, on the left of the graph, the error should be small with a higher power for the variance function. And that’s indeed what we observe here
erreur=function(gamma) predict(glm(y~x,family=tweedie(var.power=gamma,link.power=1),data=base),newdata=data.frame(x=1),type="response")-y[x==1]
Verreur = Vectorize(erreur)(Vgamma)
plot(Vgamma,Verreur,type="l",lwd=3,ylim=c(-.1,.04),xlab="power",ylab="error")
abline(h=0,lty=2)
Of course, we can do the same with the exponential models
pente=function(gamma) summary(glm(y~x,family=tweedie(var.power=gamma,link.power=0),data=base))$coefficients[2,1:2]
Vpente = Vectorize(pente)(Vgamma)
plot(Vgamma,Vpente[1,],type="l",lwd=3)
or, if we add the confidence bands, we obtain
plot(Vgamma,Vpente[1,],ylim=c(0,.8),type="l",lwd=3,xlab="power",ylab="slope")
lines(Vgamma,Vpente[1,]+1.96*Vpente[2,],lty=2)
lines(Vgamma,Vpente[1,]-1.96*Vpente[2,],lty=2)
So here also, the “slope” is rather similar… And if we look at the error we make on the left part of the graph, we obtain
erreur=function(gamma) predict(glm(y~x,family=tweedie(var.power=gamma,link.power=0),data=base),newdata=data.frame(x=1),type="response")-y[x==1]
Verreur = Vectorize(erreur)(Vgamma)
plot(Vgamma,Verreur,type="l",lwd=3,ylim=c(.001,.32),xlab="power",ylab="error")
So my point is that the distribution is usually not the most important aspect of GLMs, even if the chapters of books on GLMs are organized by distribution… But as mentioned in another post, if you consider a nonlinear transformation, like we have with GAMs, the story is more complicated…
Bailey (1963) and Poisson regression on two factors
Consider the following dataset, from A Theory of Extramarital Affairs, by Ray Fair, published in 1978 in the Journal of Political Economy, with 563 observations, and nine variables : eight covariates, and the variable of interest, the number of extramarital affairs, over a year,
base = read.table("http://freakonometrics.free.fr/baseaffairs.txt",header=TRUE)
str(base)
'data.frame': 563 obs. of 9 variables:
 $ SEX         : int 1 0 0 1 1 0 0 1 0 1 ...
 $ AGE         : num 37 27 32 57 22 32 22 57 32 22 ...
 $ YEARMARRIAGE: num 10 4 15 15 0.75 1.5 0.75 15 15 1.5 ...
 $ CHILDREN    : int 0 0 1 1 0 0 0 1 1 0 ...
 $ RELIGIOUS   : int 3 4 1 5 2 2 2 2 4 4 ...
 $ EDUCATION   : int 18 14 12 18 17 17 12 14 16 14 ...
 $ OCCUPATION  : int 7 6 1 6 6 5 1 4 1 4 ...
 $ SATISFACTION: int 4 4 4 5 3 5 3 4 2 5 ...
 $ Y           : int 0 0 0 0 0 0 0 0 0 0 ...
Let us focus on two categorical covariates, related to the importance of religion, and the occupation
df=data.frame(y=base$Y,
              religion=as.factor(base$RELIGIOUS),
              occupation=as.factor(base$OCCUPATION),
              expo = 1)
(E=xtabs(expo~religion+occupation,data=df))
        occupation
religion  1  2  3  4  5  6  7
       1  4  1  8  4 16  9  0
       2 23  3 11 17 56 36  6
       3 29  1 10 12 39 25  2
       4 38  7 12 21 59 44  2
       5 13  1  3 10 19 19  3
(N=xtabs(y~religion+occupation,data=df))
        occupation
religion  1  2  3  4  5  6  7
       1  4  1 13  3 13  7  0
       2  1  1 13 10 25 43 10
       3 15  0 12 11 34 35  1
       4 24  1  3 15 11  9 10
       5  6  0  0  6 11  7  0
The two tables above are the exposure (number of observations) and the number of extramarital affairs, here as contingency tables. Without any covariate, one can assume that N\sim\mathcal{P}(\lambda\cdot E), where \lambda would be
sum(N)/sum(E)
[1] 0.6305506
The idea with the margin method is to assume that N_{i,j}=E_{i,j}\cdot\lambda_{i,j} where \lambda_{i,j}=A_i\cdot B_j. Bailey (1963) added two series of constraints: per row, \sum_j N_{i,j}=\sum_j E_{i,j}\cdot A_i\cdot B_j for any i, and similarly, per column, \sum_i N_{i,j}=\sum_i E_{i,j}\cdot A_i\cdot B_j for any j. From the first series of constraints, write A_i=\frac{\sum_j N_{i,j}}{\sum_j E_{i,j}\cdot B_j} and use the second series to write B_j=\frac{\sum_i N_{i,j}}{\sum_i E_{i,j}\cdot A_i}. Because we need the A_i‘s to compute the B_j‘s, and conversely, it is natural to consider an iterative procedure to solve the system. Observe that we do not have uniqueness…
Consider here some starting values for A_i‘s and B_j‘s
A=rep(1,length(levels(df$religion)))
B=rep(1,length(levels(df$occupation)))*sum(N)/sum(E)
A
[1] 1 1 1 1 1
B
[1] 0.6305506 0.6305506 0.6305506 0.6305506 0.6305506 0.6305506 0.6305506
The predicted number of extramarital affairs would be \hat N_{i,j}=E_{i,j}\cdot\hat A_i\cdot \hat B_j
E * A%*%t(B)
        occupation
religion          1         2         3          4          5          6         7
       1  2.5222025 0.6305506 5.0444050  2.5222025 10.0888099  5.6749556 0.0000000
       2 14.5026643 1.8916519 6.9360568 10.7193606 35.3108348 22.6998224 3.7833037
       3 18.2859680 0.6305506 6.3055062  7.5666075 24.5914742 15.7637655 1.2611012
       4 23.9609236 4.4138544 7.5666075 13.2415631 37.2024867 27.7442274 1.2611012
       5  8.1971581 0.6305506 1.8916519  6.3055062 11.9804618 11.9804618 1.8916519
sum(B*E[1,])
[1] 26.48313
sum(B*E[2,])
[1] 95.84369
apply(t(B*t(E)),1,sum)
        1         2         3         4         5
 26.48313  95.84369  74.40497 115.39076  42.87744
sum(A*E[,1])
[1] 107
sum(A*E[,2])
[1] 13
apply(A*E,2,sum)
  1   2   3   4   5   6   7
107  13  44  64 189 133  13
From expressions above, observe that one can very easily write expressions of A_i‘s and B_j‘s as functions of B_j‘s and A_i‘s respectively
A=apply(N,1,sum)/apply(t(B*t(E)),1,sum)
B=apply(N,2,sum)/apply(A*E,2,sum)
Let it iterate one thousand times
for(i in 1:1000){
  A=apply(N,1,sum)/apply(t(B*t(E)),1,sum)
  B=apply(N,2,sum)/apply(A*E,2,sum)
}
We obtain here
A
        1         2         3         4         5
1.5404346 1.0447195 1.4825650 0.6553159 0.6634763
B
        1         2         3         4         5         6         7
0.4685515 0.2629769 0.8454435 0.7245310 0.4889697 0.7770553 1.6753750
E * A%*%t(B)
        occupation
religion          1         2          3          4          5          6          7
       1  2.8870914 0.4050987 10.4188024  4.4643702 12.0516123 10.7730250  0.0000000
       2 11.2586111 0.8242113  9.7157637 12.8678376 28.6068235 29.2249717 10.5017811
       3 20.1450811 0.3898804 12.5342484 12.8899708 28.2722423 28.8008726  4.9677044
       4 11.6678702 1.2063307  6.6483904  9.9707299 18.9053460 22.4055332  2.1957997
       5  4.0413463 0.1744790  1.6827951  4.8070914  6.1639760  9.7955975  3.3347148
That is our prediction, per category, of the number of affairs. Observe that here, sums per row are equal to observed numbers,
apply(N,1,sum)
  1   2   3   4   5
 41 103 108  73  30
apply(E * A%*%t(B),1,sum)
  1   2   3   4   5
 41 103 108  73  30
as well as sums per column
apply(N,2,sum)
  1   2   3   4   5   6   7
 50   3  41  45  94 101  21
apply(E * A%*%t(B),2,sum)
  1   2   3   4   5   6   7
 50   3  41  45  94 101  21
Now, why should I mention that here, in the section on the Poisson regression in our course ? Because actually, this is exactly what we get if we run a Poisson regression on those two covariates
reg=glm(y~religion+occupation,data=df,family=poisson)
summary(reg)

Coefficients:
            Estimate Std. Error z value Pr(>|z|)
(Intercept) -0.32604    0.21325  -1.529 0.126285
religion2   -0.38832    0.18791  -2.066 0.038783 *
religion3   -0.03829    0.18585  -0.206 0.836771
religion4   -0.85470    0.19757  -4.326 1.52e-05 ***
religion5   -0.84233    0.24416  -3.450 0.000561 ***
occupation2 -0.57758    0.59549  -0.970 0.332083
occupation3  0.59022    0.21349   2.765 0.005699 **
occupation4  0.43588    0.20603   2.116 0.034381 *
occupation5  0.04265    0.17590   0.242 0.808399
occupation6  0.50587    0.17360   2.914 0.003569 **
occupation7  1.27415    0.26298   4.845 1.27e-06 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
First of all, observe that the total sum of predictions equals the total sum of observations
yp = predict(reg,type="response")
sum(yp)
[1] 355
sum(df$y)
[1] 355
But actually, the predicted number of affairs, for our 35 classes, is exactly what we got using Bailey’s technique
xtabs(yp~df$religion+df$occupation)
           df$occupation
df$religion          1         2          3          4          5          6          7
          1  2.8870914 0.4050987 10.4188024  4.4643702 12.0516123 10.7730250  0.0000000
          2 11.2586112 0.8242113  9.7157637 12.8678376 28.6068235 29.2249717 10.5017811
          3 20.1450813 0.3898804 12.5342484 12.8899708 28.2722424 28.8008726  4.9677044
          4 11.6678703 1.2063307  6.6483904  9.9707300 18.9053460 22.4055332  2.1957997
          5  4.0413464 0.1744790  1.6827951  4.8070914  6.1639761  9.7955975  3.3347148
E * A%*%t(B)
        occupation
religion          1         2          3          4          5          6          7
       1  2.8870914 0.4050987 10.4188024  4.4643702 12.0516123 10.7730250  0.0000000
       2 11.2586111 0.8242113  9.7157637 12.8678376 28.6068235 29.2249717 10.5017811
       3 20.1450811 0.3898804 12.5342484 12.8899708 28.2722423 28.8008726  4.9677044
       4 11.6678702 1.2063307  6.6483904  9.9707299 18.9053460 22.4055332  2.1957997
       5  4.0413463 0.1744790  1.6827951  4.8070914  6.1639760  9.7955975  3.3347148
To be more specific, up to a multiplicative constant, the two series of coefficients are equal here, e.g. for the A_i‘s
a=exp(coefficients(reg)[1]+c(0,coefficients(reg)[2:5]))
a/a[1]
          religion2 religion3 religion4 religion5
1.0000000 0.6781979 0.9624329 0.4254098 0.4307072
A/A[1]
        1         2         3         4         5
1.0000000 0.6781979 0.9624329 0.4254098 0.4307072
but also for B_j‘s
b=exp(coefficients(reg)[1]+c(0,coefficients(reg)[6:11]))
b/b[1]
            occupation2 occupation3 occupation4 occupation5 occupation6 occupation7
  1.0000000   0.5612551   1.8043769   1.5463210   1.0435773   1.6584203   3.5756477
B/B[1]
        1         2         3         4         5         6         7
1.0000000 0.5612551 1.8043770 1.5463210 1.0435773 1.6584203 3.5756478
This will have major implications in non-life insurance models (for claims reserving).
Classification from scratch, logistic regression 1/8
Let us start today our series on classification from scratch…
The logistic regression is based on the assumption that given covariates \mathbf{x}, Y has a Bernoulli distribution,Y|\mathbf{X}=\mathbf{x}\sim\mathcal{B}(p_{\mathbf{x}}),~~~~p_\mathbf{x}=\frac{\exp[\mathbf{x}^T\mathbf{\beta}]}{1+\exp[\mathbf{x}^T\mathbf{\beta}]}The goal is to estimate parameter \mathbf{\beta}.
Recall that the heuristics for the use of that function for the probability is that\log[\text{odds}(Y=1)]=\log\frac{\mathbb{P}[Y=1]}{\mathbb{P}[Y=0]}=\mathbf{x}^T\mathbf{\beta}
Maximimum of the (log)-likelihood function
The log-likelihood is here\log\mathcal{L} = \sum_{i=1}^n y_i\log p_i+(1-y_i)\log (1-p_i) where p_{i}=(1+\exp[-\mathbf{x}_i^T\mathbf{\beta}])^{-1}. Numerical techniques are based on (numerical) gradient descent to compute the maximum of the likelihood function. The (negative) log-likelihood is the following function
y = myocarde$PRONO
X = cbind(1,as.matrix(myocarde[,1:7]))
negLogLik = function(beta){
  -sum(-y*log(1 + exp(-(X%*%beta))) - (1-y)*log(1 + exp(X%*%beta)))
}
We use the minus sign since standard optimization routines compute minima, not maxima. Now, to find the minimum of that function, we need a starting point to initiate the algorithm
beta_init = lm(PRONO~.,data=myocarde)$coefficients
Why not start with the OLS estimates? Somehow, we might think that at least the signs should be correct, for instance. Anyway, we need a starting point, so let us use that one.
logistic_opt = optim(par = beta_init, negLogLik, hessian=TRUE,
                     method = "BFGS", control=list(abstol=1e-9))
Here, we obtain
logistic_opt$par
 (Intercept)        FRCAR        INCAR        INSYS
 1.656926397  0.045234029 -2.119441743  0.204023835
       PRDIA        PAPUL        PVENT        REPUL
-0.102420095  0.165823647 -0.081047525 -0.005992238
Let us verify here that this output is valid. For instance, what if we change the value of the starting point (randomly)
simu = function(i){
  logistic_opt_i = optim(par = rnorm(8,0,3)*beta_init,
                         negLogLik, hessian=TRUE, method = "BFGS",
                         control=list(abstol=1e-9))
  logistic_opt_i$par[2:3]
}
v_beta = t(Vectorize(simu)(1:1000))
plot(v_beta)
par(mfrow=c(1,2))
hist(v_beta[,1],xlab=names(myocarde)[1])
hist(v_beta[,2],xlab=names(myocarde)[2])
Ooops. There is a problem here. Clearly, we cannot rely on numerical optimization here. We can think about using another optimization routine
library(optimx)
logit = function(mX, vBeta) {
  exp(mX %*% vBeta)/(1+ exp(mX %*% vBeta))
}
logLikelihoodLogitStable = function(vBeta, mX, vY) {
  -sum(vY*(mX %*% vBeta - log(1+exp(mX %*% vBeta))) +
       (1-vY)*(-log(1 + exp(mX %*% vBeta))))
}
likelihoodScore = function(vBeta, mX, vY) {
  return(t(mX) %*% (logit(mX, vBeta) - vY) )
}
optimLogitLBFGS = optimx(beta_init, logLikelihoodLogitStable,
                         method = 'L-BFGS-B', gr = likelihoodScore,
                         mX = X, vY = y, hessian=TRUE)
The optimum is here
attr(optimLogitLBFGS, "details")[[2]]
              [,1]
       0.066680272
FRCAR  0.003080542
INCAR  0.079031364
INSYS -0.001586194
PRDIA  0.040500697
PAPUL -0.041870705
PVENT -0.014162756
REPUL  0.195632244
Let’s be honest here, I do not feel comfortable with those techniques. So, what happened here?
Here, the technique we use is based on the following idea,\mathbf{\beta}_{new}=\mathbf{\beta}_{old} -\left(\frac{\partial^2\log\mathcal{L}(\mathbf{\beta}_{old})}{\partial\mathbf{\beta}\partial\mathbf{\beta}^T}\right)^{-1}\cdot \frac{\partial\log\mathcal{L}(\mathbf{\beta}_{old})}{\partial\mathbf{\beta}}The problem is that my computer does not know this first and second derivatives. So it will compute them using approximation techniques.
Actually, it is possible to use functions dedicated to such computation
library(numDeriv)
library(MASS)
logit = function(x){1/(1+exp(-x))}
logLik = function(beta, X, y){
  -sum(y*log(logit(X%*%beta)) + (1-y)*log(1-logit(X%*%beta)))
}
optim_second = function(beta, num_iter){
  LL = vector()
  for(i in 1:num_iter){
    grad = (t(X)%*%(logit(X%*%beta) - y))
    H = hessian(logLik, beta, method = "complex", X = X, y = y)
    beta = beta - ginv(H)%*%grad
    LL[i] = logLik(beta, X, y)
  }
  result = list(beta, H)
  return(result)
}
With our OLS starting point, we obtain
opt0 = optim_second(beta_init,500)
opt0[[1]]
             [,1]
[1,]  0.951074420
[2,]  0.018860280
[3,]  0.275428978
[4,]  0.144803636
[5,] -0.058535606
[6,]  0.001182178
[7,] -0.108651776
[8,] -0.002940315
But if we try with another starting point
opt1 = optim_second(beta_init*runif(8),500)
opt1[[1]]
             [,1]
[1,]  0.052894794
[2,]  0.024718435
[3,]  0.167953661
[4,]  0.171662947
[5,] -0.057458066
[6,] -0.011361034
[7,] -0.107532114
[8,] -0.002679064
Clearly, some coefficients are rather close. But others aren’t. From my point of view, that is a major problem (keep in mind that we do not deal here with massive data! There are only 7 explanatory variables, and only 71 observations).
Why not try to be clever, and use the analytical expressions of those derivatives? Even if some people claim the opposite, it can actually be useful to do the maths, instead of relying only on numerical values.
Newton (or Fisher) Algorithm
If you open any Econometrics textbooks (one can also try to derive it), you will get \frac{\partial\log\mathcal{L}(\mathbf{\beta}_{old})}{\partial\mathbf{\beta}}=\mathbf{X}^T(\mathbf{y}-\mathbf{p}_{old})
while\frac{\partial^2\log\mathcal{L}(\mathbf{\beta}_{old})}{\partial\mathbf{\beta}\partial\mathbf{\beta}^T}=-\mathbf{X}^T\mathbf{\Delta}_{old}\mathbf{X}
Y=myocarde$PRONO
X=cbind(1,as.matrix(myocarde[,1:7]))
colnames(X)=c("Inter",names(myocarde[,1:7]))
beta=as.matrix(lm(Y~0+X)$coefficients,ncol=1)
for(s in 1:9){
  pi=exp(X%*%beta[,s])/(1+exp(X%*%beta[,s]))
  gradient=t(X)%*%(Y-pi)
  omega=matrix(0,nrow(X),nrow(X));diag(omega)=(pi*(1-pi))
  Hessian=-t(X)%*%omega%*%X
  beta=cbind(beta,beta[,s]-solve(Hessian)%*%gradient)}
Observe that here, I use only ten iterations of the algorithm !
beta[,8:10]
                [,1]          [,2]          [,3]
XInter -10.187641685 -10.187641696 -10.187641696
XFRCAR   0.138178119   0.138178119   0.138178119
XINCAR  -5.862429035  -5.862429037  -5.862429037
XINSYS   0.717084018   0.717084018   0.717084018
XPRDIA  -0.073668171  -0.073668171  -0.073668171
XPAPUL   0.016756506   0.016756506   0.016756506
XPVENT  -0.106776012  -0.106776012  -0.106776012
XREPUL  -0.003154187  -0.003154187  -0.003154187
The thing is that it seems to converge extremely fast. And it is rather robust! Look at what we get if we change our starting point
beta=as.matrix(lm(Y~0+X)$coefficients,ncol=1)*runif(8)
for(s in 1:9){
  pi=exp(X%*%beta[,s])/(1+exp(X%*%beta[,s]))
  gradient=t(X)%*%(Y-pi)
  omega=matrix(0,nrow(X),nrow(X));diag(omega)=(pi*(1-pi))
  Hessian=-t(X)%*%omega%*%X
  beta=cbind(beta,beta[,s]-solve(Hessian)%*%gradient)}
beta[,8:10]
                [,1]          [,2]          [,3]
XInter -10.187641586 -10.187641696 -10.187641696
XFRCAR   0.138178118   0.138178119   0.138178119
XINCAR  -5.862429017  -5.862429037  -5.862429037
XINSYS   0.717084013   0.717084018   0.717084018
XPRDIA  -0.073668172  -0.073668171  -0.073668171
XPAPUL   0.016756508   0.016756506   0.016756506
XPVENT  -0.106776012  -0.106776012  -0.106776012
XREPUL  -0.003154187  -0.003154187  -0.003154187
Nice, isn’t it? Looks like we got our winner, don’t we? And one can use the inverse of the Hessian matrix to get standard deviations.
Weighted Least-Squares
Let us go one step further. We’ve seen that we want to compute something like\mathbf{\beta}_{new} =(\mathbf{X}^T\mathbf{\Delta}_{old}\mathbf{X})^{-1}\mathbf{X}^T\mathbf{\Delta}_{old}\mathbf{z}(if we do substitute matrices in the analytical expressions) where \mathbf{z}=\mathbf{X}\mathbf{\beta}_{old}+\mathbf{\Delta}_{old}^{-1}[\mathbf{y}-\mathbf{p}_{old}]. But actually, that’s simply a standard least-square problem\mathbf{\beta}_{new} = \text{argmin}\left\lbrace(\mathbf{z}-\mathbf{X}\mathbf{\beta})^T\mathbf{\Delta}_{old}^{-1}(\mathbf{z}-\mathbf{X}\mathbf{\beta})\right\rbraceThe only problem here is that weights \mathbf{\Delta}_{old} are functions of unknown \mathbf{\beta}_{old}. But actually, if we keep iterating, we should be able to solve it : given the \mathbf{\beta} we got the weights, and with the weights, we can use weighted OLS to get an updated \mathbf{\beta}. That’s the idea of iteratively reweighted least squares.
The algorithm will be
df = myocarde
beta_init = lm(PRONO~.,data=df)$coefficients
X = cbind(1,as.matrix(myocarde[,1:7]))
beta = beta_init
for(s in 1:1000){
  p = exp(X %*% beta) / (1+exp(X %*% beta))
  omega = diag(nrow(df))
  diag(omega) = (p*(1-p))
  df$Z = X %*% beta + solve(omega) %*% (df$PRONO - p)
  beta = lm(Z~.,data=df[,-8], weights=diag(omega))$coefficients
}
and the output is here
beta
  (Intercept)         FRCAR         INCAR         INSYS         PRDIA
-10.187641696   0.138178119  -5.862429037   0.717084018  -0.073668171
        PAPUL         PVENT         REPUL
  0.016756506  -0.106776012  -0.003154187
which is almost what we’ve obtained before. Nice isn’t it ? Actually, here we also have standard deviations of estimators
summary( lm(Z~.,data=df[,-8], weights=diag(omega)))

Coefficients:
              Estimate Std. Error t value Pr(>|t|)
(Intercept) -10.187642  10.668138  -0.955    0.343
FRCAR         0.138178   0.102340   1.350    0.182
INCAR        -5.862429   6.052560  -0.969    0.336
INSYS         0.717084   0.503527   1.424    0.159
PRDIA        -0.073668   0.261549  -0.282    0.779
PAPUL         0.016757   0.306666   0.055    0.957
PVENT        -0.106776   0.099145  -1.077    0.286
REPUL        -0.003154   0.004386  -0.719    0.475
The standard glm function
Of course, it is possible to use an R built-in function to get our estimate
summary(glm(PRONO~.,data=myocarde,family=binomial(link = "logit")))

Coefficients:
              Estimate Std. Error z value Pr(>|z|)
(Intercept) -10.187642  11.895227  -0.856    0.392
FRCAR         0.138178   0.114112   1.211    0.226
INCAR        -5.862429   6.748785  -0.869    0.385
INSYS         0.717084   0.561445   1.277    0.202
PRDIA        -0.073668   0.291636  -0.253    0.801
PAPUL         0.016757   0.341942   0.049    0.961
PVENT        -0.106776   0.110550  -0.966    0.334
REPUL        -0.003154   0.004891  -0.645    0.519
Application and visualisation
Let us visualize the prediction obtained from the logistic regression, on our second dataset
x = c(.4,.55,.65,.9,.1,.35,.5,.15,.2,.85)
y = c(.85,.95,.8,.87,.5,.55,.5,.2,.1,.3)
z = c(1,1,1,1,1,0,0,1,0,0)
df = data.frame(x1=x,x2=y,y=as.factor(z))
reg = glm(y~x1+x2,data=df,family=binomial(link = "logit"))
u = seq(0,1,length=101)
p = function(x,y) predict.glm(reg,newdata=data.frame(x1=x,x2=y),type="response")
v = outer(u,u,p)
image(u,u,v,xlab="Variable 1",ylab="Variable 2",col=clr10,breaks=(0:10)/10)
points(x,y,pch=19,cex=1.5,col="white")
points(x,y,pch=c(1,19)[1+z],cex=1.5)
contour(u,u,v,levels = .5,add=TRUE)
Here level curves – or iso-probability curves – are linear, so the space is divided in two (0 and 1, survival and death, white and black) by a straight line (or a hyperplane in higher dimension). Furthermore, since we have a linear model, if we change the cutoff (the threshold used to create the two classes), we obtain another straight line (or hyperplane) parallel to the first one.
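To visualise that last remark, one can simply add a few more level curves to the plot above (a small addition written for this post, reusing the objects u, v and the fitted model already defined): the iso-probability lines at 25\% and 75\% are indeed parallel to the 50\% one.

# add the 25% and 75% iso-probability lines to the previous plot
contour(u,u,v,levels = c(.25,.75),add=TRUE,lty=2)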
Next time, we will introduce splines to smooth those continuous covariates… to be continued.
Classification from scratch, overview 0/8
Before my course on « big data and economics » at the university of Barcelona in July, I wanted to upload a series of posts on classification techniques, to get an insight on machine learning tools.
According to a common saying, machine learning algorithms are black boxes. I wanted to get back to that claim. First of all, isn’t it the case also for regression models, like generalized additive models (with splines)? Do you really know what the algorithm is doing? Even the logistic regression. In textbooks, we can easily find math formulas. But what is really done when I run it, in R?
When I started working in academia, someone told me something like « if you really want to understand a theory, teach it ». And that has been my motto for more than 15 years. I wanted to add a second part to that statement: « if you really want to understand an algorithm, recode it ». So let’s try this… My ambition is to recode (more or less) most of the standard algorithms used in predictive modeling, from scratch, in R. What I plan to mention, within the next two weeks, will be
- the logistic regression
- the logistic regression with splines
- the logistic regression with kernels (and knn)
- the penalized logistic regression, ridge
- the penalized logistic regression, lasso
- the heuristics of neural nets
- an introduction to SVM
- classification trees
- bagging and random forests
- gradient boosting (and adaboost)
I will use two datasets to illustrate. The first one is inspired by the cover of « Foundations of Machine Learning » by Mehryar Mohri, Afshin Rostamizadeh and Ameet Talwalkar. At least, with this dataset, it will be possible to plot predictions (since there are only two – continuous – features)
x = c(.4,.55,.65,.9,.1,.35,.5,.15,.2,.85)
y = c(.85,.95,.8,.87,.5,.55,.5,.2,.1,.3)
z = c(1,1,1,1,1,0,0,1,0,0)
df = data.frame(x1=x,x2=y,y=as.factor(z))
plot(x,y,pch=c(1,19)[1+z])
Here is some code to get a visualization of the prediction (here the probability to be a black point)
rmatrix_model = function(model){
  u = seq(0,1,length=101)
  p = function(x,y) predict(model,newdata=data.frame(x1=x,x2=y),type="response")
  v = outer(u,u,p)
  return(v)}
nice_graph=function(v){
  u = seq(0,1,length=101)
  image(u,u,v,xlab="Variable 1",ylab="Variable 2",col=clr10[c(1,10)],breaks=c(0,5,10)/10)
  points(x,y,pch=19,cex=1.5,col="white")
  points(x,y,pch=c(1,19)[1+z],cex=1.5)
  contour(u,u,v,levels = .5,add=TRUE)
}
reg = glm(y~x1+x2,data=df,family=binomial)
nice_graph(rmatrix_model(reg))
Note that colors are defined here as
clr10= c("#ffffff","#f7fcfd","#e5f5f9","#ccece6","#99d8c9","#66c2a4","#41ae76","#238b45","#006d2c","#00441b") |
or with some nonlinear model
The second one is a dataset I got from Gilbert Saporta, about heart attacks and death (our binary variable).
myocarde = read.table("http://freakonometrics.free.fr/myocarde.csv",head=TRUE, sep=";")
myocarde$PRONO = (myocarde$PRONO=="SURVIE")*1
y = myocarde$PRONO
X = as.matrix(cbind(1,myocarde[,1:7]))
So far, I do not plan to talk (too much) about the choice of tuning parameters (and cross-validation), or about comparing models, etc. The goal here is simply to understand what’s going on when we call either glm, glmnet, gam, random forest, svm, xgboost, or any function that returns a predictive model.
Actuariat de l’Assurance Non-Vie #7
After quite some time spent on count models, we now start modelling individual claim costs. The slides are online.
Actuariat de l’Assurance Non-Vie #6
We will come back to count models, after the digression on GLMs and overdispersion, and talk a bit about a posteriori ratemaking. In particular, we will see two rather different approaches, with credibility models and the bonus-malus approach. The slides are online.
Actuariat de l’Assurance Non-Vie #5
A quick return to count models, with the digression on GLMs, and models allowing for overdispersion (theoretically impossible in a Poisson model, but more realistic given the specific features of insurance data). The slides are online.
Actuariat de l’Assurance Non-Vie #4
This Tuesday, the actuarial course continues with generalized linear models (which encompass logistic regression and Poisson regression, on which we have spent some time). The slides are online.
Multinomial Logit as an Iterated Logit Regression
For the second section of the course at ENSAE, yesterday, we’ve seen how to run a multinomial logistic regression model. It is simply an extension of the binomial logistic regression. But actually, it is also possible to consider iterative binomial regressions.
Consider here a response variable Y with a multinomial distribution (3 factors to have something more general than the binomial), taking values \{A,B,C\}, with respective probabilities \mathbf{p}=(p_A,p_B,p_C). Here is a code to generate some multinomial variables
msample=function(A,B,C){
Y=rep(NA,B)
for(i in 1:B){Y[i]=sample(A,size=1,prob=C[i,])}
return(Y)
}
and here is a code to generate a dataset with n rows,
generate3=function(n,x,pb=c(-2,0)){
set.seed(x)
X1=runif(n)
X2=runif(n)
X3=runif(n)
s1=pb[1]+X1+X2
s2=pb[2]-X1+X2
P1=exp(s1)/(1+exp(s1)+exp(s2))
P2=exp(s2)/(1+exp(s1)+exp(s2))
Y=msample(0:2,n,cbind(1-P1-P2,P1,P2))
df=data.frame(Y=Y,X1=X1,X2=X2,X3=X3)
return(df)
}
Let us generate a training dataset and a validation one
pb=c(.31,.42)
DF1=generate3(1000,1,pb=pb)
DF2=generate3(500,2,pb=pb)
With a multinomial logistic regression
\mathbb{P}[Y=A|\mathbf{x}]=\frac{\exp[\mathbf{x}^{\text{T}}\mathbf{\alpha}]}{1+\exp[\mathbf{x}^{\text{T}}\mathbf{\alpha}]+\exp[\mathbf{x}^{\text{T}}\mathbf{\beta}]}
\mathbb{P}[Y=B|\mathbf{x}]=\frac{\exp[\mathbf{x}^{\text{T}}\mathbf{\beta}]}{1+\exp[\mathbf{x}^{\text{T}}\mathbf{\alpha}]+\exp[\mathbf{x}^{\text{T}}\mathbf{\beta}]}
\mathbb{P}[Y=C|\mathbf{x}]=\frac{1}{1+\exp[\mathbf{x}^{\text{T}}\mathbf{\alpha}]+\exp[\mathbf{x}^{\text{T}}\mathbf{\beta}]}
For convenience, consider the most popular factor in our training dataset
modalite=names(sort(table(DF1$Y),decreasing = TRUE))
Consider a regression model on the simulated dataset (with several covariates), let us estimate it, and let us get predictions.
library(nnet)
reg=multinom(as.factor(Y) ~ ., data = DF1)
mp1=predict (reg, DF1, "probs")
mp2=predict (reg, DF2, "probs")
An alternative can be the following.
Consider a first regression model on the Bernoulli variable Y_A=\mathbf{1}(Y=A). Actually, we will consider the most common level, but for convenience, assume that it is A.
\mathbb{P}[Y_A=1|\mathbf{x}]=\frac{\exp[\mathbf{x}^{\text{T}}\mathbf{a}]}{1+\exp[\mathbf{x}^{\text{T}}\mathbf{a}]}
On our dataset, estimate that model, and get predictions. In the case where Y\neq A, define another Bernoulli variable Y_B=\mathbf{1}(Y=B|Y\neq A). We can estimate that model and derive two probabilities, \mathbb{P}(Y=B|Y\neq A) and \mathbb{P}(Y=C|Y\neq A) (the sum of the two being equal to 1). Based on those two models, it is possible to compute the three probabilities we are looking for. \mathbb{P}[Y=A] is obtained from the first model, and we can derive the other two from \mathbb{P}[Y=B|Y\neq A]\cdot\mathbb{P}[Y\neq A] and \mathbb{P}[Y=C|Y\neq A]\cdot\mathbb{P}[Y\neq A].
reg1=glm((Y==modalite[1])~.,data=DF1,family=binomial)
reg2=glm((Y==modalite[2])~.,data=DF1[-which(DF1$Y==modalite[1]),],family=binomial)
p11=predict (reg1, newdata=DF1, type="response")
p12=predict (reg2, newdata=DF1, type="response")
p21=predict (reg1, newdata=DF2, type="response")
p22=predict (reg2, newdata=DF2, type="response")
mmp1=cbind(p11,(1-p11)*p12,(1-p11)*(1-p12))
mmp2=cbind(p21,(1-p21)*p22,(1-p21)*(1-p22))
colnames(mmp1)=colnames(mmp2)=modalite
Let us compare the predicted probabilities, on the same dataset (here the training dataset)
> mmp1[1:9,c("0","1","2")]
0 1 2
1 0.19728737 0.4991805 0.3035321
2 0.17244580 0.5648537 0.2627005
3 0.19291753 0.5971058 0.2099767
4 0.09087176 0.7787304 0.1303978
5 0.23400225 0.4083022 0.3576955
6 0.18063647 0.6637352 0.1556283
7 0.13188881 0.7402710 0.1278401
8 0.13776970 0.6524959 0.2097344
9 0.12325864 0.6790336 0.1977078
> mp1[1:9,c("0","1","2")]
0 1 2
1 0.19691036 0.5022692 0.3008205
2 0.17123189 0.5680647 0.2607034
3 0.19293066 0.5984402 0.2086291
4 0.08821851 0.7813318 0.1304497
5 0.23470739 0.4109990 0.3542936
6 0.18249687 0.6602168 0.1572863
7 0.13128711 0.7400898 0.1286231
8 0.13525341 0.6553618 0.2093848
9 0.12090016 0.6815915 0.1975084
The two are very close. So yes, it is possible to see the multinomial regression as some sequential binomial regressions.