Next Monday, I will have one hour to present our work, with Olivier Cabrignac and Ewen Gallic, on “Modeling Joint Lives within Families”, at the actuarial seminar at UConn. Slides are now online
Talk at the risk and actuarial seminar, Sydney, Australia
In two weeks (Wednesday, 13th) I will be giving a talk at the risk and actuarial seminar, at UNSW, in Sydney. I will have one hour to present our work, with Olivier Cabrignac and Ewen Gallic, on “Modeling Joint Lives within Families”. Slides are now available
PhD Defense in Lyon
Today, I will go to Lyon for the PhD defense of Edouard Debonneuil (which will take place on Monday morning). His thesis is on the financial impacts of mortality improvements. Several models and scenarios are considered…
Probably more on that very interesting (and important) topic soon.
Proportion of people alive in 1945 that are still alive
In demography, we like to use life tables to estimate the probability that someone born in 1945 (say) is still alive nowadays. But another interesting quantity might be the probability that someone alive in 1945 is still alive nowadays.
The main difference is that we do not know when that person, alive in 1945, was born: someone who was already old in 1945 is very unlikely to be still alive in 2017. To compute those probabilities, we can use datasets from http://www.mortality.org/hmd/. More precisely, we need both death and birth data. I assume here that the datasets (text files) have been downloaded (it is necessary to register – for free – to get the data).
D=read.table("FRDeaths_1x1.txt",skip=1,header=TRUE)
B=read.table("FRBirths.txt",skip=1,header=TRUE)
In the death dataset, there is a “110+” for people older than 110 years. For convenience, let us cap our observations at 110 years old,
D$Age=as.numeric(as.character(D$Age))
D$Age[is.na(D$Age)]=110
Consider now a first function that will return, for people born in 1930 (say), two pieces of information:
- the number of people (here, let us consider women only) born in 1930 (from the birth database)
- the number of deaths of people of age 0 in 1930, of age 1 in 1931, of age 2 in 1932, etc.
The code is simple
nb=function(y=1930){
  debut=1816
  # deaths as a matrix: 111 ages (0-110) in rows, years 1816-2014 in columns
  MatDFemale=matrix(D$Female,nrow=111)
  colnames(MatDFemale)=debut+0:198
  # columns where the cohort born in year y is aged 0, 1, 2, ...
  cly=y-debut+1:111
  # follow the cohort along the diagonal: age 0 in year y, age 1 in year y+1, etc.
  deces=diag(MatDFemale[,cly[cly%in%1:199]])
  return(c(B$Female[B$Year==y],deces))}
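For instance (an illustrative call, not output from the original post),

comptage = nb(1930)
comptage[1]        # number of females born in 1930
comptage[1+15]     # number of deaths at age 14, hence in 1944
length(comptage)-1 # number of years over which the cohort is followed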
We have a single number for the number of births, and then a vector with the numbers of deaths. Consider now another function. Consider the people born in 1930. We want to get two numbers: the number of people still alive in 1945 (say), and the number of people still alive nowadays. The ratio will be the proportion of people born in 1930 that were alive in 1945 and are still alive in 2015.
pop=function(ne=1930,an=1945){
  comptage=nb(ne)
  # deaths of the cohort between birth (year ne) and year an
  s=0
  if(an>ne) s=sum(comptage[seq(2,1+an-ne)])
  # p1: people born in ne still alive in year an
  p1=max(comptage[1]-s,0)
  # p2: remove all subsequent deaths, to get those still alive at the end of the observation period
  p2=max(p1-sum(comptage[seq(2+an-ne,length(comptage))]),0)
  c(p1,p2)
}
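For the cohort born in 1930, for instance (an illustrative call),

p = pop(1930,1945)
p[1]       # people born in 1930 still alive in 1945
p[2]       # of those, the number still alive in 2015
p[2]/p[1]  # survival proportion for that single cohort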
Then, for a given year (say 1945), to get the proportion of people alive in 1945 that are still alive today, we need to count how many people born in 1944 were still alive in 1945, and in 2015, but also those born in 1943, in 1942, etc. We then simply take the ratio of the total number of people alive in 2015 over the total number of people alive in 1945,
ptn=function(y=1945){
  # one column per birth cohort: alive in year y (row 1), still alive in 2015 (row 2)
  V=Vectorize(function(x) pop(x,y))(1816:y)
  sum(V[2,!is.na(V[2,])])/sum(V[1,!is.na(V[1,])])
}
Hence, 22% of those alive in 1945 are still alive in 2015,
> ptn(1945)
[1] 0.2209435
Actually, instead of looking only at 1945, it is possible to get a plot
P=Vectorize(ptn)(1900:2010)
plot(1900:2010,P,type="l",ylim=0:1)
For instance,
> ptn(1975)
[1] 0.6377413
i.e. 63.7% of those alive in 1975 are still alive 40 years later. That is a rather interesting function, and I was surprised that I couldn’t find it in standard demography R packages…
Mortality by Weekday and Age
A few days ago, I mentioned a nice graph on Twitter,
Mortality by Weekday and Age https://t.co/LyzQ7nJABZ very interesting difference, young vs. old pic.twitter.com/EfrX0C1GBS
— Arthur Charpentier (@freakonometrics) 27 February 2016
My colleague Jean-Philippe was extremely sceptical, so I tried to reproduce that graph. The good thing is that we have the Social Security Death Master File for the US. To be more specific, I have three big files on my hard drive, and in order to reproduce that graph, we’ll load the data in chunks. But first, since we have the day of birth and the day of death, I need a function to compute the age, so here it is,
age_years <- function(earlier, later){
  lt <- data.frame(earlier, later)
  age <- as.numeric(format(lt[,2],format="%Y")) - as.numeric(format(lt[,1],format="%Y"))
  dayOnLaterYear <- ifelse(format(lt[,1],format="%m-%d")!="02-29",
    as.Date(paste(format(lt[,2],format="%Y"),"-",format(lt[,1],format="%m-%d"),sep="")),
    ifelse(as.numeric(format(later,format="%Y")) %% 400 == 0 |
           as.numeric(format(later,format="%Y")) %% 100 != 0 &
           as.numeric(format(later,format="%Y")) %% 4 == 0,
      as.Date(paste(format(lt[,2],format="%Y"),"-",format(lt[,1],format="%m-%d"),sep="")),
      as.Date(paste(format(lt[,2],format="%Y"),"-","02-28",sep=""))))
  age[which(dayOnLaterYear > lt$later)] <- age[which(dayOnLaterYear > lt$later)] - 1
  age
}
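As a quick sanity check (illustrative calls, not from the original post; the leap-year case is the tricky part),

age_years(as.Date("1980-06-15"), as.Date("2016-06-14"))  # 35, one day before the birthday
age_years(as.Date("1980-06-15"), as.Date("2016-06-15"))  # 36
age_years(as.Date("1980-02-29"), as.Date("2016-02-28"))  # 35, since 2016 is a leap year the 'birthday' is Feb 29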
The age_years function comes from github.com/nzcoops. Now, it is possible to create a similar table, based on that huge file (we have almost 50 million observations),
cols <- c(1,9,20,4,15,15,1,2,2,4,2,2,4,2,5,5,7)
noms_col <- c("code","ssn","last_name","name_suffix","first_name","middle_name",
  "VorPCode","date_death_m","date_death_d","date_death_y","date_birth_m",
  "date_birth_d","date_birth_y","state","zip_resid","zip_payment","blanks")
library(LaF)
TABLE_AGE_DAY=function(temp = "ssdm3"){
  # open the fixed-width file without loading it entirely into memory
  ssn <- laf_open_fwf(temp,column_widths = cols,
    column_types=rep("character",length(cols)),
    column_names = noms_col,trim = TRUE)
  object.size(ssn)
  # boundaries of chunks of 100,000 rows
  go_through <- seq(1,nrow(ssn),by = 1e05)
  if(go_through[length(go_through)] != nrow(ssn)) go_through <- c(go_through,nrow(ssn))
  go_through <- cbind(go_through[-length(go_through)],
    c(go_through[-c(1,length(go_through))]-1,go_through[length(go_through)]))
  pb <- txtProgressBar(min = 0, max = nrow(go_through), style = 3)
  # for one chunk, return a 106 x 7 table of deaths, by age and weekday of death
  count_birthday <- function(s){
    setTxtProgressBar(pb, s)
    data <- ssn[go_through[s,1]:go_through[s,2],c("date_death_y","date_death_m","date_death_d",
      "date_birth_y","date_birth_m","date_birth_d")]
    date1=as.Date(paste(data$date_birth_y,"-",data$date_birth_m,"-",data$date_birth_d,sep=""),"%Y-%m-%d")
    date2=as.Date(paste(data$date_death_y,"-",data$date_death_m,"-",data$date_death_d,sep=""),"%Y-%m-%d")
    idx=which(!(is.na(date1)|is.na(date2)))
    date1=date1[idx]
    date2=date2[idx]
    itg=try(age<-age_years(date1,date2),silent=TRUE)
    if(inherits(itg, "try-error")) age=trunc((date2-date1)/365.25)
    w=weekdays(date2)
    T=table(age,w)
    Tab=matrix(0,106,7)
    for(i in 1:nrow(T)) if(as.numeric(rownames(T)[i])<106) Tab[as.numeric(rownames(T)[i]),]=T[i,]
    return(Tab)
  }
  D <- lapply(seq_len(nrow(go_through)),count_birthday)
  # add up the tables of all chunks
  T=D[[1]]
  for(s in 2:length(D)) T=T+D[[s]]
  return(T)
}
If we run that function on the three files
> D1=TABLE_AGE_DAY("ssdm1")
  |========================================| 100%
> D2=TABLE_AGE_DAY("ssdm2")
  |========================================| 100%
> D3=TABLE_AGE_DAY("ssdm3")
  |========================================| 100%
we can visualize not percentages, as on the figure above, but counts
D=D1+D2+D3
# weekdays() ran in a French locale, so table() sorted the columns alphabetically
# (dimanche, jeudi, lundi, mardi, mercredi, samedi, vendredi)
colnames(D)=c("Sun","Thu","Mon","Tue","Wed","Sat","Fri")
D=D[,c("Sun","Mon","Tue","Wed","Thu","Fri","Sat")]
and we have here (I remove the Saturday to get a better output)
> D[,1:6]
        Sun    Mon    Tue    Wed    Thu    Fri
[1,] 2843 2888 2943 3020 2979 3038
[2,] 2007 1866 1918 1974 1990 2137
[3,] 1613 1507 1532 1530 1515 1613
[4,] 1322 1256 1263 1259 1207 1330
[5,] 1155 1061 1092 1128 1112 1171
[6,] 1067 985 950 1082 1009 1055
[7,] 1129 901 915 954 941 1044
[8,] 1026 927 944 935 911 1005
[9,] 1029 1012 871 908 939 998
[10,] 1093 1011 974 958 928 1018
[11,] 1106 1031 1019 1036 1087 1122
[12,] 1289 1219 1176 1215 1141 1292
[13,] 1618 1455 1487 1484 1466 1633
[14,] 2121 2000 1900 1941 1845 2138
[15,] 2949 2647 2519 2499 2524 2748
[16,] 4488 3885 3798 3828 3747 4267
[17,] 5709 4612 4520 4422 4443 5005
[18,] 7280 5618 5400 5271 5344 5986
[19,] 8086 6172 5833 5820 6004 6628
[20,] 8389 6507 6166 6055 6430 6955
[21,] 8794 7038 6794 6628 6841 7572
[22,] 8578 6528 6512 6472 6757 7342
[23,] 8345 6750 6483 6469 6714 7338
[24,] 8361 6859 6589 6623 6854 7369
[25,] 8398 6974 6892 6766 6964 7613
[26,] 8432 7210 7012 7175 7343 7801
[27,] 8757 7641 7526 7352 7674 7950
[28,] 9190 8041 7843 7851 7940 8268
[29,] 9495 8409 8555 8400 8469 8934
[30,] 9876 9041 9015 9166 9106 9641
[31,] 10567 9952 9506 9634 9770 10212
[32,] 11417 10428 10402 10275 10455 11169
[33,] 11992 11306 11124 11095 11243 11749
[34,] 12665 12327 11760 12025 12137 12443
[35,] 13629 13135 13179 13037 12968 13724
[36,] 14560 14009 13927 13822 14105 14436
[37,] 15660 14990 15013 15009 15101 15700
[38,] 16749 16504 16148 16091 15912 16863
[39,] 17815 17760 17519 17144 17553 17943
[40,] 19366 19057 18918 18517 18760 19604
[41,] 20770 20458 20154 20339 20349 21238
[42,] 21962 22194 22020 21499 21690 22347
[43,] 23803 23922 23701 23681 23437 24227
[44,] 25685 26133 25559 25209 25287 26115
[45,] 27506 28110 27363 27042 27272 28228
[46,] 29366 29744 29555 29245 29678 30444
[47,] 31444 32193 31817 31504 31753 32302
[48,] 33452 34719 33529 33954 33441 34618
[49,] 36186 37150 36005 36064 36226 37138
[50,] 38401 39244 38813 38465 38506 39884
[51,] 40331 41830 41168 41110 40937 42014
[52,] 43181 44351 43975 43949 43579 44734
[53,] 45307 47134 46522 46149 46089 47286
[54,] 47996 49441 49139 48678 48629 49903
[55,] 50635 52424 51757 51433 51477 52550
[56,] 53509 55337 54556 54482 54406 55906
[57,] 55703 58482 58016 57400 57097 58758
[58,] 59016 61453 60652 61024 60557 62473
[59,] 62475 65651 64169 63824 63829 65592
[60,] 66621 69185 68885 68217 68752 69963
[61,] 69759 73144 72421 71784 71745 73414
[62,] 80346 84253 83044 83177 82416 83833
[63,] 86851 90059 89002 88985 89245 90334
[64,] 91839 95465 94602 93985 94154 96195
[65,] 98461 102846 101348 101328 101306 103170
[66,] 104569 108722 107768 107711 107729 109350
[67,] 111230 115477 114418 114743 113935 116356
[68,] 116999 122053 120727 120342 119782 122926
[69,] 123695 128339 127184 126822 126639 129037
[70,] 129956 136123 134555 135120 133842 137390
[71,] 137984 142964 141316 142855 141419 143620
[72,] 145132 150708 148407 149345 149448 151910
[73,] 152877 157993 155861 156349 155924 158725
[74,] 159109 164652 162722 163499 163157 165744
[75,] 165848 172121 170730 170482 170585 173431
[76,] 172457 179036 177185 177328 177392 180215
[77,] 179936 185015 183223 183932 183237 186663
[78,] 185900 191053 189986 189730 189639 193038
[79,] 191498 196694 194246 194810 195246 197812
[80,] 195505 201289 199684 199561 198968 203226
[81,] 199031 204927 202204 202622 202951 205792
[82,] 201589 207928 204929 204001 204396 208224
[83,] 201665 206743 205194 204676 205256 207980
[84,] 200965 205653 203422 202393 203422 206012
[85,] 197445 202692 199498 199730 200075 201728
[86,] 192324 195961 193589 194754 193800 196102
[87,] 183732 188063 185153 186104 186021 188176
[88,] 174258 177474 175822 176078 176761 177449
[89,] 163180 166706 162810 164367 164281 166436
[90,] 149169 151738 150148 150212 150535 152435
[91,] 134218 136866 134959 134922 135027 136381
[92,] 118936 121106 119591 119509 119793 120998
[93,] 102734 104955 102944 102865 103345 104776
[94,] 87418 88885 88023 86963 87546 87872
[95,] 72023 72698 72151 71579 71530 72287
[96,] 56985 58238 57478 57319 57163 57615
[97,] 44447 45058 44607 44469 43888 44868
[98,] 33457 34132 33022 33409 33454 33642
[99,] 24070 24317 24305 24089 24020 24383
[100,] 17165 17295 16755 17115 16957 17207
[101,] 11799 12125 11709 11816 11824 11719
[102,] 7714 7741 7959 7691 7648 7633
[103,] 5024 5012 4822 4792 4882 4916
[104,] 2987 3101 2978 3049 3093 2906
[105,] 1781 1894 1811 1756 1734 1834
So clearly, for young people, the number of deaths is rather small…
And to visualize it, as above, we can use
> P=D/apply(D,1,sum)*100
> range(P)
[1] 12.34857 17.59386
> dP=trunc((P-min(P))/(max(P)+.01-min(P))*11)
> library(RColorBrewer)
> CLR=rev(brewer.pal(name="RdYlBu", 11))
> plot(0:1,0:1,ylim=c(55,110),xlim=c(-1,7))
> for(i in 1:106){
+ for(j in 1:7){
+ rect(j-1,108-i,j,107-i,col=CLR[1+dP[i,j]]) # 1+dP, since dP ranges from 0 to 10
+ }}
> text(rep(-.5,106),107.5-1:106,0:105,cex=.4)
As above, we observe a strong difference among weekdays in the date of death for young people (below 30), which disappears at older ages (even if there is still a Sunday effect).
Reinterpreting Lee-Carter Mortality Model
Last week, while I was giving my crash course on R for insurance, we discussed possible extensions of the Lee & Carter (1992) model. If we look at the seminal paper, the model is defined as follows: the log of the central death rate at age \(x\) in year \(t\) is \(\log m_{x,t}=a_x+b_x k_t+\varepsilon_{x,t}\), where the \(a_x\)'s and \(b_x\)'s are age-specific parameters, and \(k_t\) is a common period index.
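As a reminder, here is a minimal sketch of the standard fit in R, with the demography package (using the fr.mort object as in other posts here; lca() is the usual estimation routine, not one of the extensions we discussed),

library(demography)
LC <- lca(fr.mort, series="male")  # standard Lee-Carter fit
par(mfrow=c(1,3))
plot(LC$age, LC$ax, type="l", xlab="Age x", ylab="a(x)")
plot(LC$age, LC$bx, type="l", xlab="Age x", ylab="b(x)")
plot(LC$year, LC$kt, type="l", xlab="Year t", ylab="k(t)")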
Men set to live as long as women by 2030?
A few months ago, in “Men set to live as long as women, figures show”, it was mentioned that (in the U.K.)
the gap between male and female life expectancy is closing and men could catch up by 2030, according to an adviser for the Office for National Statistics.
(the slides are available online http://cass.city.ac.uk/…).
Your Life in Weeks
This week, I discovered a picture on http://waitbutwhy.com/, which represents a (so-called) typical human life, in weeks.
I found that interesting. But the first problem is that I don’t understand the lower bound, 90 years: that is not the average length of life, and that is not what a newborn should expect to live. The second problem is that life cannot be as static as the picture might suggest: life expectancy at age 0 is not the same as life expectancy at age 30, or 50. So I tried to make an animated graph, using prospective life tables. Here is some code to generate life tables, at different periods, for the French population (I distinguish here males and females),
library(demography)
france.LC1 <- lca(fr.mort,adjust="e0",series="female",years=c(1900,2100))
france.fcast <- forecast(france.LC1,h=100)
L2 <- lifetable(france.fcast)
ex2=L2$ex
L1=lifetable(fr.mort,series="female")
ex1=L1$ex
exF=cbind(ex1,ex2)
france.LC1 <- lca(fr.mort,adjust="e0",series="male",years=c(1900,2100))
france.fcast <- forecast(france.LC1,h=100)
L2 <- lifetable(france.fcast)
ex2=L2$ex
L1=lifetable(fr.mort,series="male")
ex1=L1$ex
exM=cbind(ex1,ex2)
Y=colnames(exF)
Based on those lifetables, we can extract remaining life expectancy, at various ages (say, for instance 50, 51, 52, etc), for someone born on some given year (say 1950). Based on those expected remaining lifetimes, we can plot
picture=function(yearborn=1950,age=50){
  k=which(Y==yearborn)
  M=diag(exM[,k+0:100])
  F=diag(exF[,k+0:100])
  par(mfrow=c(1,2))
  va=0:(52*100-1)
  plot(va%%52,va%/%52,cex=.6,pch=15,
    col=c("light yellow","light blue","white")[1+(va>=age*52)*1+(va>(age+M[age+1])*52)*1],
    ylim=c(100,0),axes=FALSE,xlab="Week",ylab="Age",
    main=paste("Man, born on ",yearborn,", age ",age,sep=""))
  axis(1)
  axis(2)
  plot(va%%52,va%/%52,cex=.6,pch=15,
    col=c("light yellow","pink","white")[1+(va>=age*52)*1+(va>(age+F[age+1])*52)*1],
    ylim=c(100,0),axes=FALSE,xlab="Week",ylab="Age",
    main=paste("Woman, born on ",yearborn,", age ",age,sep=""))
  axis(1)
  axis(2)}
For instance, if we want the graph above, for someone aged 30, born in 1980, we use
picture(1980,30)
Now, if we run some code to get an animated gif (see the sketch below), we can get, for someone born in 1950,
and for someone born in 2000
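For the animation itself, here is a minimal sketch with the animation package (an assumption on my side – the original gif may have been produced differently – and saveGIF requires ImageMagick),

library(animation)
saveGIF({
  for(a in 0:99) picture(1950, a)  # one frame per age
}, movie.name="life_1950.gif", interval=.2)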
Now, if I could get historical datasets, with the average time spent in schools, ages of retirement, etc, I guess I could add it on the graph. But that’s another story…
Regression on categorical variables
This morning, Stéphane asked me a tricky question about extracting coefficients from a regression with categorical explanatory variates. More precisely, he asked me if it was possible to store the coefficients in a nice table, with information on the variable and the modality (those two pieces of information being in two different columns). Here is some code I wrote to produce the table he was looking for, but I guess that some (much) smarter techniques can be used (comments – see below – are open). Consider the following dataset
> base
   x sex   hair
1  1   H  Black
2  4   F  Brown
3  6   F  Black
4  6   H  Black
5 10   H  Brown
6  5   H Blonde
with two factors,
> levels(base$hair)
[1] "Black"  "Blonde" "Brown"
> levels(base$sex)
[1] "F" "H"
Let us run a (standard linear) regression,
> reg=lm(x~hair+sex,data=base)
which is here
> summary(reg)

Call:
lm(formula = x ~ hair + sex, data = base)

Residuals:
         1          2          3          4          5          6
-3.714e+00 -2.429e+00  2.429e+00  1.286e+00  2.429e+00 -2.220e-16

Coefficients:
            Estimate Std. Error t value Pr(>|t|)
(Intercept)   3.5714     3.4405   1.038    0.408
hairBlonde    0.2857     4.8655   0.059    0.959
hairBrown     2.8571     3.7688   0.758    0.528
sexH          1.1429     3.7688   0.303    0.790

Residual standard error: 4.071 on 2 degrees of freedom
Multiple R-squared: 0.2352,	Adjusted R-squared: -0.9121
F-statistic: 0.205 on 3 and 2 DF,  p-value: 0.886
If we want to extract the names of the factors (assuming here that there are no numbers in the name of the factor), and the values of the associated modality, one can use
> VARIABLE=c("",gsub("[-^0-9]", "", names(unlist(reg$xlevels))))
> MODALITY=c("",as.character(unlist(reg$xlevels)))
> names=data.frame(VARIABLE,MODALITY,NOMVAR=c(
+ "(Intercept)",paste(VARIABLE,MODALITY,sep="")[-1]))
> regression=data.frame(NOMVAR=names(coefficients(reg)),
+ COEF=as.numeric(coefficients(reg)))
> merge(names,regression,all.x=TRUE)
       NOMVAR VARIABLE MODALITY      COEF
1 (Intercept)                   3.5714286
2   hairBlack     hair    Black        NA
3  hairBlonde     hair   Blonde 0.2857143
4   hairBrown     hair    Brown 2.8571429
5        sexF      sex        F        NA
6        sexH      sex        H 1.1428571
or, if we want modalities excluding references,
> merge(names,regression)
       NOMVAR VARIABLE MODALITY      COEF
1 (Intercept)                   3.5714286
2  hairBlonde     hair   Blonde 0.2857143
3   hairBrown     hair    Brown 2.8571429
4        sexH      sex        H 1.1428571
In order to reproduce the table Stéphane sent me, let us use the following code to produce an html table,
> library(xtable)
> htlmtable <- xtable(merge(names,regression))
> print(htlmtable,type="html")
|   | NOMVAR      | VARIABLE | MODALITY | COEF |
|---|-------------|----------|----------|------|
| 1 | (Intercept) |          |          | 3.57 |
| 2 | hairBlonde  | hair     | Blonde   | 0.29 |
| 3 | hairBrown   | hair     | Brown    | 2.86 |
| 4 | sexH        | sex      | H        | 1.14 |
So yes, it is possible to build a table with the variable, the modality, and the coefficient. This technique becomes really interesting for prospective mortality models, where we have a large number of modalities per factor (years, ages, and years of birth). Consider the following datasets
> DEATH=read.table(
+ "http://freakonometrics.free.fr/DeathsSwitzerland.txt",
+ header=TRUE,skip=2)
> EXPOSURE=read.table(
+ "http://freakonometrics.free.fr/ExposuresSwitzerland.txt",
+ header=TRUE,skip=2)
> DEATH$Age=as.numeric(as.character(DEATH$Age))
> DEATH=DEATH[-which(is.na(DEATH$Age)),]
> EXPOSURE$Age=as.numeric(as.character(EXPOSURE$Age))
> EXPOSURE=EXPOSURE[-which(is.na(EXPOSURE$Age)),]
> base=data.frame(year=as.factor(DEATH$Year),age=as.factor(DEATH$Age),
+ cohort=as.factor(DEATH$Year-DEATH$Age),D=DEATH$Total,E=EXPOSURE$Total)
> base=base[base$E>0,]
and the following nonlinear model, based on the Lee-Carter model (including a cohort effect), \(D_{x,t}\sim\mathcal{P}\big(E_{x,t}\,e^{\alpha_x+\beta_x\kappa_t+\gamma_x\delta_{t-x}}\big)\),
can be estimated using
> library(gnm)
> reg=gnm(D~age+Mult(age,year)+Mult(age,cohort),offset=log(E),family=poisson,data=base)
In order to extract the 671 coefficients from the regression,
> length(coefficients(reg))
[1] 671
(as properly as possible) we have to be careful: names of coefficients are not that simple to handle. For instance, we can see things like
> coefficients(reg)[200]
Mult(., year).age98
         0.04203519
In order to extract them, define
> na=length((reg$xlevels)$age)
> ny=length((reg$xlevels)$year)
> nc=length((reg$xlevels)$cohort)
> VARIABLElong=c("",rep("age",na),rep("Mult(., year).age",na),
+ rep("Mult(age, .).year",ny),
+ rep("Mult(., cohort).age",na),rep("Mult(age, .).cohort",nc))
> VARIABLEshort=c("",rep("age",na),rep("age",na),rep("year",ny),
+ rep("age",na),rep("cohort",nc))
> MODALITY=c("",(reg$xlevels)$age,(reg$xlevels)$age,
+ (reg$xlevels)$year,(reg$xlevels)$age,(reg$xlevels)$cohort)
> names=data.frame(VARIABLElong,VARIABLEshort,
+ MODALITY,NOMVAR=c("(Intercept)",paste(VARIABLElong,MODALITY,sep="")[-1]))
> regression=data.frame(NOMVAR=names(coefficients(reg)),
+ COEF=as.numeric(coefficients(reg)))
Here we go, now we have the coefficients from the regression in a nice table,
> outputreg=merge(names,regression)
> outputreg[1:10,]
        NOMVAR VARIABLElong VARIABLEshort MODALITY        COEF
1  (Intercept)                                     -8.22225458
2         age1          age           age        1 -0.87495451
3        age10          age           age       10 -1.67145704
4       age100          age           age      100  4.91041650
5        age11          age           age       11 -1.00186990
6        age12          age           age       12 -1.05953497
7        age13          age           age       13 -0.90952859
8        age14          age           age       14  0.02880668
9        age15          age           age       15  0.42830738
10       age16          age           age       16  1.35961403
It is now possible to plot all the coefficients, as functions of the age, the year of observation, or the year of birth. For instance, for the standard average age effect (namely \(\alpha_x\), as a function of the age \(x\)), we can use
> typevariable=as.character(unique(outputreg$VARIABLElong))
> basegraph=outputreg[outputreg$VARIABLElong==typevariable[2],]
> x=as.numeric(as.character(basegraph$MODALITY))
> y=basegraph$COEF
> plot(x,y,type="p",col="blue",xlab="Age")
while the cohort effect (namely \(\delta_{t-x}\), as a function of the year of birth \(t-x\)) is obtained using
> basegraph=outputreg[outputreg$VARIABLElong==typevariable[5],]
> x=as.numeric(as.character(basegraph$MODALITY))
> y=basegraph$COEF
> plot(x,y,type="p",col="blue",xlab="Cohort (year of birth)",ylim=c(0,10))
Visualization in regression analysis
Consider a simple dataset, with the GDP per capita and the under-5 mortality rate of each country (in 2010), from the World Bank,

> library(gdata)
> XLS1=read.xls("http://api.worldbank.org/datafiles/NY.GDP.PCAP.PP.CD_Indicator_MetaData_en_EXCEL.xls",
+ sheet = 1)
> data1=XLS1[-(1:28),c("Country.Name","Country.Code","X2010")]
> names(data1)[3]="GDP"
> XLS2=read.xls("http://api.worldbank.org/datafiles/SH.DYN.MORT_Indicator_MetaData_en_EXCEL.xls",
+ sheet = 1)
> data2=XLS2[-(1:28),c("Country.Code","X2010")]
> names(data2)[2]="MORTALITY"
> data=merge(data1,data2)
> head(data)
  Country.Code         Country.Name       GDP MORTALITY
1          ABW                Aruba        NA        NA
2          AFG          Afghanistan  1207.278     149.2
3          AGO               Angola  6119.930     160.5
4          ALB              Albania  8817.009      18.4
5          AND              Andorra        NA       3.8
6          ARE United Arab Emirates 47215.315       7.1
If we estimate a simple linear regression – \(Y_i=\beta_0+\beta_1 X_i+\varepsilon_i\), where \(Y\) is the mortality rate and \(X\) the GDP per capita – we get
> regBB=lm(MORTALITY~GDP,data=data)
> summary(regBB)

Call:
lm(formula = MORTALITY ~ GDP, data = data)

Residuals:
   Min     1Q Median     3Q    Max
-45.24 -29.58 -12.12  16.19 115.83

Coefficients:
              Estimate Std. Error t value Pr(>|t|)
(Intercept) 67.1008781  4.1577411  16.139  < 2e-16 ***
GDP         -0.0017887  0.0002161  -8.278 3.83e-14 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 39.99 on 167 degrees of freedom
  (47 observations deleted due to missingness)
Multiple R-squared: 0.2909,	Adjusted R-squared: 0.2867
F-statistic: 68.53 on 1 and 167 DF,  p-value: 3.834e-14
We can look at the scatter plot, including the linear regression line, and some confidence bounds,
> plot(data$GDP,data$MORTALITY,xlab="GDP per capita",
+ ylab="Mortality rate (under 5)",cex=.5)
> text(data$GDP,data$MORTALITY,data$Country.Name,pos=3)
> x=seq(-10000,100000,length=101)
> y=predict(regBB,newdata=data.frame(GDP=x),
+ interval="prediction",level = 0.9)
> lines(x,y[,1],col="red")
> lines(x,y[,2],col="red",lty=2)
> lines(x,y[,3],col="red",lty=2)
We should be able to do a better job here. For instance, if we look at the Box-Cox profile likelihood,
> library(MASS)
> boxcox(regBB)
it looks like taking the logarithm of the mortality rate should be better. Recall that the Box-Cox transform is \(y^{(\lambda)}=(y^\lambda-1)/\lambda\), with \(y^{(0)}=\log y\) as the limiting case, so a profile likelihood peaking near \(\lambda=0\) suggests a log transform. We consider \(\log Y_i=\beta_0+\beta_1 X_i+\varepsilon_i\), or equivalently \(Y_i=e^{\beta_0+\beta_1 X_i+\varepsilon_i}\):
> regLB=lm(log(MORTALITY)~GDP,data=data)
> summary(regLB)

Call:
lm(formula = log(MORTALITY) ~ GDP, data = data)

Residuals:
    Min      1Q  Median      3Q     Max
-1.3035 -0.5837 -0.1138  0.5597  3.0583

Coefficients:
              Estimate Std. Error t value Pr(>|t|)
(Intercept)  3.989e+00  7.970e-02   50.05   <2e-16 ***
GDP         -6.487e-05  4.142e-06  -15.66   <2e-16 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 0.7666 on 167 degrees of freedom
  (47 observations deleted due to missingness)
Multiple R-squared: 0.5949,	Adjusted R-squared: 0.5925
F-statistic: 245.3 on 1 and 167 DF,  p-value: < 2.2e-16

> plot(data$GDP,data$MORTALITY,xlab="GDP per capita",
+ ylab="Mortality rate (under 5) log scale",cex=.5,log="y")
> text(data$GDP,data$MORTALITY,data$Country.Name)
> x=seq(300,100000,length=101)
> y=exp(predict(regLB,newdata=data.frame(GDP=x)))*
+ exp(summary(regLB)$sigma^2/2)
> lines(x,y,col="red")
> y=qlnorm(.95, meanlog=predict(regLB,newdata=data.frame(GDP=x)),
+ sdlog=summary(regLB)$sigma)
> lines(x,y,col="red",lty=2)
> y=qlnorm(.05, meanlog=predict(regLB,newdata=data.frame(GDP=x)),
+ sdlog=summary(regLB)$sigma)
> lines(x,y,col="red",lty=2)
on the log scale or
> plot(data$GDP,data$MORTALITY,xlab="GDP per capita",
+ ylab="Mortality rate (under 5)",cex=.5)
on the standard scale. Here we use quantiles of the log-normal distribution to derive confidence intervals.
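More precisely, if \(\log Y\sim\mathcal{N}(m,s^2)\), then \(\mathbb{E}[Y]=e^{m+s^2/2}\) – hence the exp(summary(regLB)$sigma^2/2) correction on the regression line above – and the \(\alpha\)-quantile of \(Y\) is \(e^{m+s\,\Phi^{-1}(\alpha)}\), which is what qlnorm(alpha, meanlog=m, sdlog=s) computes (note that sdlog is the standard deviation \(s\), not the variance).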
But why shouldn’t we also take the logarithm of the GDP? We can fit a model \(\log Y_i=\beta_0+\beta_1\log X_i+\varepsilon_i\), or equivalently \(Y_i=e^{\beta_0}\,X_i^{\beta_1}\,e^{\varepsilon_i}\).
> regLL=lm(log(MORTALITY)~log(GDP),data=data)
> summary(regLL)

Call:
lm(formula = log(MORTALITY) ~ log(GDP), data = data)

Residuals:
     Min       1Q   Median       3Q      Max
-1.13200 -0.38326 -0.07127  0.26610  3.02212

Coefficients:
            Estimate Std. Error t value Pr(>|t|)
(Intercept) 10.50192    0.31556   33.28   <2e-16 ***
log(GDP)    -0.83496    0.03548  -23.54   <2e-16 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 0.5797 on 167 degrees of freedom
  (47 observations deleted due to missingness)
Multiple R-squared: 0.7684,	Adjusted R-squared: 0.767
F-statistic: 554 on 1 and 167 DF,  p-value: < 2.2e-16

> plot(data$GDP,data$MORTALITY,xlab="GDP per capita",
+ ylab="Mortality rate (under 5)",cex=.5,log="xy")
> text(data$GDP,data$MORTALITY,data$Country.Name)
> x=exp(seq(1,12,by=.1))
> y=exp(predict(regLL,newdata=data.frame(GDP=x)))*
+ exp(summary(regLL)$sigma^2/2)
> lines(x,y,col="red")
> y=qlnorm(.95, meanlog=predict(regLL,newdata=data.frame(GDP=x)),
+ sdlog=summary(regLL)$sigma)
> lines(x,y,col="red",lty=2)
> y=qlnorm(.05, meanlog=predict(regLL,newdata=data.frame(GDP=x)),
+ sdlog=summary(regLL)$sigma)
> lines(x,y,col="red",lty=2)
on the log scales or
> plot(data$GDP,data$MORTALITY,xlab="GDP per capita",
+ ylab="Mortality rate (under 5)",cex=.5)
on the standard scale. If we compare the last two predictions, we have
with the log model in blue, and the log-log model in red (I did not include the first one, for obvious reasons). Note that in the log-log model the slope is an elasticity: a 1% increase in GDP per capita is associated, on average, with a 0.83% decrease in the under-5 mortality rate.