Climate risk, some slow long-term trend?
Most scenarios about climate change give projections for 2050 or even 2100, time scales so distant that we are under the illusion that the major risks concern only "future generations". And these scenarios evoke a rise of 1, 2 or 4°C over several decades, a figure that may seem derisory when we are used to seeing temperatures vary by 10 or 20°C within a single day, and by 15, 20 or even 30°C between winter and summer. In this context, how can we finally think seriously about climate risk?
Workshop on Impacts of Climate Change on Economics, Finance, and Insurance
Next week, I will be at the Fields Institute in Toronto for a workshop on Impacts of Climate Change on Economics, Finance, and Insurance. The slides of my talks are now online. I will briefly come back to three papers about the insurance of natural catastrophes: Insurance against Natural Catastrophes: Balancing Actuarial Fairness and Social Solidarity, written with Molly James and Laurence Barry; Predicting Drought and Subsidence Risks in France, written with Molly James and Hani Ali; and finally a more recent paper, Government Intervention in Catastrophe Insurance Markets: A Reinforcement Learning Approach, written with Menna Hassan and Nourhan Sakr.
Talk on climate models and insurance
Tomorrow morning, at 7 am, I will give a talk at the Actuarial Conference #62 on climate models and insurance, coming back to two recent papers.
- Flood, the French Nat Cat System and Fairness
The first paper is Insurance against Natural Catastrophes: Balancing Actuarial Fairness and Social Solidarity, with Laurence Barry (PARI) and Molly James (EURIA). Based on official risk areas (PPRL and PPRI), we investigate the prices of houses and apartments, and discuss connections between risk and wealth.
- Subsidence and predictions
The second paper is Predicting Drought and Subsidence Risks in France, with Hani Ali (Willis Re) and Molly James (EURIA). We tried several models to predict subsidence frequency, GLMs as well as random forests, and obtained frequency predictions for 2017 and 2018, from which we derived risk maps. We also have cost predictions for 2017, 2019 and 2020. I still wonder how to take climate change into account in this approach (except that we are more and more likely to end up in the upper-left corner, hot and dry).
- Extensions (wildfires in Québec and RL)
Finally, I will (very briefly) discuss two recent works, the first one with Amirouche Benchallal (UQAM) and Yacine Bouroubi (Sherbrooke) on wildfires in Québec, and the second one, with Nouri Sakr (Columbia) and Mennatalla Mohamed Hassan (American University in Cairo), on government intervention in the context of natural catastrophes.
Granularity Issues on Climatic Time Series
At the Big Data & Environment Workshop, I will give a talk on Granularity Issues on Climatic Time Series. Slides are now online.
On NCDF Climate Datasets
In mid-November, a nice workshop on big data and the environment will be organized in Argentina. We will talk a lot about climate models, and I wanted to play a little bit with those data, stored on http://dods.ipsl.jussieu.fr/mc2ipsl/.
Climate change and insurance
I will be in Lyon next Monday to give a talk on "Modeling heat-waves: return period for non-stationary extremes" in a workshop entitled "Changement climatique et gestion des risques" (climate change and risk management). An interesting reference might be some pages from Le Monde (2010). The talk will mostly be a discussion about modeling series of daily temperatures. A starting point might be the IPCC Third Assessment graph which illustrates the effect on extreme temperatures when (a) the mean temperature increases, (b) the variance increases, and (c) both the mean and the variance increase, for a normal distribution of temperature.
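To illustrate that point, here is a small numerical sketch (with purely hypothetical values for the mean and the standard deviation of daily temperatures): even a modest shift of the mean, or a modest increase of the variance, substantially inflates the probability of exceeding a fixed "extreme heat" threshold,
mu0=12; sd0=5                       # hypothetical mean and sd of daily temperature
threshold=qnorm(.99,mu0,sd0)        # "extreme heat" = the baseline 99% quantile
scenarios=list(baseline=c(mu0,sd0),
  warmer=c(mu0+1,sd0),              # +1°C on the mean
  more.variable=c(mu0,sd0*1.2),     # +20% on the standard deviation
  both=c(mu0+1,sd0*1.2))
sapply(scenarios,function(p) 1-pnorm(threshold,mean=p[1],sd=p[2]))
# the exceedance probability goes from 1% to roughly 2%-4%, depending on the scenario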
I will add here some code used to generate some graphs I will comment on. The graph below shows the daily minimum temperature,
TEMP=read.table("http://freakonometrics.blog.free.fr/public/data/TN_STAID000038.txt",
  header=TRUE,sep=",")
D=as.Date(as.character(TEMP$DATE),"%Y%m%d")
T=TEMP$TN/10                        # minimum daily temperature, in °C
day=as.POSIXlt(D)$yday+1            # day of the year
an=trunc(TEMP$DATE/10000)           # year
plot(D,T,col="light blue",xlab="Minimum daily temperature in Paris",ylab="",cex=.5)
R=lm(T~D)                           # linear trend
abline(R,lwd=2,col="red")
We can clearly see an increasing linear trend. But here we do not care (too much) about the increase of the average temperature; we care more about the dispersion, and the tails. Here are decennial box-plots,
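The code for those box-plots was not included in the post; a minimal sketch, reusing T and an defined above, could be,
decade=10*trunc(an/10)              # group the years by decade
boxplot(T~as.factor(decade),col="light blue",
  xlab="Decade",ylab="Minimum daily temperature (°C)")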
or quantile regressions,
library(quantreg)
PENTESTD=PENTE=rep(NA,99)
for(i in 1:99){                             # quantile regressions, tau = 1%, ..., 99%
  R=rq(T~D,tau=i/100)
  PENTE[i]=R$coefficients[2]                # slope
  PENTESTD[i]=summary(R)$coefficients[2,2]  # standard error of the slope
}
m=lm(T~D)$coefficients[2]                   # slope of the (mean) linear regression
plot((1:99)/100,(PENTE/m-1)*100,type="b")
segments((1:99)/100,((PENTE-2*PENTESTD)/m-1)*100,
  (1:99)/100,((PENTE+2*PENTESTD)/m-1)*100,col="light blue",lwd=3)
points((1:99)/100,(PENTE/m-1)*100,type="b")
abline(h=0,lty=2,col="red")
In order to get a better understanding of the graph above, here are the slopes of the quantile regressions associated with different probability levels,
We can also look at the annualized maxima of the minimum temperature (i.e. the warmest night of each year), and the regression of those yearly maxima, as well as the tail index of a Generalized Pareto distribution fitted to the upper tail.
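A minimal sketch for those two graphs (reusing T and an, and the evir package; the 95% quantile threshold is my own choice here) could be,
library(evir)
MAXN=tapply(T,an,max,na.rm=TRUE)    # warmest night of each year
years=as.numeric(names(MAXN))
plot(years,MAXN,col="blue",xlab="Year",ylab="Annual maximum of the daily minima (°C)")
abline(lm(MAXN~years),col="red",lwd=2)
u=quantile(T,.95,na.rm=TRUE)        # high threshold
FIT=gpd(T[!is.na(T)],threshold=u)   # Generalized Pareto fit above the threshold
FIT$par.ests["xi"]                  # tail index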
Instead of looking at observations over a century (the trend is obviously linear), we can focus on the seasonal behavior,
B=data.frame(Y=rep(T,3),X=c(day,day-365,day+365),A=rep(an,3))  # wrap the year
library(quantreg)
library(splines)
Q50=rq(Y~bs(X,10),data=B,tau=.5)    # median
Q95=rq(Y~bs(X,10),data=B,tau=.95)   # upper quantile
Q05=rq(Y~bs(X,10),data=B,tau=.05)   # lower quantile
YP95=predict(Q95,newdata=data.frame(X=1:366))
YP05=predict(Q05,newdata=data.frame(X=1:366))
I=(T>predict(Q95,newdata=data.frame(X=day)))|
  (T<predict(Q05,newdata=data.frame(X=day)))   # observations outside the 5%-95% band
YP50=predict(Q50,newdata=data.frame(X=1:366))
plot(day[I],T[I],col="light blue",cex=.5)
lines(1:365,YP95[1:365],col="blue")
lines(1:365,YP05[1:365],col="blue")
lines(1:365,YP50[1:365],col="blue",lwd=3)
with, in red, the series from 1900 to 1920 and, in purple, from 1990 to 2010. If we remove the linear trend and the seasonal cycle, here are the residuals, assumed to be stationary, over the century,
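The computation of those residuals was not detailed in the post; here is a minimal sketch of how such a series can be obtained (the T3 series used further below is assumed to be of that kind),
library(splines)
T1=residuals(lm(T~D,na.action=na.exclude))       # remove the linear trend
Bres=data.frame(Y=rep(T1,3),X=c(day,day-365,day+365))
S=lm(Y~bs(X,10),data=Bres)                       # smooth seasonal cycle
T3=T1-predict(S,newdata=data.frame(X=day))       # remove the seasonal cycle
plot(D,T3,col="light blue",cex=.5,xlab="",ylab="Residuals")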
and within the year. Obviously, something has been missed.
The graph below shows the volatility of the residual series within the year,
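A minimal sketch for that volatility plot (the standard deviation of the residuals per day of the year, with T3 and day as above) could be,
vol=tapply(T3,day,sd,na.rm=TRUE)
plot(as.numeric(names(vol)),vol,type="l",col="blue",
  xlab="Day of the year",ylab="Standard deviation of the residuals")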
Instead of looking at the volatility, we can focus on the tails, with the tail index estimated per month,
library(evir)
mois=as.POSIXlt(D)$mon+1            # month
Pmax=Dmax=matrix(NA,12,2)
for(s in 1:12){
  X=T3[mois==s]                     # residuals (see above) for month s
  FIT=gpd(X,5)                      # GPD fit above the threshold
  Pmax[s,1:2]=FIT$par.ests
  Dmax[s,1:2]=FIT$par.ses
}
plot(1:12,Pmax[,1],type="b",col="blue",ylim=c(-.6,0))
segments(1:12,Pmax[,1]+2*Dmax[,1],1:12,Pmax[,1]-2*Dmax[,1],col="light blue",lwd=2)
points(1:12,Pmax[,1],col="blue")
text(1:12,rep(-.5,12),c("JAN","FEV","MARS","AVR","MAI","JUIN","JUIL","AOUT",
  "SEPT","OCT","NOV","DEC"),cex=.7)
At the end of the talk, I will also mention multiple city models, e.g. Paris and Marseille,
If we look at residuals (once we have removed the linear trend and the seasonal cycle) we observe some positive dependence
In order to study (strong) tail dependence, define
$$L(z)=\mathbb{P}\big(X\le F_X^{-1}(z)\mid Y\le F_Y^{-1}(z)\big)$$
for the lower left tail, and
$$R(z)=\mathbb{P}\big(X> F_X^{-1}(z)\mid Y> F_Y^{-1}(z)\big)$$
for the upper right tail. It looks like there is no tail dependence (in the upper tail). But it is also possible to study weaker tail dependence, through
$$\bar\chi_L(z)=\frac{2\log \mathbb{P}\big(X\le F_X^{-1}(z)\big)}{\log \mathbb{P}\big(X\le F_X^{-1}(z),\,Y\le F_Y^{-1}(z)\big)}-1$$
and
$$\bar\chi_R(z)=\frac{2\log \mathbb{P}\big(X> F_X^{-1}(z)\big)}{\log \mathbb{P}\big(X> F_X^{-1}(z),\,Y> F_Y^{-1}(z)\big)}-1.$$
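Empirically, those functions can be estimated from the two residual series; a minimal sketch, assuming two aligned series T3paris and T3marseille without missing values (hypothetical names), could be,
U=rank(T3paris)/(length(T3paris)+1)                # pseudo-observations
V=rank(T3marseille)/(length(T3marseille)+1)
z=seq(.01,.99,by=.01)
Lz=sapply(z,function(s) mean(U<=s & V<=s)/s)       # lower left tail
Rz=sapply(z,function(s) mean(U>s & V>s)/(1-s))     # upper right tail
plot(z,Lz,type="l",col="blue",ylim=c(0,1),xlab="z",ylab="tail dependence")
lines(z,Rz,col="red")
legend("top",c("lower","upper"),col=c("blue","red"),lty=1,bty="n")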
Slides can be visualized below (I will upload them soon).
Circular or spherical data, and density estimation
A few years ago, while I was working on kernel-based density estimation for distributions with compact support (like copulas), I went through a series of papers on circular distributions. At that time, I thought it was something for mathematicians working on weird spaces… but during the past weeks, I have seen several potential applications of those estimators.
- circular data density estimation
Consider the density of an angle, say, i.e. a function $f$ such that
$$\int_0^{2\pi} f(\omega)\,d\omega = 1,$$
with a circular relationship, i.e. $f(\omega+2\pi)=f(\omega)$. It can be seen as an invariance by rotation.
von Mises proposed a parametric model in 1918 (see here or there), assuming that
$$f(\omega;\mu,\kappa)=\frac{\exp\big(\kappa\cos(\omega-\mu)\big)}{2\pi I_0(\kappa)},$$
where $I_0$ is the modified Bessel function of the first kind of order 0,
$$I_0(\kappa)=\frac{1}{2\pi}\int_0^{2\pi}\exp\big(\kappa\cos\omega\big)\,d\omega$$
(which is simply a normalization constant). There are two parameters here, $\kappa$ (a concentration parameter) and $\mu$ (a direction).
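To visualize the role of the concentration parameter, a small sketch using the circular package (with arbitrary values for kappa) could be,
library(circular)
theta=seq(0,2*pi,length=200)
plot(theta,dvonmises(circular(theta),mu=circular(pi),kappa=2),type="l",col="blue",
  xlab="angle",ylab="density")
lines(theta,dvonmises(circular(theta),mu=circular(pi),kappa=.5),col="red")
legend("topright",c("kappa = 2","kappa = 0.5"),col=c("blue","red"),lty=1,bty="n")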
From a series of observed angles $\omega_1,\dots,\omega_n$, the maximum likelihood estimator of $\kappa$ is the solution of
$$A(\hat\kappa)=\bar R,$$
where
$$\bar R=\sqrt{\bar C^2+\bar S^2},\qquad \bar C=\frac{1}{n}\sum_{i=1}^n\cos\omega_i$$
and
$$\bar S=\frac{1}{n}\sum_{i=1}^n\sin\omega_i,$$
and where $A(\kappa)=I_1(\kappa)/I_0(\kappa)$, $I_0$ and $I_1$ being modified Bessel functions. Well, that estimator is biased, but it is possible to improve it (see here or there). This can be done easily in R (actually Jeff Gill – here – used that package in several applications). But I am not a big fan of that technique…
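Without the package, that estimator can also be computed directly; a minimal sketch, assuming omega is a vector of observed angles (in radians), could be,
Cbar=mean(cos(omega)); Sbar=mean(sin(omega))
Rbar=sqrt(Cbar^2+Sbar^2)
A=function(kappa) besselI(kappa,1)/besselI(kappa,0)    # I1/I0, with base R's besselI
kappa.hat=uniroot(function(k) A(k)-Rbar,interval=c(1e-6,100))$root
mu.hat=atan2(Sbar,Cbar)                                # estimated direction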
- density estimation for hours on simulated data
A nice application can be the estimation of the daily density of temporal events (e.g. phone calls, as we'll see later on, or email arrival times). Let $X_i$ denote the time (in hours) of the $i$th observation (the $i$th phone call received). Then set
$$\Omega_i=2\pi\frac{X_i}{24}.$$
The time is now seen as an angle. It is possible to consider the equivalent of a histogram,
set.seed(1)
library(circular)
X=rbeta(100,shape1=2,shape2=4)*24        # simulated times, in hours
Omega=2*pi*X/24                          # times converted into angles
Omegat=2*pi*trunc(X)/24                  # truncated to the hour
H=circular(Omega,type="angle",units="radians",rotation="clock")
Ht=circular(Omegat,type="angle",units="radians",rotation="clock")
plot(Ht, stack=FALSE, shrink=1.3, cex=1.03, axes=FALSE, tol=0.8,
  zero=c(rad(90)), bins=24, ylim=c(0,1))
points(Ht, rotation="clock", zero=c(rad(90)), col="1", cex=1.03, stack=TRUE)
rose.diag(Ht-pi/2, bins=24, shrink=0.33, xlim=c(-2,2), ylim=c(-2,2),
  axes=FALSE, prop=1.5)
or a kernel based estimation of the density (the gray line on the right).
circ.dens=density(Ht+3*pi/2,bw=20)
plot(Ht, stack=TRUE, shrink=.35, cex=0, sep=0.0, axes=FALSE, tol=.8, zero=c(0),
  bins=24, xlim=c(-2,2), ylim=c(-2,2), ticks=TRUE, tcl=.075)
lines(circ.dens, col="darkgrey", lwd=3)
text(0,0.8,"24",cex=2)
text(0,-0.8,"12",cex=2)
text(0.8,0,"6",cex=2)
text(-0.8,0,"18",cex=2)
The code looks rather simple. But I am not very comfortable using code that I do not completely understand. So I did my own. The first step was to get a graph similar to the one we have on the right, except that I prefer my own kernel-based estimator. The idea is that instead of estimating the density on $[0,24)$, we estimate it on the sample $\{X_i-24,\,X_i,\,X_i+24\}$. Then we multiply by 3 to get the density only on $[0,24)$. For the bandwidth, I took the same as the one we would have taken on the original sample. The code is simply the following,
U=seq(0,1,by=1/250)
O=U*2*pi
U12=seq(0,1,by=1/24)
O12=U12*2*pi
X=rbeta(100,shape1=2,shape2=4)*24
OM=2*pi*X/24
XL=c(X-24,X,X+24)                    # replicate the sample on [-24,0) and [24,48)
d=density(X)
d=density(XL,bw=d$bw,n=1500)         # same bandwidth as on the original sample
I=which((d$x>=6)&(d$x<=30))
Od=d$x[I]/24*2*pi-pi/2               # back to angles
Dd=d$y[I]/max(d$y)+1                 # rescaled density, plotted outside the unit circle
plot(cos(O),-sin(O),xlim=c(-2,2),ylim=c(-2,2),type="l",axes=FALSE,xlab="",ylab="")
for(i in pi/12*(0:12)){abline(a=0,b=tan(i),lty=1,col="light yellow")}
segments(.9*cos(O12),.9*sin(O12),1.1*cos(O12),1.1*sin(O12))
lines(Dd*cos(Od),-Dd*sin(Od),col="red",lwd=1.5)
text(.7,0,"6"); text(-.7,0,"18")
text(0,-.7,"12"); text(0,.7,"24")
R=1/24/max(d$y)/3+1                  # reference circle (uniform density)
lines(R*cos(O),R*sin(O),lty=2)
Note that it is possible to stress more (visually) the hours with few phone calls, or with many (compared with a homogeneous Poisson process), e.g.
plot(cos(O),-sin(O),xlim=c(-2,2),ylim=c(-2,2),type="l",axes=FALSE,xlab="",ylab="")
for(i in pi/12*(0:12)){abline(a=0,b=tan(i),lty=1,col="light yellow")}
segments(2*cos(O12),2*sin(O12),1.1*cos(O12),1.1*sin(O12),col="light grey")
segments(.9*cos(O12),.9*sin(O12),1.1*cos(O12),1.1*sin(O12))
text(.7,0,"6")
text(-.7,0,"18")
text(0,-.7,"12")
text(0,.7,"24")
R=1/24/max(d$y)/3+1
lines(R*cos(O),R*sin(O),lty=2)
AX=R*cos(Od); AY=-R*sin(Od)              # reference circle
BX=Dd*cos(Od); BY=-Dd*sin(Od)            # estimated density
COUL=rep("blue",length(AX))
COUL[R<Dd]="red"
CM=cm.colors(200)
a=trunc(100*Dd/R)
COUL=CM[a]
segments(AX,AY,BX,BY,col=COUL,lwd=2)     # segments between the two curves
lines(Dd*cos(Od),-Dd*sin(Od),lwd=2)
We get here those two graphs,
To be honest, I do not really like that representation, even if it looks nice. If we compare that circular representation with a more classical one (from 0:00 till 23:59, on the graph on the left, below), I have trouble interpreting the areas in blue and pink.
On the left, we compare two densities, so the area in pink is the same as the area in blue. But here, it is no longer the case: the area in pink is always larger than the one in blue. So it might help to see where there is a difference, but there is a scaling issue that we cannot discuss further… But let us see if we can use that estimation technique on several problems.
- density of wind direction
A standard application when studying angles is wind direction. For instance, in Montréal, it is possible to find hourly observations, starting in 1974 (we just need an R robot to pick up the information, but I'll tell more about that in another post, someday). Here, we directly have an angle, so we can use code rather similar to the one used above to estimate the distribution of wind direction in Montréal.
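A sketch of what that code could look like, assuming a (hypothetical) data frame wind with a column dir containing the hourly wind directions in degrees,
library(circular)
dir=circular(wind$dir,units="degrees",template="geographics")   # 0° = North, clockwise
plot(dir,stack=TRUE,shrink=1.3,cex=.5,bins=36)
lines(density(dir,bw=25),col="red",lwd=2)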
Note that our estimate is consistent with several graphs that can be found on meteorological websites (e.g. the one above on the right, found here).
- density of 911 phone calls
In a recent post (here) I wanted to check the "midnight crime" myth, using the hours of 911 phone calls in Montréal.
That was for all phone calls. But if we look more specifically, for burglaries we have the distribution on the left, and for conflicts the one on the right, while for gun shots we have the distribution on the left, and for "troubles" (basically people being too noisy at parties) or "noise" the one on the right. We clearly observe that gun shots occur a bit before midnight. See also here for another study, this time in NYC (thanks @PAC for the link).
- density of earth temperatures, or earthquakes
Of course, it is also possible to work in higher dimension. Before, we went from densities on $\mathbb{R}$ to densities on the unit circle $\mathcal{S}^1\subset\mathbb{R}^2$. Similarly, it is possible to go from $\mathbb{R}^2$ to the unit sphere $\mathcal{S}^2\subset\mathbb{R}^3$. A nice application is global climate studies, the idea being that points on the far left of a world map are extremely close to the ones on the far right. An application can be, e.g., earthquake occurrences. Data can be found here.
library(ks)
X=cbind(EQ$Longitude,EQ$Latitude)
Hpi1=Hpi(x=X)                        # bandwidth matrix
DX=kde(x=X,H=Hpi1)                   # kernel density estimate, without correction
library(maps)
map("world")
plot(DX,add=TRUE,col="red")
points(X,cex=.2,col="blue")
# replicate the points on shifted copies of the map, to correct for the edges
Y=rbind(cbind(X[,1],X[,2]),cbind(X[,1]+360,X[,2]),cbind(X[,1]-360,X[,2]),
  cbind(X[,1],X[,2]+180),cbind(X[,1]+360,X[,2]+180),cbind(X[,1]-360,X[,2]+180),
  cbind(X[,1],X[,2]-180),cbind(X[,1]+360,X[,2]-180),cbind(X[,1]-360,X[,2]-180))
DY=kde(x=Y,H=Hpi1)
plot(DY,add=TRUE,col="purple")
Without any correction, we get the red level curves. The purple ones include the correction.
More climate extremes, or simply global warming ?
In the paper on the heat wave in Paris (mentioned here) I discussed changes in the distribution of temperature (and autocorrelation of the time series).
During the workshop on Statistical Methods for Meteorology and Climate Change today (here) I observed that it was still an important question: is climate change affecting only averages, or does it have an impact on extremes ? And since I’ve seen nice slides to illustrate that question, I decided to play again with my dataset to see what could be said about temperature in Paris.
Recall that data can be downloaded here (daily temperatures over the 20th century).
tmaxparis=read.table("/temperature/TX_SOUID100124.txt",skip=20,sep=",",header=TRUE)
Dmaxparis=as.Date(as.character(tmaxparis$DATE),"%Y%m%d")
Tmaxparis=as.numeric(tmaxparis$TX)/10
tminparis=read.table("/temperature/TN_SOUID100123.txt",skip=20,sep=",",header=TRUE)
Dminparis=as.Date(as.character(tminparis$DATE),"%Y%m%d")
Tminparis=as.numeric(tminparis$TN)/10
Tminparis[Tminparis==-999.9]=NA              # missing values
Tmaxparis[Tmaxparis==-999.9]=NA
annee=trunc(tminparis$DATE/10000)            # year
MIN=tapply(Tminparis,annee,min)              # annual minima
plot(unique(annee),MIN,col="blue",ylim=c(-15,40),xlim=c(1900,2000))
abline(lm(MIN~unique(annee)),col="blue")     # trend of the annual minima
abline(lm(Tminparis~annee),col="blue",lty=2) # trend of the daily minima
annee=trunc(tmaxparis$DATE/10000)
MAX=tapply(Tmaxparis,annee,max)              # annual maxima
points(unique(annee),MAX,col="red")
abline(lm(MAX~unique(annee)),col="red")      # trend of the annual maxima
abline(lm(Tmaxparis~annee),col="red",lty=2)  # trend of the daily maxima
On the plot below, the dots in red are the annual maximum temperatures, while the dots in blue are the annual minimum temperatures. The plain lines are the regression lines (based on the annual max/min), and the dotted lines represent the trend in the average daily maximum/minimum temperature (to illustrate the global tendency),
It is also possible to look at annual boxplots, and to focus either on the minima or on the maxima.
annee=trunc(tminparis$DATE/10000)
boxplot(Tminparis~as.factor(annee),ylim=c(-15,10),
  xlab="Year",ylab="Temperature",col="blue")
x=boxplot(Tminparis~as.factor(annee),plot=FALSE)
xx=1:length(unique(annee))
points(xx,x$stats[1,],pch=19,col="blue")        # lower whisker, each year
abline(lm(x$stats[1,]~xx),col="blue")
annee=trunc(tmaxparis$DATE/10000)
boxplot(Tmaxparis~as.factor(annee),ylim=c(15,40),
  xlab="Year",ylab="Temperature",col="red")
x=boxplot(Tmaxparis~as.factor(annee),plot=FALSE)
xx=1:length(unique(annee))
points(xx,x$stats[5,],pch=19,col="red")         # upper whisker, each year
abline(lm(x$stats[5,]~xx),col="red")
Plain dots are the average of the temperatures below the 5% quantile for the minima, or above the 95% quantile for the maxima (again with the regression line),

We can observe an increasing trend for the minima, but not for the maxima!
Finally, an alternative is to remember that we focus on annual maxima and minima, so that the Fisher and Tippett theory (mentioned here) can be used. Here, we fit a GEV distribution on blocks of 10 consecutive years. Recall that the GEV cumulative distribution function is
$$G(x)=\exp\left(-\left[1+\xi\,\frac{x-\mu}{\sigma}\right]_+^{-1/\xi}\right).$$
install.packages("evir")
library(evir)
Pmin=Dmin=Pmax=Dmax=matrix(NA,10,3)
for(s in 1:10){                      # 10 blocks of 10 years
  X=MIN[1:10+(s-1)*10]
  FIT=gev(-X)                        # GEV fitted to the annual minima (sign changed)
  Pmin[s,]=FIT$par.ests
  Dmin[s,]=FIT$par.ses
  X=MAX[1:10+(s-1)*10]
  FIT=gev(X)                         # GEV fitted to the annual maxima
  Pmax[s,]=FIT$par.ests
  Dmax[s,]=FIT$par.ses
}
The location parameter is the following, with the minima on the left and the maxima on the right,
while the scale parameter is
and finally the shape parameter is
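Those plots can be obtained from the matrices above; a minimal sketch for the maxima (assuming evir::gev returns the parameters in the order shape, scale, location) could be,
param=3                               # 3 = location, 2 = scale, 1 = shape
plot(1:10,Pmax[,param],type="b",col="red",xlab="Block of 10 years",
  ylab="Location parameter (annual maxima)",
  ylim=range(Pmax[,param]-2*Dmax[,param],Pmax[,param]+2*Dmax[,param]))
segments(1:10,Pmax[,param]-2*Dmax[,param],1:10,Pmax[,param]+2*Dmax[,param],
  col="pink",lwd=3)
points(1:10,Pmax[,param],col="red",pch=19)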
On those graphs, it is very difficult to say anything about changes in temperature extremes… and I guess this is a reason why there is still active research in that area…
Insurance and reinsurance of natural catastrophes
Conference on Insurance and Adaptation to Climate Change, Paris, March 2007. The paper appeared in the Geneva Papers.
The IPCC 2007 report noted that both the frequency and the strength of hurricanes, floods and droughts have increased during the past few years. Thus, climate risks, and more specifically natural catastrophes, are now hardly insurable: losses can be huge (and the actuarial pure premium might even be infinite), diversification through the central limit theorem is not possible because of geographical correlation (so a lot of additional capital is required), an insurance market may simply not exist since the price asked by insurance companies can be much higher than the price householders are willing to pay (policyholders have a short-term horizon), and, because of climate change, there is more uncertainty (and thus additional risk). The first idea discussed in this paper, about insurance markets and climate risks, is that insurance exists only if the risk can be transferred, not only to reinsurance companies but also to capital markets (through securitization or catastrophe options). The second is that the climate is changing, so not only will prices and required capital be substantial, but uncertainty can also be very large: it is extremely difficult to insure in a changing environment.
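To illustrate the diversification argument, here is a small numerical sketch (with arbitrary numbers, not taken from the paper): with independent risks, the standard deviation of the average loss per policy shrinks like 1/sqrt(n), but with a common catastrophe component creating a pairwise correlation rho, it levels off,
sigma=100; rho=.3                    # arbitrary standard deviation and correlation
n=c(10,100,1000,10000)
sd.indep=sigma/sqrt(n)               # independent risks
sd.corr=sigma*sqrt(1/n+(1-1/n)*rho)  # equicorrelated risks
rbind(n,sd.indep,sd.corr)
# the correlated case never goes below sigma*sqrt(rho), around 55 here,
# whatever the size of the portfolio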
The paper was presented at a conference in Paris, in 2007.