Following my previous post, I wanted to spend more time on the time series of “global weather-related disaster losses as a proportion of global GDP” over the period 1990-2016 that Roger Pielke sent me last night.
db=data.frame(year=1990:2016,
ratio=c(.23,.27,.32,.37,.22,.26,.29,.15,.40,.28,.14,.09,.24,.18,.29,.51,.13,.17,.25,.13,.21,.29,.25,.2,.15,.12,.12))
In my previous post, I spent some time explaining that we should provide some sort of ‘confidence interval’ when we try to predict a pattern. That was what we call ‘model uncertainty’. But there are two (important) issues that I did not mention: (1) it is a time series, so why not use techniques dedicated to time series objects? (2) we do not actually care that much about ‘model uncertainty’ (unless we want to assess whether a decreasing trend is significant or not); we care more about genuine prediction uncertainty: over the next ten years, what could be the range for this ratio, with some given probability (say 95%)? Could we say that, with a 95% chance, global weather-related disaster losses as a proportion of global GDP should be (each year) between 0 and 0.35, or between 0 and 0.7?
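As a very naive baseline, before fitting anything, we can look at the empirical quantiles of the 27 observed ratios (this ignores any dynamics, and is only a reference point):
# naive baseline: empirical 95% range of the observed ratios,
# ignoring any time dynamics
quantile(db$ratio,probs=c(.025,.975))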
A first idea might be to use exponential smoothing techniques (without a seasonal component here).
ratio=ts(db$ratio,start=1990,frequency=1)
plot(ratio,xlim=c(1990,2030))
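# Holt's linear-trend exponential smoothing: gamma=FALSE removes the seasonal
# component (the data are annual), so only a level and a trend are estimated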
hw=HoltWinters(ratio,gamma=FALSE)
phw=predict(hw,n.ahead=15,prediction.interval = TRUE)
plot(hw,phw,xlim=c(1990,2030))
polygon(c(2017:2031,rev(2017:2031)), c(phw[,2],rev(phw[,3])),border=NA,col=rgb(0,0,1,.2))
The decreasing trend comes from the fact that exponential smoothing is, here, essentially a linear regression with weights decaying exponentially with time (the older the observation, the smaller its weight). But we cannot use that prediction as it stands, since the ratio obviously cannot be negative. So why not consider the logarithm of the ratio (after a quick look at those decaying weights)?
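Just to illustrate the claim about the weights, here is a rough sketch, using the smoothing parameter estimated above (this shows the weights of simple exponential smoothing of the level, not the exact Holt filter):
# rough illustration: with simple exponential smoothing, an observation that is
# k years old receives (approximately) weight alpha*(1-alpha)^k
alpha=hw$alpha
k=0:26
plot(k,alpha*(1-alpha)^k,type="h",xlab="age of the observation (years)",ylab="weight")
Now, the same smoothing applied to the logarithm of the ratio: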
plot(db$year,db$ratio,type="l",xlim=c(1990,2030),ylim=c(-.2,.7),xlab="year",ylab="ratio")
hw=HoltWinters(log(ratio),gamma=FALSE)
phw=predict(hw,n.ahead=15,prediction.interval = TRUE)
abline(v=2016,lty=2,col="grey")
lines(2017:2031,exp(phw[,2]),col="blue")
lines(2017:2031,exp(phw[,3]),col="blue")
lines(c(1992:2016,2017:2031),c(exp(hw$fitted[,1]),exp(phw[,1])),col="red")
polygon(c(2017:2031,rev(2017:2031)),exp(c(phw[,2],rev(phw[,3]))),border=NA,col=rgb(0,0,1,.2))
abline(h=0,lty=2)
The confidence band is huge here. What if we consider some ARIMA model instead?
library(forecast)   # needed for auto.arima() and forecast()
fit=auto.arima(ratio)
farma=forecast(fit,h=15)
# point forecasts, then 80% and 95% lower/upper bounds
farma=cbind(as.numeric(farma$mean),as.numeric(farma$lower[,1]),as.numeric(farma$upper[,1]),as.numeric(farma$lower[,2]),as.numeric(farma$upper[,2]))
plot(db$year,db$ratio,type="l",xlim=c(1990,2030),ylim=c(-.2,.7),xlab="year",ylab="ratio")
abline(v=2016,lty=2,col="grey")
lines(2017:2031,farma[,4],col="blue")
lines(2017:2031,farma[,5],col="blue")
lines(2017:2031,farma[,1],col="red")
polygon(c(2017:2031,rev(2017:2031)),c(farma[,4],rev(farma[,5])),border=NA,col=rgb(0,0,1,.2))
abline(h=0,lty=2)
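We can quickly check which model auto.arima actually selected, and compare its constant with the sample mean:
# quick sanity check of the selected model
arimaorder(fit)
coef(fit)
mean(ratio)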
Here, there is an intercept, but no dynamics for the time series (which is treated, here, as pure white noise around a constant). We get exactly the same thing if we simply consider the average value of the series
fit=lm(ratio~1,data=db)
s=summary(fit)$sigma
plot(db$year,db$ratio,type="l",xlim=c(1990,2030),ylim=c(-.2,.7),xlab="year",ylab="ratio")
abline(v=2016,lty=2,col="grey")
ndb=data.frame(year=2017:2031)
pf=predict(fit,newdata=ndb)
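# approximate 95% band: point prediction +/- 1.96 times the residual standard
# error s (the uncertainty on the estimated intercept is neglected here)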
farma=cbind(pf,pf-1.96*s,pf+1.96*s)
lines(2017:2031,farma[,2],col="blue")
lines(2017:2031,farma[,3],col="blue")
lines(1990:2031,c(predict(fit),farma[,1]),col="red")
polygon(c(2017:2031,rev(2017:2031)),c(farma[,2],rev(farma[,3])),border=NA,col=rgb(0,0,1,.2))
abline(h=0,lty=2)
Here, we are back to my previous post: what if we want to consider a possible trend (and not only an intercept)?
fit=lm(ratio~year,data=db)
s=summary(fit)$sigma
plot(db$year,db$ratio,type="l",xlim=c(1990,2030),ylim=c(-.2,.7),xlab="year",ylab="ratio")
abline(v=2016,lty=2,col="grey")
ndb=data.frame(year=2017:2031)
pf=predict(fit,newdata=ndb)
farma=cbind(pf,pf-1.96*s,pf+1.96*s)
lines(2017:2031,farma[,2],col="blue")
lines(2017:2031,farma[,3],col="blue")
lines(1990:2031,c(predict(fit),farma[,1]),col="red")
polygon(c(2017:2031,rev(2017:2031)),c(farma[,2],rev(farma[,3])),border=NA,col=rgb(0,0,1,.2))
abline(h=0,lty=2)
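For comparison, R’s built-in prediction intervals, which also account for the uncertainty of the estimated intercept and slope (and therefore widen as we extrapolate further from the sample), can be obtained directly; a quick check:
# built-in 95% prediction intervals, including parameter uncertainty
predict(fit,newdata=ndb,interval="prediction",level=.95)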
Again, the region here is not based on inference-related (model) uncertainty, but on prediction uncertainty: we try to visualize where future observations might fall with (say) a 95% chance. Note that we can also consider (why not?) a quadratic regression
fit=lm(ratio~poly(year,2),data=db)
s=summary(fit)$sigma
plot(db$year,db$ratio,type="l",xlim=c(1990,2030),ylim=c(-.2,.7),xlab="year",ylab="ratio")
abline(v=2016,lty=2,col="grey")
ndb=data.frame(year=2017:2031)
pf=predict(fit,newdata=ndb)
farma=cbind(pf,pf-1.96*s,pf+1.96*s)
lines(2017:2031,farma[,2],col="blue")
lines(2017:2031,farma[,3],col="blue")
lines(1990:2031,c(predict(fit),farma[,1]),col="red")
polygon(c(2017:2031,rev(2017:2031)),c(farma[,2],rev(farma[,3])),border=NA,col=rgb(0,0,1,.2))
abline(h=0,lty=2)
I am usually not a huge fan of polynomial regressions, but recently I have seen them a lot in economics papers (along the lines of “if it’s not linear, add a squared version of the explanatory variable”, which is a rather odd strategy; I will publish some posts on that issue this year).
Here again, it might be smarter to consider a logarithmic transformation of the ratio, to ensure that the ratio remains positive
fit=lm(log(ratio)~year,data=db)
s=summary(fit)$sigma
plot(db$year,db$ratio,type="l",xlim=c(1990,2030),ylim=c(-.2,.7),xlab="year",ylab="ratio")
abline(v=2016,lty=2,col="grey")
ndb=data.frame(year=2017:2031)
pf=predict(fit,newdata=ndb)
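# back-transform to the original scale: if log(ratio) is Gaussian with mean m
# and standard deviation s, the expected ratio is exp(m+s^2/2)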
farma=cbind(exp(pf+s^2/2),exp(pf-1.96*s),exp(pf+1.96*s))
lines(2017:2031,farma[,2],col="blue")
lines(2017:2031,farma[,3],col="blue")
lines(1990:2031,c(exp(predict(fit)+s^2/2),farma[,1]),col="red")
polygon(c(2017:2031,rev(2017:2031)),c(farma[,2],rev(farma[,3])),border=NA,col=rgb(0,0,1,.2))
abline(h=0,lty=2)
Observe that the future trend is mainly driven by the three most recent observations, which were rather low compared with the older ones. What if we remove them?
dbna=db
dbna$ratio[25:27]=NA   # drop the last three observations (2014-2016)
fit=lm(ratio~1,data=dbna)
s=summary(fit)$sigma
plot(db$year,db$ratio,type="l",xlim=c(1990,2030),ylim=c(-.2,.7),xlab="year",ylab="ratio")
abline(v=2016-3,lty=2,col="grey")
ndb=data.frame(year=2014:2031)
pf=predict(fit,newdata=ndb)
farma=cbind(pf,pf-1.96*s,pf+1.96*s)
lines(2014:2031,farma[,2],col="blue")
lines(2014:2031,farma[,3],col="blue")
lines(1990:2031,c(predict(fit)[1:24],farma[,1]),col="red")
polygon(c(2014:2031,rev(2014:2031)),c(farma[,2],rev(farma[,3])),border=NA,col=rgb(0,0,1,.2))
abline(h=0,lty=2)
More amusingly, if we consider a quadratic regression, we obtain an increasing trend for the future
fit=lm(ratio~poly(year,2),data=dbna)
s=summary(fit)$sigma
plot(db$year,db$ratio,type="l",xlim=c(1990,2030),ylim=c(-.2,.7),xlab="year",ylab="ratio")
abline(v=2016-3,lty=2,col="grey")
ndb=data.frame(year=2014:2031)
pf=predict(fit,newdata=ndb)
farma=cbind(pf,pf-1.96*s,pf+1.96*s)
lines(2014:2031,farma[,2],col="blue")
lines(2014:2031,farma[,3],col="blue")
lines(1990:2031,c(predict(fit)[1:24],farma[,1]),col="red")
polygon(c(2014:2031,rev(2014:2031)),c(farma[,2],rev(farma[,3])),border=NA,col=rgb(0,0,1,.2))
abline(h=0,lty=2)
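Just to quantify that sensitivity to the endpoints, here is a small sketch (the helper function is ad hoc, only for this illustration): re-estimate the slope of the linear trend after dropping the first k, or the last k, observations.
# sensitivity of the linear slope to the endpoints
slope_without=function(drop_idx) coef(lm(ratio~year,data=db[-drop_idx,]))["year"]
sapply(1:5,function(k) slope_without(1:k))       # dropping the first k years
sapply(1:5,function(k) slope_without((28-k):27)) # dropping the last k years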
As we can see, it is rather difficult to get relevant predictions for the future based on 27 observations… If anyone has a suggestion, the comments are open…
Cite this post as: Arthur Charpentier (January 2, 2017), “Forecasting Natural Catastrophes (is rather difficult)”, Freakonometrics, https://doi.org/10.58079/ov6e
To me, the fundamental issue with “if present trends continue” is the extent to which the trends are present. In a ratio case like this, that should inform your choice of model for prediction purposes. We have GDP, and while individual countries’ long-term projections can be wildly optimistic, we have a trend for that component, and we could argue about how much it correlates with year alone. Its predictive error feeds into the model. We also have the number of damaging events, which is complex enough to be considered random but has its own general trend. Taking a longer view, it doesn’t look particularly straight-line linear with respect to year. However, the variation is large in relation to the size of the data, making this component responsible for most of the variation in the ratio. And then there is the fairly complex interplay between the two variables: the degree to which areas that have damaging events intersect with areas with concentrations of GDP, which is not straight-line linear either, as both have regional components. So on the basis of all that, I wouldn’t actually be trying to conclude much on the basis of year.
But if I were, I would also be inclined to check the weight that both ends are contributing to the slope, and see what happens when knocking out the first few observations instead of the last few. And then I’d start to worry about the possibility that, if there is any non-zero slope, it might be dependent over lengths that mean I should not be treating it as individual years.
Two suggestions, let me know if they are at all relevant:
1. TBATS or BATS instead of Holt-Winters
2. Bootstrapped prediction intervals
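A minimal sketch of both suggestions, assuming the forecast package (object names are illustrative only):
library(forecast)
ratio=ts(db$ratio,start=1990,frequency=1)
# 1. (T)BATS, without a seasonal component since the data are annual
fit_tbats=tbats(ratio)
plot(forecast(fit_tbats,h=15))
# 2. bootstrapped (rather than Gaussian) prediction intervals, e.g. on an ETS fit
fit_ets=ets(ratio)
plot(forecast(fit_ets,h=15,bootstrap=TRUE,npaths=5000))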
Hi, there are two components here:
1) weather disaster losses and
2) global GDP,
which should be treated separately. It is arguably far easier to anticipate global GDP than disaster losses. Global GDP is widely expected to increase at some rate between 2% and 4% annually. Thus, for the loss ratio not to decrease, weather disaster losses must increase at an annual rate greater than the GDP growth rate.
Understanding trends in global disaster costs requires disaggregating that metric (for instance, historically about 60% of it has come from US hurricane losses). We do that disaggregation here --> http://ascelibrary.org/doi/abs/10.1061/(ASCE)NH.1527-6996.0000141
Then, consider each phenomenon in the context of vulnerability and you will find a huge range of possible expected futures. For instance, we document a 900% difference across catastrophe models used for predicting annual US hurricane losses here --> http://journals.sagepub.com/doi/abs/10.1177/0162243916671201
What I take from this:
1. Don’t use past trends in weather losses / global GDP as being in any way predictive of the future. They tell us with some certainty what happened in the past.
2. The range of possible disaster futures encompasses every one of the possible statistical models (e.g., such as those displayed above).
3. Effective decision making in this setting will come not from accurate predictions, but from understanding the past and making choices robust to uncertainties.
Bottom line: we have more control over future losses through the decisions we make today than through trying to predict and respond.
Thanks
I think the first step should be defining the purpose: why do we use this variable? Is it mainly descriptive, is it an input to a risk model, or do we want or need to predict next year’s likely result?
Can we enrich the data set with reasonable effort, e.g. do we have the data at a higher frequency (but then we would have to control for seasonal effects)?
Can we make more reasonable predictions about the underlying time series (GDP and NatCat losses) and calculate the ratio from those?
Considering this last question alone should make one question the validity of this KPI as a predictor: GDP is relatively stable, and one can be fairly confident in, e.g., published projections. NatCat losses, on the other hand, have high variability from year to year, with possibly some underlying multi-year trends (from climate change, or multi-year patterns like El Niño). One example is the Atlantic hurricane season, where damages depend on the number of events as well as the paths of the hurricanes.
FYI, here is the paper Pielke seems to have originally used as a source: http://www.lse.ac.uk/geographyAndEnvironment/whosWho/staff%20profiles/neumayer/pdf/Natdis_norm.pdf
Here is our paper on this time series:
http://ascelibrary.org/doi/abs/10.1061/(ASCE)NH.1527-6996.0000141
Here is a short book that discusses these issues in some depth:
https://www.amazon.com/Rightful-Place-Science-Disasters-Climate/dp/0692297510
Just FYI.