Tag Archives: exponential

“A 99% TVaR is generally a 99.6% VaR”

Almost 6 years ago, I posted a brief comment on a sentence I had found surprising at the time, discovered in a report claiming that

the expected shortfall […] at the 99 % level corresponds quite closely to the […] value-at-risk at a 99.6% level

which was inspired by a remark in the Swiss Experience report,

expected shortfall […] on a 99% confidence level […] corresponds to approximately 99.6% to 99.8% Value at Risk
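(A quick back-of-the-envelope check of that claim – my addition, not taken from either report: for a standard Gaussian risk, the 99% expected shortfall has a closed form, and one can look up the VaR level it corresponds to; the same exercise works for a unit exponential risk.)

> # 99% expected shortfall of a N(0,1) risk, in closed form
> ES=dnorm(qnorm(.99))/.01
> pnorm(ES)           # VaR level matching that ES, about 0.9962
> # unit exponential risk: by memorylessness, ES = VaR + 1
> pexp(qexp(.99)+1)   # about 0.9963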

Continue reading “A 99% TVaR is generally a 99.6% VaR”

More significant? so what…

Following my non-life insurance class this morning, I had an interesting question from a student, which I will try to illustrate and reformulate as accurately as possible. Consider a simple regression model, with one variable of interest and one possible explanatory variable. Assume that we have two possible models, with the following output (yes, I do hide the interesting parts here, but it is to get quickly to my student’s point)

Coefficients:
            Estimate Std. Error t value Pr(>|t|)    
(Intercept)  0.92883    0.06391  14.534   <2e-16 ***
X           -0.12499    0.06108  -2.046   0.0421 *  
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

for the first model – a GLM with some distribution, and some link function – and

Coefficients:
            Estimate Std. Error t value Pr(>|t|)    
(Intercept)  0.92901    0.06270  14.817   <2e-16 ***
X           -0.09883    0.05816  -1.699   0.0909 .  
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

for the second one – another GLM, with another distribution, but the same link function (I guess I could have changed it, but it does not really matter here). Then, I got the following statement: “I would like to choose the first model, because the explanatory variable is more significant, and therefore this model should have a stronger predictive power”.

That’s a nice idea, isn’t it? Actually, I guess this is why I love teaching, because I would never have come up with such an idea by myself. Because when you look at that statement, somehow it could make sense. Except that, from my point of view, it is not valid at all. My first thought was to recall a standard example in statistical inference: you cannot claim that a distribution is better than another one just by looking at the parameter estimates.

> library(MASS)
> fitdistr(Y,"normal")
      mean          sd    
  0.93685011   0.90700830 
 (0.06413517) (0.04535042)
> fitdistr(Y,"exponential")
      rate   
  1.06740661 
 (0.07547704)

Can I claim that the Gaussian distribution is better than the exponential one because the parameter estimates have smaller standard errors? Because somehow, this is what we did when we claimed previously that the first model was better than the second one.
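(A side note – my addition: if one really wants a single number to compare two fitted distributions, a likelihood-based criterion computed on the same data is more relevant than the size of the standard errors.)

> # AIC is comparable across distributions fitted to the same sample,
> # unlike standard errors of parameters that have different meanings
> AIC(fitdistr(Y,"normal"))
> AIC(fitdistr(Y,"exponential"))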

Let me get back to the outputs of the two regressions, and let me explain what I did. Actually, I wanted to have a story close to the one on the Gaussian versus exponential fit. So I generated an exponential random variable (by inversion, since -log(U) is standard exponential when U is uniform on (0,1)),

> set.seed(5)
> n=200
> U=runif(n); 
> Y=-log(U)

Here, we can visualize the histogram of this sample, as well as the estimated exponential distribution

> hist(Y,proba=TRUE,col="light green",border="white",lwd=2,breaks=seq(0,5.3333333333333,by=.333333333))
> x=seq(0,6,by=.02)
> lines(x,dexp(x,1/mean(Y)),col="red",lty=2)

On top of that, let us fit a gamma distribution, using a GLM (where the regression is on a constant only), just for practice, because later on we will use a gamma regression on that variable

> reg0=glm(Y~1,family=Gamma(link="identity"))
> a=reg0$coefficient
> b=summary(reg0)$dispersion
> lines(x,dgamma(x,shape=1/b,scale=a*b),col="blue")
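(A small cross-check – my addition: the shape is estimated here by the reciprocal of the dispersion, and the exponential distribution is the gamma distribution with shape one, so 1/b should be close to 1; a maximum-likelihood fit can be used for comparison.)

> # exponential = gamma with shape 1, so this should be close to 1
> 1/b
> # maximum-likelihood fit of the gamma parameters, as a cross-check
> fitdistr(Y,"gamma",start=list(shape=1,rate=1))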

Now, we need a covariate, to run some regressions. What I wanted is a variable slightly correlated with our previous variable. Slightly, just to make sure that the p-value in the regression will be close to 5% or 10%. So here, I generated a variable such that the pair has a Clayton copula, with parameter 0.1 (which is small, extremely small)

> a=.1
> set.seed(5)
> n=200
> U=runif(n); 
> V=(U^(-a)*(runif(n)^(-a/(1+a))-1)+1)^(-1/a)
> Y=-log(U)
> X=qnorm(V)
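(The formula for V is the standard conditional-inversion sampler for the Clayton copula. A quick check – my addition: Kendall’s tau of a Clayton copula with parameter a equals a/(a+2), and since Y=-log(U) is a decreasing transform of U, the empirical tau between X and Y should be close to -a/(a+2).)

> # theoretical Kendall's tau between X and Y (sign flipped by -log)
> -a/(a+2)
[1] -0.04761905
> # empirical counterpart, which should be of the same order
> cor(X,Y,method="kendall")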

To visualize the density of that copula, we can use

> cop=function(u,v){
+ (a+1)*(u*v)^(-(a+1))*
+ (u^(-a)+v^(-a)-1)^(-(2*a+1)/a) }
> x=y=seq(.05,.95,by=.05)
> z=outer(x,y,cop)
> mat=persp(x,y,z,col="green",shade=TRUE,xlim=c(0,1),ylim=c(0,1),zlim=c(0,2),theta=-30,
+ ticktype ="detailed",zlab="")
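(One more numerical check before commenting on the plot – my addition: the linear correlation between the two simulated variables is weak, but we can test whether it differs from zero.)

> # Pearson correlation between covariate and response: weakly negative,
> # but significantly different from zero at the 5% level
> cor.test(X,Y)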

We should not be far from independence (actually, there is a negative – and significant – Pearson correlation). Now, consider two models,

  • a Gaussian model (here a standard linear model)
  • a gamma model, with an identity link function

The outputs are the following (you will recognize the outputs given previously)

> reg1=lm(Y~X)
> reg2=glm(Y~X,family=Gamma(link="identity"))
> summary(reg1)

Coefficients:
            Estimate Std. Error t value Pr(>|t|)    
(Intercept)  0.92883    0.06391  14.534   <2e-16 ***
X           -0.12499    0.06108  -2.046   0.0421 *  
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Residual standard error: 0.9012 on 198 degrees of freedom
Multiple R-squared:  0.02071,  Adjusted R-squared:  0.01576 
F-statistic: 4.187 on 1 and 198 DF,  p-value: 0.04206

> summary(reg2)

Coefficients:
            Estimate Std. Error t value Pr(>|t|)    
(Intercept)  0.92901    0.06270  14.817   <2e-16 ***
X           -0.09883    0.05816  -1.699   0.0909 .  
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

(Dispersion parameter for Gamma family taken to be 0.86447)

    Null deviance: 229.72  on 199  degrees of freedom
Residual deviance: 226.5   on 198  degrees of freedom
AIC: 379.22

Number of Fisher Scoring iterations: 16
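By the way – this is my addition, not part of the original outputs: if one insists on a single quantity to compare the two regressions, a likelihood-based criterion such as AIC is more relevant than the p-value of the covariate,

> # AIC is comparable across the two models fitted on the same data;
> # the significance of X, on the other hand, is not a model-choice criterion
> AIC(reg1)
> AIC(reg2)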

And here are the two sets of predictions,

[Figure: predictions from the two models – https://f-origin.hypotheses.org/wp-content/blogs.dir/253/files/2013/10/Selection_2631png]
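(A minimal sketch of how such a figure can be produced with the same objects – my reconstruction, not necessarily the code behind the original graph.)

> # fitted curves from the two models, with rough 95% pointwise bands
> u=seq(-3,3,by=.1)
> p1=predict(reg1,newdata=data.frame(X=u),se.fit=TRUE)
> p2=predict(reg2,newdata=data.frame(X=u),type="response",se.fit=TRUE)
> plot(X,Y)
> lines(u,p1$fit,col="red")
> lines(u,p1$fit+2*p1$se.fit,col="red",lty=2)
> lines(u,p1$fit-2*p1$se.fit,col="red",lty=2)
> lines(u,p2$fit,col="blue")
> lines(u,p2$fit+2*p2$se.fit,col="blue",lty=2)
> lines(u,p2$fit-2*p2$se.fit,col="blue",lty=2)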

So, which model should we use? As usual, my answer will be “let’s have a look at the data”, instead of looking only at the tables of estimates. Using some code posted a few days ago,

[Figure: the data, with the two fitted models – https://f-origin.hypotheses.org/wp-content/blogs.dir/253/files/2013/10/Selection_2659png]

(for the lower parts, I do not forget that we can only have, here, positive variables that we would like to model)

[Figure: https://f-origin.hypotheses.org/wp-content/blogs.dir/253/files/2013/10/Selection_266.png]

And if we consider that the explanatory variable has no predictive power (since we can claim that the parameter is not significant in the regression), and we remove it from the regression, we get

[Figure: https://f-origin.hypotheses.org/wp-content/blogs.dir/253/files/2013/10/Selection_2662png]
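(Again a small sketch – my addition: one way to produce those constant-only fits from the objects above.)

> # intercept-only versions of the two regressions, i.e. X removed
> reg1b=update(reg1,.~1)
> reg2b=update(reg2,.~1)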

Here, I do consider that the gamma model (not to say the exponential one) is better, because it is clearly more consistent with the properties of the variable of interest. I trust much more the confidence intervals obtained above with the gamma model than the ones obtained with the Gaussian distribution. Even if the parameter of the Gaussian regression is “more significant”.

Fisher-Tippett theorem and limiting distribution for the maximum

Will I ever be a bayesian statistician? (part 2)