# Tests, Power and Significance

In the mathematical statistics course today, we started talking about tests and decision rules. To illustrate the concepts introduced today, we considered the case where we have a sample $\boldsymbol{x}=\{x_1,\cdots,x_n\}$ with $X_i\sim\mathcal{U}([0,\theta])$, and we want to test

$H_0:\theta\leq \theta_0$  against $H_1:\theta> \theta_0$

In the course, we’ve seen that we could use a test based on the largest order statistic $x_{n:n}=\max\{x_i\}$. The test would be

$\psi(\boldsymbol{x})=\boldsymbol{1}_{(c\ ;\ +\infty)}(x_{n:n})$

i.e. if $\psi(\boldsymbol{x})=1$ we choose $H_1$, and if $\psi(\boldsymbol{x})=0$, we choose $H_0$.

From the definition of the type I error (the risk of the first kind),

$\alpha=\sup_{\theta\in\Theta_0}\left\lbrace{\mathbb E}_{\theta}\lbrack \psi(\boldsymbol{X})\rbrack\right\rbrace={\mathbb E}_{\theta_0}\lbrack \psi(\boldsymbol{X})\rbrack$

we can easily get, solving $1-(c/\theta_0)^n=\alpha$, that

$c=\theta_0\cdot(1-\alpha)^{\frac{1}{n}}$

The power of the test is then

$p(\theta)=\left(1-\left(\frac{\theta_0}{\theta}\right)^n(1-\alpha)\right)\boldsymbol{1}_{(c\ ;\ +\infty)}(\theta)$

To visualize it, use the following parameters

n=5
alpha=.1
theta0=1

Then

C1=theta0*(1-alpha)^(1/n)
theta=seq(0,2,by=.01)
P1=(1-(theta0/theta)^n*(1-alpha))*(theta>C1)
plot(theta,P1,type="l",lwd=2,col="blue",xlab="",ylab="Power")
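As a quick sanity check (this is my own addition, not in the original notes), we can compare the closed-form power with a Monte Carlo estimate, say at $\theta=1.5$,

nsim=1e5
theta.true=1.5
# simulated rejection frequency of the test based on the maximum
mean(replicate(nsim,max(runif(n,0,theta.true))>C1))
# closed-form power
1-(theta0/theta.true)^n*(1-alpha)

Both numbers should be close to 0.88 here.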

Note that, so far, we never used the observed value of the maximum of our sample. If the observed maximum is $x_{n:n}$, we can compute the $p$-value,

$p=\mathbb{P}_{\theta_0}(X_{n:n}>x_{n:n})=1-\left(\frac{x_{n:n}}{\theta_0}\right)^n$

Here it is

PV=(1-theta^n)*(theta<=1)   # p-value as a function of the observed maximum (recall theta0=1)
plot(theta,PV,type="l",lwd=2,col="blue",xlab="",ylab="p-value")

Now, why not consider another test, based on the minimum (since we know the distribution of the minimum of a sample from a uniform distribution)? The test has the same form as before,

$\psi(\boldsymbol{x})=\boldsymbol{1}_{(c\ ;\ +\infty)}(x_{1:n})$

but here, solving $\mathbb{P}_{\theta_0}(X_{1:n}>c)=(1-c/\theta_0)^n=\alpha$, the threshold is

$c=\theta_0\cdot (1-\alpha^{\frac{1}{n}})$

The power of the test is here

$\quad p(\theta)=\left(1-\frac{\theta_0}{\theta}(1-\alpha^{\frac{1}{n}})\right)^n \boldsymbol{1}_{( c\ ;\ +\infty)}(\theta)$

This test has the same significance level (by construction), but the power of the test is clearly lower than the one we got using the maximum of our sample, when $\theta\in\Theta_1$

C2=theta0*(1-alpha^(1/n))
P2=(1-(theta0/theta)*(1-alpha^(1/n)))^n*(theta>C2)
lines(theta,P2,type="l",lwd=2,col="red")

Why not consider a test based on $\overline{x}$? The problem is that we need the distribution (more specifically the survival function) of $\overline{X}$. We can compute it numerically, but that might be painful. An alternative is to consider some approximation, based on the central limit theorem, i.e.

$2\overline{X}\sim\mathcal{N}\left(\theta,2^2 \frac{\theta^2}{12n}\right)$

Our test is based on $\psi(\boldsymbol{x})=\boldsymbol{1}_{(c\ ;\ +\infty)}(2\overline{x})$, and to get the same significance level as before, use (with $\mu=\theta_0$ and $\sigma^2=4\theta_0^2/(12n)$, i.e. the parameters under $\theta_0$)

$c=\Phi_{\mu,\sigma^2}^{-1}(1-\alpha)=\mu+\sigma \Phi^{-1}(1-\alpha)$

The power of the test is then

$p(\theta)=1-\Phi_{\theta,\sigma^2}(c)\cdot \boldsymbol{1}_{(c,+\infty)}(\theta)$

Here it is

mu=2*(theta0/2)
s2=2^2*(theta0^2/12)/n
C3=qnorm(1-alpha,mu,sqrt(s2))
P3=(1-pnorm(C3,theta,sqrt(s2)))*(theta>C3)
lines(theta,P3)

Observe here that, for large values of $\theta$, the test based on the maximum does not appear to be more powerful than the one based on the average (I just wonder if this could be due to the Gaussian approximation…).
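To check whether the Gaussian approximation is to blame, one simple option (a quick sketch of mine, not in the original notes) is to estimate the exact power of the test based on $2\overline{x}$ by simulation, keeping the same threshold,

nsim=1e4
# simulated power of the test 1{2*xbar > C3}, without the Gaussian approximation
power.exact=function(t) mean(replicate(nsim,2*mean(runif(n,0,t))>C3))
P4=Vectorize(power.exact)(theta)
lines(theta,P4,lwd=2,lty=2,col="purple")

If the dashed curve stays below the one based on the maximum, the apparent advantage of the average was indeed an artifact of the approximation.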

# Tukey and Mosteller’s Bulging Rule (and Ladder of Powers)

When discussing transformations in regression models, I usually briefly introduce the Box-Cox transform (see e.g. an old post on that topic) and I also mention local regressions and nonparametric estimators (see e.g. another post). But while I was working on my ACT6420 course (on predictive modeling, which is a VEE for the SOA), I read something about a “Ladder of Powers Rule”, also called “Tukey and Mosteller’s Bulging Rule”. To be honest, I had never heard about this rule before. But that won’t be the first time I learn something while working on my notes for a course!

The point here is that, in a standard linear regression model, we have

$Y_i=\beta_0+\beta_1 X_i+\varepsilon_i$

But sometimes, a linear relationship is not appropriate. One idea can be to transform the variable we would like to model, $Y$, and to consider

$\varphi(Y_i)=\beta_0+\beta_1 X_i+\varepsilon_i$

This is what we usually do with the Box-Cox transform. Another idea can be to transform the explanatory variable, $X$, and now, consider,

$Y_i=\beta_0+\beta_1 \psi(X_i)+\varepsilon_i$

For instance, this year in the course, we considered – at some point – a continuous piecewise linear function,

$Y_i=\beta_0+\beta_1 X_i+\beta_{2,1} (X_i-s_1)_++\beta_{2,2} (X_i-s_2)_++\beta_{2,3} (X_i-s_3)_++\varepsilon_i$
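For instance, with hypothetical knots $s_1$, $s_2$, $s_3$ (a minimal sketch, on some data frame db with columns x and y, not the actual course dataset), such a model can be fitted with

> s1=10; s2=20; s3=30
> reg=lm(y~x+pmax(x-s1,0)+pmax(x-s2,0)+pmax(x-s3,0),data=db)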

It is also possible to consider some polynomial regression. The “Tukey and Mosteller’s Bulging Rule” is based on the following figure.

and the idea is that it might be interesting to transform $X$ and $Y$ at the same time, using some power functions. To be more specific, we will consider some linear model

$Y_{\color{black}{i}}^{\color{red}{q}}=\beta_0+\beta_1 X_{\color{black}{i}}^{\color{red}{p}}+\varepsilon_i$

for some (non-zero) parameters $p$ and $q$. Depending on the shape of the regression function (the four curves mentioned on the graph above, in the four quadrants), different powers will be considered.

To be more specific, let us generate different models, and let us look at the associated scatterplots,

> fakedataMT=function(p=1,q=1,n=99,s=.1){
+ set.seed(1)
+ X=seq(1/(n+1),1-1/(n+1),length=n)
+ Y=(5+2*X^p+rnorm(n,sd=s))^(1/q)
+ return(data.frame(x=X,y=Y))}
> par(mfrow=c(2,2))
> plot(fakedataMT(p=.5,q=2),main="(p=1/2,q=2)")
> plot(fakedataMT(p=3,q=-5),main="(p=3,q=-5)")
> plot(fakedataMT(p=.5,q=-1),main="(p=1/2,q=-1)")
> plot(fakedataMT(p=3,q=5),main="(p=3,q=5)")

If we consider the South-West part of the graph, to get such a pattern, we can consider

$Y_i^{1/2}=\beta_0+\beta_1 X_i^2+\varepsilon_i$

or more generally

$Y_i^{1/a}=\beta_0+\beta_1 X_i^b+\varepsilon_i$

where $a$ and $b$ are both larger than 1. And the larger $a$ and/or $b$, the more convex the regression curve.
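To illustrate (a small sketch of mine, reusing the fakedataMT function above, with $p=b$ and $q=1/a$), we can compare moderate powers with larger ones,

> par(mfrow=c(1,2))
> plot(fakedataMT(p=2,q=1/2),main="(b=2, a=2)")
> plot(fakedataMT(p=5,q=1/5),main="(b=5, a=5)")

The second scatterplot should look visibly more convex than the first one.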

Let us visualize that double transformation on a dataset, say the cars dataset.

> base=cars
> names(base)=c("x","y")
> MostellerTukey=function(p=1,q=1){
+ regpq=lm(I(y^q)~I(x^p),data=base)
+ u=seq(min(min(base$x)-2,.1),max(base$x)+2,length=501)
+ par(mfrow=c(1,2))
+ plot(base$x,base$y,xlab="X",ylab="Y",col="white")
+ vic=predict(regpq,newdata=data.frame(x=u),interval="prediction")
+ vic[vic<=0]=.1
+ polygon(c(u,rev(u)),c(vic[,2],rev(vic[,3]))^(1/q),col="light blue",density=40,border=NA)
+ lines(u,vic[,2]^(1/q),col="blue")
+ lines(u,vic[,3]^(1/q),col="blue")
+ v=predict(regpq,newdata=data.frame(x=u))^(1/q)
+ lines(u,v,col="blue")
+ points(base$x,base$y)
+
+ plot(base$x^p,base$y^q,xlab=paste("X^",p,sep=""),ylab=paste("Y^",q,sep=""),col="white")
+ polygon(c(u,rev(u))^p,c(vic[,2],rev(vic[,3])),col="light blue",density=40,border=NA)
+ lines(u^p,vic[,2],col="blue")
+ lines(u^p,vic[,3],col="blue")
+ abline(regpq,col="blue")
+ points(base$x^p,base$y^q)
+ }

For instance, if we call

> MostellerTukey(2,1)

we get the following graph,

On the left, we have the original dataset, $\{(X_i,Y_i)\}$, and on the right, the transformed one, $\{(X_{\color{black}{i}}^{\color{red}{p}},Y_{\color{black}{i}}^{\color{red}{q}})\}$, using the two power transformations. Here, we only considered the square of the speed of the car (so only one component was actually transformed). On that transformed dataset, we run a standard linear regression, and we add a confidence tube. We then consider the inverse transformation of the prediction; this line is plotted on the left. The problem is that it should not be considered as our optimal prediction, since it is clearly biased: $[\mathbb{E}(Y^{\color{red}{q}})]^{\color{red}{1/q}}\neq\mathbb{E}(Y)$. But since quantiles are preserved by monotone transformations, the transformed bounds can still be interpreted as a confidence tube.
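To illustrate that retransformation bias (a quick numerical check based on Jensen's inequality, not part of the original analysis), take $q=1/2$ and an exponential sample,

> set.seed(123)
> y=rexp(1e6)
> mean(y)          # E(Y), close to 1
> mean(sqrt(y))^2  # [E(Y^(1/2))]^2, close to pi/4

so back-transforming the average of the transformed values underestimates the true mean here.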

Note that it would also have been possible to consider another transformation, with a similar shape, but quite different,

> MostellerTukey(1,.5)

Of course, there is no reason to consider a simple power function, and the Box-Cox transform can also be used. The interesting point is that the logarithm can be obtained as a particular case. Furthermore, it is also possible to seek optimal transformations, seen here as a pair of parameters. Consider

> p=.1
> library(MASS)
> bc=boxcox(y~I(x^p),data=base,lambda=seq(.1,3,by=.1))$y
> for(p in seq(.2,3,by=.1)) bc=cbind(bc,boxcox(y~I(x^p),data=base,lambda=seq(.1,3,by=.1))$y)
> vp=boxcox(y~I(x^p),data=base,lambda=seq(.1,3,by=.1))$x
> vq=seq(.1,3,by=.1)
> library(RColorBrewer)
> blues=colorRampPalette(brewer.pal(9,"Blues"))(100)
> image(vp,vq,bc,col=blues)
> contour(vp,vq,bc,levels=seq(-60,-40,by=1),col="white",add=TRUE)

The darker, the better (here the log-likelihood is considered). The optimal pair is here

> bc=function(a){p=a[1];q=a[2];
+ as.numeric(-boxcox(y~I(x^p),data=base,lambda=q)$y[50])}
> optim(c(1,1), bc,method="L-BFGS-B",lower=c(0,0),upper=c(3,3))
$par
[1] 0.5758362 0.3541601

$value
[1] 47.27395

and indeed, the model we get is not bad,
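presumably obtained with a call like the following (a guess on my part, rounding the optimal pair found above),

> MostellerTukey(0.576,0.354)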

Fun, isn’t it?

# Reserving with negative increments in triangles

A few months ago, I published a post on negative values in triangles, and how to deal with them, when using a Poisson regression (the post was published in French). The idea was to use a translation technique:

1. Fit a model not on $Y_i$‘s but on $Y_i^{(k)}=Y_i+k$, for some $k\geq 0$,
2. Use that model to make predictions, and then translate those predictions, $\widehat{Y}_i^{(k)}-k$

This is what was done to get the following graph, where a Poisson regression was fitted. Black points are $Y_i$‘s while blue points are $\widehat{Y}_i^{(k)}$‘s, for some $k\geq 0$. We fit a model to get the blue prediction, and then translate it to get the red prediction (on the $Y_i$‘s).

In this example, there were no negative values, but it is possible to use it to get a better understanding of the impact of this technique. The prediction, here, is the red line. And clearly, the value of $k$ has an impact on the prediction (since we do not consider, here, a linear model: with a linear model, translating has no impact at all, except on the intercept).
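To convince ourselves of that last claim (a small sketch on simulated data, not from the original post), observe that translating the response in a linear model only shifts the intercept,

> set.seed(1)
> x=1:20
> y=2+3*x+rnorm(20)
> k=100
> coef(lm(y~x))       # intercept close to 2, slope close to 3
> coef(lm(y+k~x))     # same slope, intercept shifted by k
> max(abs((predict(lm(y+k~x))-k)-predict(lm(y~x))))  # essentially zero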

The alternative mentioned in the previous post was to use this technique on several $k$’s, and then extrapolate:

1. For a given $k$, fit a model not on $Y_i$‘s but on $Y_i^{(k)}=Y_i+k$, use that model to make predictions, and then translate those predictions, $\widehat{Y}_i^{(k)}-k$.
2. Do it for several $k$‘s.
3. Use it to extrapolate when $k$ is $0$ (which is the case we are interested in).

In the context of loss reserving, the idea is extremely simple. Consider a triangle with incremental payments

> source("https://perso.univ-rennes1.fr/arthur.charpentier/bases.R")
> Y=T=PAID
> n=ncol(T)
> Y[,2:n]=T[,2:n]-T[,1:(n-1)]
> Y
[,1] [,2] [,3] [,4] [,5] [,6]
[1,] 3209 1163   39   17    7   21
[2,] 3367 1292   37   24   10   NA
[3,] 3871 1474   53   22   NA   NA
[4,] 4239 1678  103   NA   NA   NA
[5,] 4929 1865   NA   NA   NA   NA
[6,] 5217   NA   NA   NA   NA   NA

Now, we do not have negative values here, but we can still see if translation techniques can be used. The benchmark is the Poisson regression, since we can run it:

> y=as.vector(as.matrix(Y))
> base=data.frame(y,ai=rep(2000:2005,n),bj=rep(0:(n-1),each=n))
> reg=glm(y~as.factor(ai)+as.factor(bj),data=base,family=poisson)

Here, the amount of reserves is the sum of predicted values in the lower part of the triangle,

> py=predict(reg,newdata=base,type="response")
> sum(py[is.na(base$y)])
[1] 2426.985

which is exactly Chain Ladder’s estimate. Now, let us use a translation technique to compute the amount of reserves. The code will be

> decal=function(k){
+ reg=glm(y+k~as.factor(ai)+as.factor(bj),data=base,family=poisson)
+ py=predict(reg,newdata=base,type="response")
+ return(sum(py[is.na(base$y)]-k))
+ }

For instance, if we translate by +5, we would get

> decal(5)
[1] 2454.713

while a translation of +10 would return

> decal(10)
[1] 2482.29

Clearly, translations do have an impact on the estimation. Here, just to check: if we do not translate, we recover Chain Ladder’s estimate,

> decal(0)
[1] 2426.985

The idea mentioned in the previous post was to try several translations, and then extrapolate, to get the value at $k=0$. Here, translations will give the following estimates

> K=10:20
> (V=Vectorize(decal)(K))
[1] 2482.290 2487.788 2493.279 2498.765 2504.245 2509.719 2515.187 2520.649
[9] 2526.106 2531.557 2537.001

We can plot those values, and run a regression

> plot(K,V,xlim=c(0,20),ylim=c(2425,2540))
> abline(h=decal(0),col="red",lty=2)

the dotted horizontal line is Chain Ladder. Now, let us extrapolate

> b=data.frame(K=K,D=V)
> rk=lm(D~K,data=b)
> predict(rk,newdata=data.frame(K=0))
1
2427.623

One has to admit that it is not that bad. But yesterday evening, Karim asked me why I used a linear regression for my extrapolation. And to be honest, I do not know. I mean, the only answer might be that the points are almost on a straight line. So the first time I saw it, I was excited, and I ran a linear regression.

Now, let us see if we can do better. Because here, we used translations from +10 to +20 (which might be rather small). What if we use much larger values (because we might have large negative incremental values)? With the following code, we try, each time, 11 consecutive values, the smallest one going from 1 to 50,

> hausse=1:50; res=rep(NA,50)
> for(k in hausse){
+ VK=k:(10+k)
+ b=data.frame(K=VK,D=Vectorize(decal)(VK))
+ rk=lm(D~K,data=b)
+ res[k]=predict(rk,newdata=data.frame(K=0))
+ }
> plot(hausse,res,type="l",col="red",ylim=c(2422,2440))
> abline(rk,col="blue")

Here, we compute reserves when extrapolations are done from 11 translations, from $k$ to $k+10$, for different values of $k$. The case where $k$ is ten is the one mentioned above,

> res[hausse==10]
[1] 2427.623

Actually, it might also be possible to consider not 11 translations, but 26, from $k$ to $k+25$. Here, we get

> hausse=1:50; res=rep(NA,50)
> for(k in hausse){
+ VK=k:(25+k)
+ b=data.frame(K=VK,D=Vectorize(decal)(VK))
+ rk=lm(D~K,data=b)
+ res[k]=predict(rk,newdata=data.frame(K=0))
+ }
> lines(hausse,res,type="l",col="blue",lty=2)

We now have the dotted line

Here, it is getting worse. So let us stick with 11 translations. Perhaps we can try something different, for instance a Poisson regression with a log link (i.e. we consider an exponential extrapolation),

> hausse=1:50; res=rep(NA,50)
> for(k in hausse){
+ VK=k:(10+k)
+ b=data.frame(K=VK,D=Vectorize(decal)(VK))
+ rk=glm(D~K,data=b,family=poisson)
+ res[k]=predict(rk,newdata=data.frame(K=0),type="response")
+ }
> lines(hausse,res,type="l",col="purple")

The purple line is the Poisson model, with a log link. Perhaps we can try another link function, like a quadratic one,

> hausse=1:50; res=rep(NA,50)
> for(k in hausse){
+ VK=k:(10+k)
+ b=data.frame(K=VK,D=Vectorize(decal)(VK))
+ rk=glm(D~K,data=b,family=poisson(link=power(lambda = 2)))
+ res[k]=predict(rk,newdata=data.frame(K=0),type="response")
+ }
> lines(hausse,res,type="l",col="orange")

That would be the orange line,

Here, we need a link function between identity (the linear model, the blue line) and the quadratic one (the orange one), for instance a power function 3/2,

> hausse=1:50; res=rep(NA,50)
> for(k in hausse){
+ VK=k:(10+k)
+ b=data.frame(K=VK,D=Vectorize(decal)(VK))
+ rk=glm(D~K,data=b,family=poisson(link=power(lambda = 1.5)))
+ res[k]=predict(rk,newdata=data.frame(K=0),type="response")
+ }
> lines(hausse,res,type="l",col="green")

Here, it looks like we can use that model for any kind of translation, from +10 up to +50, even +100! But I do not have any intuition about the use of this power function…

# Rationality, and MS Excel (and other calculators)

This morning, Mathieu had a nice experience in his course on computational methods in actuarial science. But let us start with some formal mathematical definitions.

First, recall that $y^x$ is – somehow – a standard expression. No one should be surprised to see such an expression. Generally (as explained in http://en.wikipedia.org/… ), this function is defined only when $y\in\mathbb{R}_+$. The idea is that $y^x$ is defined as

$y^x = \exp\left(x\log[y]\right)$

And it is a definition: such a function exists only if $y\in\mathbb{R}_+$ (possibly excluding $0$). This is the standard definition in real analysis.

Now, this ‘power’ function also appears in complex analysis, when dealing with unit roots. For instance, if $z=y^{\frac{1}{k}}e^{i \frac{2n\pi}{k}}$, where $y\in\mathbb{R}_+$ and $k\in\mathbb{N}_\star$, for some $n\in\mathbb{N}$, then $z^k=y$. Thus, in complex analysis it is more delicate to define $y^x$ properly, since it might not be unique. But we can relate it (at least when $x$ is the inverse of an integer, or perhaps a rational number) to the roots of polynomial functions. So far, nothing new…

Let us get back to Mathieu’s problem. Actually, in his course, he wanted to compute $(-8)^{\frac{1}{3}}$. With a French version of Excel, entering

you do get $-2$. If you look at the ‘help’ window, you have some more details

It looks like this hat function can be used to define objects such as $y^x$. But with

you get

(meaning that this is a problem…). It is also possible to use the power (puissance in French) function of Excel,

Here, you also get

The weird part here is that, in the ‘help’ window, you can read that this power function can be used with any number in $\mathbb{R}$.

Another point… what about $(-8)^{\frac{2}{3}}$? Somehow, it is just the square of the previous one (using the fractional exponent)… Here, typing

you get

(similarly with the power function). So clearly, it is not that simple to use this power function. Now, if you use Google (which is now my new online calculator when I am in class, when I cannot use R), if the power is a fraction (or to be more specific the inverse of an integer), then it works as Excel

you get

But if you type (which should be close, from a continuity property of the power function)

you get

and similarly

On Wolfram Mathworld, enter

Mathematica does recognize that we try to deal with unit roots: the result is here

with – as expected – a numerical approximation

With Matlab, Mathieu obtained the same result as Mathematica (its decimal approximation). And to conclude, with R, Mathieu obtained

> (-8)^(1/3)
[1] NaN
> (-8)^(.333333333333333)
[1] NaN

So for R, you cannot use this hat function on negative numbers.

Now, how can we interpret those outputs?

1) My understanding is that clearly, with MS Excel, $x^{ab}\neq \left(x^a\right)^b$ since

$(-8)^{\frac{2}{3}}\neq \left((-8)^{\frac{1}{3}}\right)^2$

which is problematic. For instance, in insurance, with monthly discounts, we do have functions like $u^{\frac{k}{12}}$. What if

$u^{\frac{k}{12}}\neq \left(u^{\frac{1}{12}}\right)^k$

2) The problem probably comes (MS Excel is not open-source software, so it is hard to check) from the fact that $y^{\frac{1}{n}}$ is interpreted as the inverse of a (possibly) bijective function. To be more specific, $x=y^{\frac{1}{n}}$ means that $x^n=y$. When $n$ is an odd integer, then (in real analysis) there is a unique inverse, and thus $y^{\frac{1}{n}}$ is uniquely defined, since $x\mapsto x^n$ is a bijective $\mathbb{R}\rightarrow\mathbb{R}$ function. This is what MS Excel (and Google) is doing: $x\mapsto x^3$ is a bijective $\mathbb{R}\rightarrow\mathbb{R}$ function, so $(-8)^{\frac{1}{3}}$ means that we need to find the unique (real) value $x$ such that $x^3=-8$. Thus, somehow, it makes sense to return $-2$.
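If one wants R to follow the same convention (a small workaround of mine, not a built-in power function), the real root of odd order of a negative number can be obtained with a sign trick,

> cuberoot=function(y) sign(y)*abs(y)^(1/3)
> cuberoot(-8)
[1] -2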

3) There is still a problem with Google and Mathematica. It is fine to return unit roots in $\mathbb{C}$. But how come there is only one value? I mean, yes, $1+\sqrt{3}\ i$ is a possible answer, since

$(1+\sqrt{3} \ i)^3=-8$

but one can also observe that $(-2)^3=-8$, and similarly

$(1-\sqrt{3} \ i)^3=-8$

One can check those identities numerically.
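For instance, in R (this little check is not in the original post),

> round((1+sqrt(3)*1i)^3,10)
[1] -8+0i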

With R, since we are not dealing with a power function here, but with roots, if we want to find the $x$’s such that $x^3=-8$, the function to use is

> polyroot(c(8,0,0,1))
[1]  1+1.732051i -2+0.000000i  1-1.732051i

Which is different… Weird, isn’t it?