This morning, after my course on extreme values, some students showed me a question from the practicals they were supposed to work on with undergraduate students:

To be more specific, they wanted some feedback about point B. Now, let’s make it clear: I have no idea what “**precision**” and “**variation**” could mean… But let’s try and see if we can get something useful, that might help to understand the question. In order to illustrate, consider the following regression model,
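namely

$$\text{dist}_i = \beta_0 + \beta_1\,\text{speed}_i + \varepsilon_i$$

fitted on the classical cars dataset,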

```
> plot(cars,pch=19,col="black",cex=.8)
> abline(lm(dist~speed,data=cars),lty=2)
```

Here is the summary table of the linear regression model

```
> summary(lm(dist~speed,data=cars))
            Estimate Std. Error t value Pr(>|t|)    
(Intercept) -17.5791     6.7584  -2.601   0.0123 *  
speed         3.9324     0.4155   9.464 1.49e-12 ***
```
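Recall that, for each coefficient, the t-value is the ratio of the estimate to its standard error,

$$t = \frac{\widehat{\beta}}{\text{se}(\widehat{\beta})}$$

and the p-value in the last column is computed from a Student distribution with $n-2$ degrees of freedom (48 here, since there are $n=50$ observations in the cars dataset).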

My first idea was that “**variation of the X’s**” should be related to the “variance” of the explanatory variable. But that cannot be the right interpretation. For instance, if we transform the explanatory variable, say with a multiplicative factor of 10, then the variance of X will be 100 times larger. And the regression will be the same

```
> cars10=cars
> cars10$speed=10*cars$speed
> plot(cars10,pch=19,col="black",cex=.8)
> abline(lm(dist~speed,data=cars10),lty=2)
```

in the sense that

```
> summary(lm(dist~speed,data=cars10))
             Estimate Std. Error t value Pr(>|t|)    
(Intercept) -17.57909    6.75844  -2.601   0.0123 *  
speed         0.39324    0.04155   9.464 1.49e-12 ***
```
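This is not a coincidence. If the explanatory variable is rescaled, $x_i \mapsto c\,x_i$, the least squares slope and its standard error are both divided by $c$,

$$\widehat{\beta}_1 \mapsto \frac{\widehat{\beta}_1}{c} \qquad \text{and} \qquad \text{se}(\widehat{\beta}_1) \mapsto \frac{\text{se}(\widehat{\beta}_1)}{c}$$

so the Student t-value, the ratio of the two, is unchanged.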

And similarly if we divide by 10. So affine transformations of the explanatory variable are clearly not the way to get a variable with more “variability”. Let us try something else, and keep in mind the following quantities,

```
> var(cars$speed)
[1] 27.95918
> sd(cars$speed)/mean(cars$speed)
[1] 0.3433535
```

that is, the variance and the coefficient of variation of the explanatory variable.
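Recall that the coefficient of variation is the standard deviation scaled by the mean,

$$\text{CV}(X) = \frac{s_X}{\bar{x}}$$

which makes it dimensionless and, contrary to the variance, invariant to the multiplicative rescaling used above. Now, consider the following modified dataset,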

```
> carsg=cars
> carsg$speed[12]=8
> carsg$speed[23]=25
> carsg$speed[34]=24
> carsg$speed[39]=12
```

Four values were changed here. Observe that, somehow, there is more variability

```
> var(carsg$speed)
[1] 31.84694
> sd(carsg$speed)/mean(carsg$speed)
[1] 0.3640845
```

But if we consider the output of the regression model, we get

```
> summary(lm(dist~speed,data=carsg))
            Estimate Std. Error t value Pr(>|t|)    
(Intercept) -18.5681     5.3621  -3.463  0.00113 ** 
speed         3.9708     0.3254  12.201 2.55e-16 ***
```

It looks like we get more precision on the slope here, with a smaller standard error, and a larger Student t-value. But what if we consider the following transformation,

```
> carsg=cars
> carsg$speed[11]=5
> carsg$speed[21]=25
> carsg$speed[31]=25
> carsg$speed[50]=7
```

Again, we have more variability here in the explanatory variable,

```
> var(carsg$speed)
[1] 32.9898
> sd(carsg$speed)/mean(carsg$speed)
[1] 0.3754036
```

But this time,

```
> summary(lm(dist~speed,data=carsg))
            Estimate Std. Error t value Pr(>|t|)    
(Intercept)  -1.5078     8.0498  -0.187    0.852    
speed         2.9077     0.4932   5.896 3.61e-07 ***
```

the estimator of the slope has a larger variance, and we get a smaller Student t-value. So here, if we increase the “variability” of X, we can get… almost anything. The intuition behind those two transformations is relatively simple. In the first case, I took observations that were far away from the regression line, but in the center of the distribution of the X’s, and I put them closer to the regression line, but more towards the border of the sample (to increase the variance)

(I would not call them *outliers*, since *outliers* are defined as observations far away from the model in the Y direction, not in the X direction; observations that are extreme in X are usually called leverage points). In the second case, I did exactly the opposite.
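To make this intuition explicit, recall that the variance of the least squares estimator of the slope is

$$\text{Var}(\widehat{\beta}_1) = \frac{\sigma^2}{\displaystyle\sum_{i=1}^n (x_i-\bar{x})^2}$$

a ratio: more dispersion in the $x_i$’s improves the precision of $\widehat{\beta}_1$ only if the residual variance $\sigma^2$ does not increase at the same time. Here, the first transformation decreased $\widehat{\sigma}$ while increasing $\sum_i (x_i-\bar{x})^2$; the second one also increased $\sum_i (x_i-\bar{x})^2$, but increased $\widehat{\sigma}$ even more. As a quick numerical check (a sketch only: the names carsg1 and carsg2 are mine, since both modified datasets were called carsg above), we can compare the two terms of that ratio on the three datasets,

```
> carsg1=cars
> carsg1$speed[c(12,23,34,39)]=c(8,25,24,12)
> carsg2=cars
> carsg2$speed[c(11,21,31,50)]=c(5,25,25,7)
> # dispersion of X, residual standard error, and standard error of the slope
> for(d in list(cars,carsg1,carsg2)){
+   reg=lm(dist~speed,data=d)
+   cat("sum (x-xbar)^2 =",sum((d$speed-mean(d$speed))^2),
+       " sigma hat =",summary(reg)$sigma,
+       " se(slope) =",summary(reg)$coefficients[2,2],"\n")
+ }
```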

I am not sure I understood this sentence correctly, but it looks like it is incorrect. Since there is only *one* false statement in the question, I will go for this one. What do you think?