
Precision, with Imprecise Words

This morning, after my course on extreme values, some students showed me a question from the practicals they were supposed to work on with undergraduate students:

To be more specific, they wanted some feedback about point B. Let me be clear: I have no idea what "precision" and "variation" are supposed to mean here… But let's see if we can get something useful that might help us understand the question. To illustrate, consider the following regression model,

> plot(cars,pch=19,col="black",cex=.8)
> abline(lm(dist~speed,data=cars),lty=2)

Here is the summary table of the linear regression model

> summary(lm(dist~speed,data=cars))

            Estimate Std. Error t value Pr(>|t|)    
(Intercept) -17.5791     6.7584  -2.601   0.0123 *  
speed         3.9324     0.4155   9.464 1.49e-12 ***

My first idea was that the "variation of the X's" should be related to the "variance" of the explanatory variable. But that cannot be it. For instance, if we rescale the explanatory variable, say by a multiplicative factor of 100, the variance of X becomes 10,000 times larger, yet the regression remains essentially the same

> cars100=cars
> cars100$speed=100*cars$speed
> plot(cars100,pch=19,col="black",cex=.8)
> abline(lm(dist~speed,data=cars100),lty=2)

in the sense that

> summary(lm(dist~speed,data=cars100))

             Estimate Std. Error t value Pr(>|t|)    
(Intercept) -17.57909    6.75844  -2.601   0.0123 *  
speed         0.39324    0.04155   9.464 1.49e-12 ***

And similarly if we divide by 100: the slope and its standard error are simply rescaled, while the Student t-value is unchanged. So using an affine transformation of the explanatory variable is clearly not the way to get a variable with more "variability". Let us try something else, and keep in mind the following quantities,

> var(cars$speed)
[1] 27.95918
> sd(cars$speed)/mean(cars$speed)
[1] 0.3433535

with the variance, and the coefficient of variation. Consider the following modified dataset,

> carsg=cars
> carsg$speed[12]=8
> carsg$speed[23]=25
> carsg$speed[34]=24
> carsg$speed[39]=12

Four values have been changed here. Observe that, somehow, there is now more variability

> var(carsg$speed)
[1] 31.84694
> sd(carsg$speed)/mean(carsg$speed)
[1] 0.3640845

But if we consider the output of the regression model, we get

> summary(lm(dist~speed,data=carsg))

            Estimate Std. Error t value Pr(>|t|)    
(Intercept) -18.5681     5.3621  -3.463  0.00113 ** 
speed         3.9708     0.3254  12.201 2.55e-16 ***

It looks like we get more precision on the slope here, with a smaller standard error and a larger Student t-value. But what if we consider the following transformation,

> carsg=cars
> carsg$speed[11]=5
> carsg$speed[21]=25
> carsg$speed[31]=25
> carsg$speed[50]=7

Again, we have more variability here, on the explanatory variable,

> var(carsg$speed)
[1] 32.9898
> sd(carsg$speed)/mean(carsg$speed)
[1] 0.3754036

But this time,

> summary(lm(dist~speed,data=carsg))

            Estimate Std. Error t value Pr(>|t|)    
(Intercept)  -1.5078     8.0498  -0.187    0.852    
speed         2.9077     0.4932   5.896 3.61e-07 ***

the estimator of the slope has a larger standard error (more variance), and a smaller Student t-value. So if we increase the "variability" of X, we can get… almost anything. The intuition behind those two transformations is relatively simple. In the first case, I took observations that were far away from the regression line but in the center of the distribution of X, and I moved them closer to the regression line while pushing them towards the border of the sample (to increase the variance of X)

(I would not call them outliers, since outliers are usually defined as observations far away from the model in the Y direction, not in the X direction). In the second case, I did exactly the opposite.
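To make that intuition explicit, recall that in a simple linear regression the standard error of the slope is the residual standard error divided by the square root of the sum of squared deviations of the x's. The quick sketch below (not part of the original question) recomputes it by hand for the cars data; it should match the 0.4155 reported in the first summary table,

# standard error of the slope, computed by hand:
#   se(slope) = sigma_hat / sqrt( sum( (x_i - mean(x))^2 ) )
# a larger spread of the x's lowers it, but moving points away from the
# line inflates sigma_hat, so the two effects can pull in opposite directions
reg    = lm(dist ~ speed, data = cars)
sigma2 = sum(residuals(reg)^2) / reg$df.residual
sqrt(sigma2 / sum((cars$speed - mean(cars$speed))^2))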

I am not sure I understood this sentence correctly, but it looks like it is incorrect. Since there is supposed to be only one false statement here, I would go for this one. What do you think?

Finding Waldo, a flag on the moon and multiple choice tests, with R

I have to admit, first, that finding Waldo is a difficult task. And I did not succeed. Nor could I correctly spot his shirt (which was actually what I was looking for). You know, that red-and-white striped shirt. I guess it should have been possible to look for Waldo's face (assuming that his face does not change), but I still have problems with the scale factor (and resolution issues too). The problem is not that simple. At the http://mlsp2009.conwiz.dk/ conference, a prize was offered for writing such an algorithm in Matlab. And one can even find Mathematica code online. But most of those algorithms are based on the idea of looking for similarities with Waldo's face, as described in problem 3 on http://www1.cs.columbia.edu/~blake/'s webpage. You can find papers on that problem, e.g. Friendly & Kwan (2009) (based on statistical techniques, but Waldo is actually a pretext to discuss other issues), or, more recently (but more complex), Garg et al. (2011) on matching people in images of crowds.

What about code in R? On http://stackoverflow.com/, some ideas can be found (and thanks to Robert Hijmans for his help with his package). So let us try to do something on our own here. Consider the following picture,

With the following code (based on the following file), it is possible to import the picture and to extract the colors (using an RGB decomposition),

> library(raster)
> waldo=brick(system.file("DepartmentStoreW.grd",
+ package="raster"))
> waldo
class       : RasterBrick
dimensions  : 768, 1024, 786432, 3 (nrow,ncol,ncell,nlayer)
resolution  : 1, 1  (x, y)
extent      : 0, 1024, 0, 768  (xmin, xmax, ymin, ymax)
coord. ref. : NA
values      : C:\R\win-library\raster\DepartmentStoreW.grd
min values  : 0 0 0
max values  : 255 255 255

My strategy is simple: try to spot areas with white and red stripes (horizontal stripes). Note that I ran the code on a Windows machine here, as the package was not working well on my Mac. In order to get a better understanding of what could be done, let us start with something much simpler, like the picture below, with Waldo (and Waldo only). Here, it is possible to extract the three color channels (red, green and blue),

> plot(waldo,useRaster=FALSE)

It is possible to extract the red zones (already visible on the graph above, since red is a primary color), as well as the white ones (green zones on the graphs mean a white region in the picture, on the left)

# white component
white = min(waldo[[1]] , waldo[[2]] , waldo[[3]])>220
focalswhite = focal(white, w=3, fun=mean)
plot(focalswhite,useRaster=FALSE)

# red component
red = (waldo[[1]]>150)&(max(  waldo[[2]] , waldo[[3]])<90)
focalsred = focal(red, w=3, fun=mean)
plot(focalsred,useRaster=FALSE)

i.e. here we have the graphs below, with the white regions, and the red ones,

From those two parts, it has been possible to extract the red-and-white stripes from the picture, i.e. some regions that were red above, and white below (or the reverse),

# striped component
striped = red; n=length(values(striped)); h=5
values(striped)=0
values(striped)[(h+1):(n-h)]=(values(red)[1:(n-2*h)]==TRUE)&
                             (values(red)[(2*h+1):n]==TRUE)
focalsstriped = focal(striped, w=3, fun=mean)
plot(focalsstriped,useRaster=FALSE)

So here, we can easily spot Waldo, i.e. the guy with the red-and-white stripes (with two different sets of thresholds for the RGB decomposition)

Let us try something slightly more complicated, with a zoom on the large picture of the department store (since, to be honest, I know where Waldo is…).
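As a side note, a similar zoom can be obtained directly in R by cropping the brick; the extent below is purely illustrative (I do not have the exact coordinates of the region used here),

# hypothetical zoom: crop the RasterBrick to a sub-region of the picture
# (the xmin, xmax, ymin, ymax values are made up, for illustration only)
waldo_zoom = crop(waldo, extent(600, 1024, 0, 300))
plot(waldo_zoom, useRaster=FALSE)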

Here again, we can spot the white part (on the left) and the red one (on the right), with some thresholds for the RGB decomposition

Note that we can try to be (much) more selective by playing with the thresholds. Here, it is not very convincing: I cannot clearly identify the region where Waldo might be (the two graphs below were obtained by playing with the thresholds)
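For instance, a more selective red filter could look like the sketch below, applied to the waldo_zoom brick from the cropping sketch above (the threshold values are assumptions, not the ones actually used for the graphs),

# a (hypothetical) stricter red filter: higher red threshold,
# lower green and blue thresholds
red_strict = (waldo_zoom[[1]]>200)&(max(waldo_zoom[[2]], waldo_zoom[[3]])<60)
focalsred_strict = focal(red_strict, w=3, fun=mean)
plot(focalsred_strict, useRaster=FALSE)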

And if we look at the overall picture, it is worse. Here are the white zones, and the red ones,

and again, playing with RGB thresholds, I cannot spot Waldo,

Maybe I was a bit too optimistic, or too ambitious. Let us try something simpler, like finding a flag on the moon. Consider the picture below on the left, and let us see if we can spot an American flag,

Again, on the left, let us identify white areas, and on the right, red ones

Then as before, let us look for horizontal stripes
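The code for the flag is not reproduced here, but it is essentially the same white / red / striped pipeline as above; a sketch along those lines (the file name and the thresholds are assumptions, not the original values) would be,

# same pipeline as for Waldo, applied to the moon picture
# (file name and thresholds are assumptions)
moon  = stack("C:\\Users\\moon-flag.png")
white = min(moon[[1]], moon[[2]], moon[[3]])>220
plot(focal(white, w=3, fun=mean), useRaster=FALSE)   # white areas
red   = (moon[[1]]>150)&(max(moon[[2]], moon[[3]])<90)
# mark cells that are red both h cells before and h cells after,
# as in the Waldo code above
striped = red; n=length(values(striped)); h=5
values(striped)=0
values(striped)[(h+1):(n-h)]=(values(red)[1:(n-2*h)]==TRUE)&
                             (values(red)[(2*h+1):n]==TRUE)
plot(focal(striped, w=3, fun=mean), useRaster=FALSE)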

Wow, I did it! That's one small step for man, one giant leap for R coders! Or at least for me… So, why might it be interesting to identify areas in pictures? I mean, I am not Chloe O'Brian, I don't have to spot flags in a crowd, nor Waldo, nor terrorists (who might wear striped shirts). But this might be useful if you want to grade your exams automatically. Consider the two following scans, the template and a filled copy,

A first step is to identify regions where we expect to find some "red" marks (I assume here that students have to use a red pencil). Let us first check, on the template and on the filled form, whether we can identify red areas,

exam = stack("C:\\Users\\exam-blank.png")
red = (exam[[1]]>150)&(max(  exam[[2]] , exam[[3]])<150)
focalsred = focal(red, w=3, fun=mean)
plot(focalsred,useRaster=FALSE) 
exam = stack("C:\\Users\\exam-filled.png")
red = (exam[[1]]>150)&(max(  exam[[2]] , exam[[3]])<150)
focalsred = focal(red, w=3, fun=mean)
plot(focalsred,useRaster=FALSE)

First, we have to identify the areas where students have to fill in their answers. So, in the template, identify the black boxes and get their coordinates (here, manually, with locator())

exam = stack("C:\\Users\\exam-blank.png")
black = max(  exam[[1]] ,exam[[2]] , exam[[3]])<50
focalsblack = focal(black, w=3, fun=mean)
plot(focalsblack,useRaster=FALSE)
correct=locator(20)
coordinates=locator(20)
X1=c(73,115,156,199,239)
X2=c(386,428.9,471,510,554)
Y=c(601,536,470,405,341,276,210,145,79,15)
LISTX=c(rep(X1,each=10),rep(X2,each=10))
LISTY=rep(Y,10)
points(LISTX,LISTY,pch=16,col="blue")

The blue points above are where we look for students’ answers. Then, we have to define the vector of correct answers,

CORRECTX=c(X1[c(2,4,1,3,1,1,4,5,2,2)],
X2[c(2,3,4,2,1,1,1,2,5,5)])
CORRECTY=c(Y,Y)
points(CORRECTX, CORRECTY,pch=16,col="red",cex=1.3)
UNCORRECTX=c(X1[rep(1:5,10)[-(c(2,4,1,3,1,1,4,5,2,2)
+seq(0,length=10,by=5))]],
X2[rep(1:5,10)[-(c(2,3,4,2,1,1,1,2,5,5)
+seq(0,length=10,by=5))]])
UNCORRECTY=c(rep(Y,each=4),rep(Y,each=4))

Now, let us get back to the red areas identified earlier in the form filled by the student,

exam = stack("C:\\Users\\exam-filled.png")
red = (exam[[1]]>150)&(max(  exam[[2]] , exam[[3]])<150)
focalsred = focal(red, w=5, fun=mean)

Here, we simply have to compare what the student answered with the areas where we expect to find some red,

ind=which(values(focalsred)>.3)
yind=750-trunc(ind/610)
xind=ind-trunc(ind/610)*610
points(xind,yind,pch=19,cex=.4,col="blue")
points(CORRECTX, CORRECTY,pch=1,
col="red",cex=1.5,lwd=1.5)

Crosses on the graph on the right below are the answers identified as correct (here 13),

> icorrect=values(red)[(750-CORRECTY)*
+ 610+(CORRECTX)]
> points(CORRECTX[icorrect], CORRECTY[icorrect],
+ pch=4,col="black",cex=1.5,lwd=1.5)
> sum(icorrect)
[1] 13

In case there are penalties for incorrect answers, we can also count how many incorrect answers were given. Here, 4.

> iuncorrect=values(red)[(750-UNCORRECTY)*610+
+ (UNCORRECTX)]
> sum(iuncorrect)
[1] 4
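Putting the two counts together, a possible grading rule would then be the following (the penalty weight is only an illustration, not something taken from an actual exam),

# hypothetical grading rule: +1 per correct answer, -0.25 per incorrect one
grade = sum(icorrect) - 0.25*sum(iuncorrect)
grade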

So I have not been able to find Waldo, but at least this will probably save me hours the next time I have to mark exams…