
More neurons in the hidden layer than predictive features in neural nets

This week, we discussed neural networks for the first time, and I mentioned that, in many illustrations of neural networks, the hidden layer has fewer neurons than predictive variables. But sometimes it can make sense to have more neurons in the hidden layer than predictive variables.

To illustrate, consider a simple example with a single variable x and a binary outcome y\in\{0,1\}

set.seed(12345)
n = 100
x = c(runif(n),1+runif(n),2+runif(n))
y = rep(c(0,1,0),each=n)

We should ensure that observations are in the [0,1] interval,

minmax = function(z) (z-min(z))/(max(z)-min(z))
xm = minmax(x)
df = data.frame(x=xm,y=y)

as we can visualize below

plot(df$x,rep(0,3*n),col=1+df$y)

Here, the blue and the red dots (when y is either 0 or 1) are not linearly separable. The standard activation function in neural nets is the sigmoid

sigmoid = function(x) 1 / (1 + exp(-x))

Let us fit a neural network, with two neurons in the hidden layer (so more neurons than predictive variables)

library(nnet)
set.seed(1234)
model_nnet = nnet(y~x,size=2,data=df)

We can then get the weights, and we can visualize the two neurons

library(NeuralNetTools)
w = neuralweights(model_nnet)
x1 = cbind(1,df$x)%*%w$wts$"hidden 1 1"
x2 = cbind(1,df$x)%*%w$wts$"hidden 1 2"
b = w$wts$`out 1`
plot(sigmoid(x1),sigmoid(x2),col=1+df$y)

 

Now, the blue and the red dots (when y is either 0 or 1) are actually linearly separable: the output neuron applies the sigmoid to b_1+b_2h_1+b_3h_2, where h_1 and h_2 are the outputs of the two hidden neurons (the coordinates of the plot above), so its decision boundary is the straight line b_1+b_2h_1+b_3h_2=0.

abline(a=-b[1]/b[3],b=-b[2]/b[3])

If we do not specify the seed of the random number generator, we can get a different outcome since, obviously, this model is not identifiable.
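
For instance, with another (arbitrary) seed, we usually still obtain two clouds that can be linearly separated, just with a different hidden-layer representation and a different line (a quick sketch, reusing the same code as above),

set.seed(4321)
model_nnet_b = nnet(y~x,size=2,data=df)
w_b = neuralweights(model_nnet_b)
x1_b = cbind(1,df$x)%*%w_b$wts$"hidden 1 1"
x2_b = cbind(1,df$x)%*%w_b$wts$"hidden 1 2"
b_b = w_b$wts$`out 1`
plot(sigmoid(x1_b),sigmoid(x2_b),col=1+df$y)
abline(a=-b_b[1]/b_b[3],b=-b_b[2]/b_b[3])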


If we now have

set.seed(12345)
n=100
x=c(runif(n),1+runif(n),2+runif(n),3+runif(n))
y=rep(c(0,1,0,1),each=n)
xm = minmax(x)
df = data.frame(x=xm,y=y)
plot(df$x,rep(0,4*n),col=1+df$y)

then we need more neurons (one more, at least)

set.seed(321)
model_nnet = nnet(y~x,size=3,data=df)
w = neuralweights(model_nnet)
x1 = cbind(1,df$x)%*%w$wts$"hidden 1 1"
x2 = cbind(1,df$x)%*%w$wts$"hidden 1 2"
x3 = cbind(1,df$x)%*%w$wts$"hidden 1 3"
b = w$wts$`out 1`
library(scatterplot3d)
s3d = scatterplot3d(x=sigmoid(x1),
y=sigmoid(x2), z=sigmoid(x3),color=1+df$y)

And once again, we have been able to separate (linearly) the blue and the red points (just imagine the plane; I did not manage to add it to the 3d scatterplot)
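
That said, the object returned by scatterplot3d has a plane3d function, so one possible way to add that plane (a sketch, assuming the output weights in b are ordered as the bias followed by the three hidden neurons, which is how they are used in the two-dimensional case above) would be

# decision boundary of the output neuron: b[1]+b[2]*h1+b[3]*h2+b[4]*h3 = 0
s3d$plane3d(Intercept = -b[1]/b[4], x.coef = -b[2]/b[4],
            y.coef = -b[3]/b[4], lty = "dashed")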

Finally, consider

set.seed(123)
n=500
x1=runif(n)*3-1.5
x2=runif(n)*3-1.5
y = (x1^2+x2^2)<=1
x1m = minmax(x1)
x2m = minmax(x2)
df = data.frame(x1=x1m,x2=x2m,y=y)
plot(df$x1,df$x2,col=1+df$y)

and again, with three neurons (for two explanatory variables) we can, linearly, separate the blue and the red points

set.seed(1234)
model_nnet = nnet(y~x1+x2,size=3,data=df)
w = neuralweights(model_nnet)
x1 = cbind(1,df$x1,df$x2)%*%w$wts$"hidden 1 1"
x2 = cbind(1,df$x1,df$x2)%*%w$wts$"hidden 1 2"
x3 = cbind(1,df$x1,df$x2)%*%w$wts$"hidden 1 3"
b = w$wts$`out 1`
library(scatterplot3d)
s3d = scatterplot3d(x=sigmoid(x1), y=sigmoid(x2), z=sigmoid(x3),
color=1+df$y)

Here, neural networks play the role of the kernel trick, as coined in Koutroumbas, K. & Theodoridis, S. (2008). Pattern Recognition. Academic Press.
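
To make the analogy a bit more concrete, here is a minimal sketch with an explicit, hand-crafted feature: since, after the minmax transform, the disk is centered close to (0.5,0.5), a single quadratic feature and a plain logistic regression are (almost) enough to separate the two groups,

# hand-crafted feature: squared distance to the (approximate) center of the disk
df$r2 = (df$x1-mean(df$x1))^2 + (df$x2-mean(df$x2))^2
reg = glm(y~r2, data=df, family=binomial)
plot(df$r2, df$y, col=1+df$y)
# decision boundary of the logistic regression, in the r2 coordinate
abline(v=-coef(reg)[1]/coef(reg)[2])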

The m=√p rule for random forests

A couple of days ago, in our lab session, we discussed random forests, and, since it was based on the example in ISLR, we had a quick discussion about the random choice of features, and the “m=\sqrt{p}” rule.

Interestingly, here we can play a bit, trying all possible values of m, and repeating the exercise on different train/test splits,

library(randomForest)
library(ISLR2)
set.seed(123)

sim = function(t){
  # draw a new train/test split (70% training) for each replication
  train = sample(nrow(Boston), size = nrow(Boston)*.7)
  subsim = function(i){
    # random forest with m = i features randomly selected at each node
    rf.boston <- randomForest(medv ~ ., data = Boston,
                              subset = train, mtry = i)
    yhat.rf <- predict(rf.boston, newdata = Boston[-train, ])
    # MSE on the test dataset
    mean((yhat.rf - Boston[-train, "medv"])^2)
  }
  Vectorize(subsim)(2:12)
}
M = Vectorize(sim)(1:499)
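
One small cosmetic step (purely a labelling choice) before plotting: name the rows of M by the corresponding values of m, so that the boxplots below are indexed by m rather than by 1 to 11,

rownames(M) = 2:12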

and now we can plot the MSE on the test dataset as a function of m, the number of features randomly selected at each node,

boxplot(t(M))

or more clearly

vm=apply(M,1,mean)
plot(2:12,vm,type="b",pch=19,ylim=c(10.5,15))
abline(v=sqrt(12),col="red")
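
And to extract the value of m that minimizes the average test MSE over those simulations (to be compared with \sqrt{12}\approx 3.46),

(2:12)[which.min(vm)]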

Even if the “m=\sqrt{p}” rule might not be optimal here, we can see that using a random forest instead of a bagging strategy, i.e. taking “m<p” instead of “m=p”, could improve predictions (and not only make the code run faster).

Calculating an LOOCV MSE by hand

Last week, we had a “mid-term” exam for our introduction to statistical learning course. The question is simple: consider three points (x_i,y_i), here \{(0,2),(2,2),(3,1)\}. Consider a linear model, estimated using least squares; what would be the leave-one-out cross-validation MSE?

I like this exercise since we can compute everything easily, by hand. Since at each step we remove a single observation, only two observations remain in the sample, and with two points, fitting a linear model is straightforward (whatever the technique considered): it is simply the straight line that passes through the other two points. And once we have that straight line (without even having to minimize a sum of squared errors), we have the error committed on the omitted observation. This is exactly what we see in the drawing below
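
A possible way to reproduce such a drawing in R (the colors and axis ranges being arbitrary choices here): plot the three points and, for each omitted point, the line fitted on the two remaining ones, together with the error committed on that omitted observation,

x = c(0,2,3)
y = c(2,2,1)
plot(x, y, pch=19, xlim=c(-.5,3.5), ylim=c(0,4.5))
for(i in 1:3){
  # fit on the two remaining points, when observation i is left out
  reg = lm(y~x, data=data.frame(x=x,y=y)[-i,])
  abline(reg, col=i+1, lty=2)
  # error committed on the omitted observation
  segments(x[i], y[i], x[i], predict(reg, newdata=data.frame(x=x[i])), col=i+1)
}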

In other words, the LOOCV MSE is here \operatorname{MSE}=\frac{1}{n}\sum_{i=1}^{n}\left(Y_{i}-\hat{Y}_{i}^{(-i)}\right)^{2}, where, intuitively, \hat{Y}_{i}^{(-i)} denotes the prediction associated with x_i, obtained with the model estimated on the other n-1 observations. Thus, here, \operatorname{MSE}=\frac{1}{3}\left(2^2+\frac{2^2}{3^2}+1^2\right)=\frac{1}{27}\left(36+4+9\right)=\frac{49}{27}. Note that we can also use R to compute that quantity,

> x = c(0,2,3)
> y = c(2,2,1)
> df = data.frame(x=x,y=y)
> yp = rep(NA,3)
> for(i in 1:3){
+ reg = lm(y~x, data=df[-i,])
+ yp[i] = predict(reg,newdata=df)[i]
+ }
> 1/3*sum((yp-y)^2)
[1] 1.814815

which is precisely 49/27, the value we obtained by hand.