In our Data Science course, we will start discussing classification techniques (in the context of supervised models). Consider the following case, with 10 points and two classes (red and blue):
> clr1 <- c(rgb(1,0,0,1),rgb(0,0,1,1))
> clr2 <- c(rgb(1,0,0,.2),rgb(0,0,1,.2))
> x <- c(.4,.55,.65,.9,.1,.35,.5,.15,.2,.85)
> y <- c(.85,.95,.8,.87,.5,.55,.5,.2,.1,.3)
> z <- c(1,1,1,1,1,0,0,1,0,0)
> df <- data.frame(x,y,z)
> plot(x,y,pch=19,cex=2,col=clr1[z+1])
To get a prediction, i.e. a partition of the space into two parts, consider a logistic regression:
> reg <- glm(z~x+y,data=df,family=binomial)
> summary(reg)

Call:
glm(formula = z ~ x + y, family = binomial, data = df)

Deviance Residuals: 
    Min       1Q   Median       3Q      Max  
-1.6593  -0.4400   0.2564   0.5830   1.5374  

Coefficients:
            Estimate Std. Error z value Pr(>|z|)
(Intercept)   -1.706      1.999  -0.854    0.393
x             -5.489      5.360  -1.024    0.306
y              8.568      5.515   1.554    0.120

(Dispersion parameter for binomial family taken to be 1)

    Null deviance: 13.4602  on 9  degrees of freedom
Residual deviance:  8.1445  on 7  degrees of freedom
AIC: 14.144

Number of Fisher Scoring iterations: 5
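As a side note, the fitted probability is simply the logistic transform of the linear predictor b0 + b1*x + b2*y, so it can be recomputed by hand from the estimated coefficients. A minimal sketch (p_hat is a hypothetical helper name, just for illustration; it should agree with predict(reg, type="response")):

> beta <- coef(reg)
> p_hat <- function(x,y){
+   # logistic transform of the linear predictor b0 + b1*x + b2*y
+   eta <- beta[1] + beta[2]*x + beta[3]*y
+   exp(eta)/(1+exp(eta))
+ }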
Given some point, the predicted class is obtained using
> pred_1 <- function(x,y){
+   predict(reg,newdata=data.frame(x=x,
+   y=y),type="response")>.5
+ }
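For instance, at the (hypothetical) point (.5,.5), the linear predictor is slightly negative with the estimates reported above, so the fitted probability falls just below .5:

> pred_1(.5,.5)   # FALSE here: fitted probability about .46, i.e. the red class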
Here, the predicted class is simply the most likely one. To visualize the partition, use
> x_grid<-seq(0,1,length=101)
> y_grid<-seq(0,1,length=101)
> z_grid <- outer(x_grid,y_grid,pred_1)
> image(x_grid,y_grid,z_grid,col=clr2)
> points(x,y,pch=19,cex=2,col=clr1[z+1])
Since the logistic regression is a (generalized) linear model, the boundary that separates the two regions is a straight line.
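Indeed, the boundary is the set of points where the fitted probability equals .5, i.e. where the linear predictor b0 + b1*x + b2*y is null, so that y = -b0/b2 - (b1/b2)*x. A minimal sketch, adding that line to the previous plot from the estimated coefficients:

> beta <- coef(reg)
> abline(a = -beta[1]/beta[3], b = -beta[2]/beta[3], lwd = 2)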