Tag Archives: trees

Probabilistic Scores of Classifiers, Calibration is not Enough

Our paper “Probabilistic Scores of Classifiers, Calibration is not Enough”, written with Agathe Fernandes Machado, Emmanuel Flachaire, Ewen Gallic and François Hu, is now available at https://arxiv.org/abs/2408.03421

In binary classification tasks, accurate representation of probabilistic predictions is essential for various real-world applications such as predicting payment defaults or assessing medical risks. The model must then be well-calibrated to ensure alignment between predicted probabilities and actual outcomes. However, when score heterogeneity deviates from the underlying data probability distribution, traditional calibration metrics lose reliability, failing to align score distribution with actual probabilities. In this study, we highlight approaches that prioritize optimizing the alignment between predicted scores and true probability distributions over minimizing traditional performance or calibration metrics. When employing tree-based models such as Random Forest and XGBoost, our analysis emphasizes the flexibility these models offer in tuning hyperparameters to minimize the Kullback-Leibler (KL) divergence between predicted and true distributions. Through extensive empirical analysis across 10 UCI datasets and simulations, we demonstrate that optimizing tree-based models based on KL divergence yields superior alignment between predicted scores and actual probabilities without significant performance loss. In real-world scenarios, the reference probability is determined a priori as a Beta distribution estimated through maximum likelihood. Conversely, minimizing traditional calibration metrics may lead to suboptimal results, characterized by notable performance declines and inferior KL values. Our findings reveal limitations in traditional calibration metrics, which could undermine the reliability of predictive models for critical decision-making.
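
To give a rough idea of the kind of criterion discussed in the paper, here is a minimal sketch (not the code from the paper) that computes a discretized Kullback-Leibler divergence between the distribution of predicted scores and the distribution of the true probabilities, on simulated data; the Beta(2,5) reference and the noise level are arbitrary choices for the illustration.

# minimal sketch: discretized KL divergence between score and true-probability distributions
set.seed(1)
n = 1000
p_true = rbeta(n, 2, 5)                                      # "true" probabilities (arbitrary choice)
score  = pmin(pmax(p_true + rnorm(n, 0, .1), 1e-3), 1-1e-3)  # noisy scores, clipped to (0,1)
breaks = seq(0, 1, by=.05)
q = hist(score , breaks=breaks, plot=FALSE)$counts / n       # distribution of the scores
p = hist(p_true, breaks=breaks, plot=FALSE)$counts / n       # distribution of the true probabilities
idx = (q>0) & (p>0)                                          # ignore empty bins
(KL = sum(q[idx] * log(q[idx]/p[idx])))                      # smaller = better alignment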

Trees and forests

For my ACT6100 weekly quiz, I usually generate some datasets, and then ask students to compare various predictive algorithms. Last week, it was about classification trees and random forests. And students were surprised to get such different answers (they had to estimate the probability of a specific label, at the barycenter of the covariates).

Usually, I use the following to generate some (here 12) covariates that could be correlated

library(FactoMineR)
library(clusterGeneration)   # to generate a random correlation matrix
library(mnormt)              # to draw from a multivariate normal distribution
n=279
k=12
S=genPositiveDefMat("unifcorrmat",dim=k)     # random covariance matrix
X=round(rmnorm(n,varcov=S$Sigma)+8,2)        # n observations of k correlated covariates
rownames(X)=1:n
colnames(X)=LETTERS[1:k]

Then I need to generate a response, based on some of those covariates (5 out of 12), with various strengths,

idx = sample(1:k,size=5)             # pick 5 covariates at random
u = sample(c(-(4:1),1:4),5)          # random (non-zero) strengths
beta = rep(0,k)
beta[idx] = u
U = X%*%beta
U = U-min(U)
U = U/max(U)*6-3                     # rescale the linear score to [-3,3]
p = exp(U)/(1+exp(U))                # logistic probabilities
Y = rbinom(n,size=1,prob=p)
df = data.frame(Y=as.factor(Y),X)
levels(df$Y) = c("blue","red")

We can run a classification tree

library(rpart)
arbre = rpart(Y~., data=df)

and a random forest,

library(randomForest)
set.seed(1)
arbres = randomForest(Y~., data=df)

Here are the partial plots for 4 of the explanatory variables that actually have an impact

partialPlot(arbres,pred.data = df, x.var = "A")


The prediction for the “average” point of the dataset is here

(parbre = predict(arbre,newdata=data.frame(t(apply(df[,-1],2,mean))),type = "prob"))
       blue       red
1 0.8064516 0.1935484
(parbres = predict(arbres,newdata=data.frame(t(apply(df[,-1],2,mean))),type = "prob"))
   blue   red
1 0.422 0.578
attr(,"class")
[1] "matrix" "votes"

and there is a substantial difference, with a (predicted) probability of the red class of 19% with a single tree, and 58% with 500 trees (the default number of trees in the function).

To understand why we can have such a difference, we should not only focus on the bagging strategy, but also look at the variability of the predictions obtained with trees fitted on bootstrap samples,

B=1e4
parbres = rep(NA,B)
m=data.frame(t(apply(df[,-1],2,mean)))                   # the "average" point of the dataset
for(b in 1:B){
  idx = sample(1:nrow(df),size=nrow(df),replace=TRUE)    # bootstrap sample
  arbre = rpart(Y~., data=df[idx,])
  parbres[b] = predict(arbre,newdata=m,type = "prob")[2] # probability of the red class
}
hist(parbres)

Surprisingly, we get here a bimodal distribution for \hat{y}, which is either very small for some trees, or very large for others. On average, the value is close to 55%… I think I will use that generative algorithm more for future quizzes…

Classification from scratch, boosting 11/8

Eleventh post of our series on classification from scratch. This should be the last one… unless I forgot something important. Today, we discuss boosting.

An econometrician’s perspective

I might start with a non-conventional introduction. But that’s actually how I understood what boosting was about. And I am quite sure it has to do with my background in econometrics.

The goal here is to solve something that looks like

m^\star=\underset{m\in\mathcal{M}}{\text{argmin}}\left\lbrace\sum_{i=1}^n \ell(y_i,m(\mathbf{x}_i))\right\rbrace

for some loss function \ell, and for some set of predictors \mathcal{M}. This is an optimization problem. Well, the optimization is here in a function space, but still, it is simply an optimization problem. And from a numerical perspective, optimization is solved using gradient descent (which is why this technique is also called gradient boosting). And the gradient descent can be visualized like below

Again, the optimum is not some real value x^\star, but some function m^\star. Thus, here we will have something like

m^{(k)}=m^{(k-1)}+\underset{h\in\mathcal{H}}{\text{argmin}}\left\lbrace \sum_{i=1}^n \ell(y_i,m^{(k-1)}(\mathbf{x}_i)+h(\mathbf{x}_i))\right\rbrace

(as they write it in serious articles) where the term on the right can also be written

m^{(k)}=m^{(k-1)}+\underset{h\in\mathcal{H}}{\text{argmin}}\left\lbrace \sum_{i=1}^n \ell(\underbrace{y_i-m^{(k-1)}(\mathbf{x}_i)}_{\varepsilon_{k,i}},h(\mathbf{x}_i))\right\rbrace

I prefer the latter, because we see clearly that h is some model we fit on the remaining residuals.

We can rewrite it like that: define

r_{i,k}=-\left.\frac{\partial \ell(y_i,m(\mathbf{x}_i))}{\partial m(\mathbf{x}_i)}\right\vert_{m(\mathbf{x}_i)=m^{(k-1)}(\mathbf{x}_i)}

for all i=1,\cdots,n. The goal is to fit a model so that r_{i,k}=h^\star(\mathbf{x}_i), and when we have that optimal function, set m_k(\mathbf{x})=m_{k-1}(\mathbf{x})+\gamma_k h^\star(\mathbf{x}) (yes, we can include some shrinkage here).
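
Just to fix ideas, here is a minimal sketch of that generic loop with the quadratic loss (so that the pseudo-residuals are simply the residuals), a shrinkage parameter gamma, and rpart trees as weak learners; the data and the values used here are just for the illustration, it is not the code used below.

# minimal sketch of the generic boosting loop (quadratic loss, shrinkage gamma)
library(rpart)
set.seed(1)
x = runif(200)*2*pi
y = sin(x) + rnorm(200)/4
gamma = .1
m = rep(mean(y), length(y))                    # start from a constant model
for(k in 1:100){
  r = y - m                                    # pseudo-residuals (here, plain residuals)
  h = rpart(r ~ x, data=data.frame(x=x, r=r))  # weak learner fitted on them
  m = m + gamma*predict(h)                     # update the model, with shrinkage
}
plot(x, y)
lines(sort(x), m[order(x)], col="red", lwd=2)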

Two important comments here. First of all, the idea might seem weird to any econometrician. We first fit a model to explain y by some covariates \mathbf{x}. Then we consider the residuals \widehat{\varepsilon}, and try to explain them with the same covariates \mathbf{x}. If you try that with a linear regression, you’d be done at the end of step 1, since the residuals \widehat{\varepsilon} are orthogonal to the covariates \mathbf{x}: there is no way we can learn from them. Here it works because we consider simple nonlinear models. And actually, something that can be used is to add a shrinkage parameter: do not consider \widehat{\varepsilon}=y-\widehat{m}(\mathbf{x}) but \widehat{\varepsilon}=y-\gamma\widehat{m}(\mathbf{x}). The idea of weak learners is extremely important here. The more we shrink, the longer it takes, but that’s not (too) important.

I should also mention that it is nice to keep learning from our mistakes, but somehow, we should stop, someday. I said that I would not mention this part in this series of posts, maybe later on. But heuristically, we should stop when we start to overfit. And this can be observed either by splitting the initial dataset into training and validation parts, or by using cross-validation. I will get back to that issue later on in this post, but again, those ideas should probably be dedicated to another series of posts.

Learning with splines

Just to make sure we get it, let’s try to learn with splines. Because standard splines have fixed knots, we do not really “learn” here (after a few iterations we get what we would have obtained with a standard spline regression). So here, we will (somehow) optimize the knot locations. There is a package to do so. And just to illustrate, we use a Gaussian regression here, not a classification (we will do that later on). Consider the following dataset (with only one covariate)

n=300
set.seed(1)
u=sort(runif(n)*2*pi)
y=sin(u)+rnorm(n)/4
df=data.frame(x=u,y=y)

For an optimal choice of knot locations, we can use

library(freeknotsplines)
xy.freekt=freelsgen(df$x, df$y, degree = 1, numknot = 2, 555)

With 5% shrinkage, the code is simply the following

v=.05                        # shrinkage parameter
library(splines)
xy.freekt=freelsgen(df$x, df$y, degree = 1, numknot = 2, 555)
fit=lm(y~bs(x,degree=1,knots=xy.freekt@optknot),data=df)
yp=predict(fit,newdata=df)
df$yr=df$y - v*yp            # shrunken residuals
YP=v*yp                      # store the (shrunken) contributions
for(t in 1:200){
  xy.freekt=freelsgen(df$x, df$yr, degree = 1, numknot = 2, 555)
  fit=lm(yr~bs(x,degree=1,knots=xy.freekt@optknot),data=df)
  yp=predict(fit,newdata=df)
  df$yr=df$yr - v*yp
  YP=cbind(YP,v*yp)
}
nd=data.frame(x=seq(0,2*pi,by=.01))
viz=function(M){
  if(M==1) y=YP[,1]
  if(M>1)  y=apply(YP[,1:M],1,sum)          # boosted prediction after M iterations
  plot(df$x,df$y,ylab="",xlab="")
  lines(df$x,y,type="l",col="red",lwd=3)
  fit=lm(y~bs(x,degree=1,df=3),data=df)     # single spline fit, for comparison
  yp=predict(fit,newdata=nd)
  lines(nd$x,yp,type="l",col="blue",lwd=3)
  lines(nd$x,sin(nd$x),lty=2)               # true regression function
}

To visualize the output after 100 iterations, use

viz(100)


Clearly, we see that we learn from the data here… Cool, isn’t it?

Learning with stumps (and trees)

Let us try something else. What if we consider, at each step, a regression tree instead of the piecewise-linear regression we had with linear splines?

library(rpart)
v=.1 
fit=rpart(y~x,data=df)
yp=predict(fit)
df$yr=df$y - v*yp
YP=v*yp
for(t in 1:100){
  fit=rpart(yr~x,data=df)
  yp=predict(fit,newdata=df)
  df$yr=df$yr - v*yp
  YP=cbind(YP,v*yp)}

Again, to visualise the learning process, use

viz=function(M){
  y=apply(YP[,1:M],1,sum)
  plot(df$x,df$y,ylab="",xlab="")
  lines(df$x,y,type="s",col="red",lwd=3)
  fit=rpart(y~x,data=df)                 # single tree, for comparison
  yp=predict(fit,newdata=nd)
  lines(nd$x,yp,type="s",col="blue",lwd=3)
  lines(nd$x,sin(nd$x),lty=2)
}


This time, with those trees, it looks like we not only have a good model, but also a model that differs from the one we would get using a single regression tree.

What if we change the shrinkage parameter?

viz=function(v=0.05){
  fit=rpart(y~x,data=df)
  yp=predict(fit)
  df$yr=df$y - v*yp
  YP=v*yp
  for(t in 1:100){
    fit=rpart(yr~x,data=df)
    yp=predict(fit,newdata=df)
    df$yr=df$yr - v*yp
    YP=cbind(YP,v*yp)
  }
  y=apply(YP,1,sum)
  plot(df$x,df$y,xlab="",ylab="")
  lines(df$x,y,type="s",col="red",lwd=3)
  fit=rpart(y~x,data=df)
  yp=predict(fit,newdata=nd)
  lines(nd$x,yp,type="s",col="blue",lwd=3)
  lines(nd$x,sin(nd$x),lty=2)
}


There is clearly an impact of that shrinkage parameter: it has to be small to get a good model. This is the idea behind using weak learners to get good predictions.

Classification and Adaboost

Now that we understand how boosting works, let’s try to adapt it to classification. It will be more complicated, because residuals are usually not very informative in classification, and it will be hard to shrink. So let’s try something slightly different, to introduce the AdaBoost algorithm.

In our initial discussion, the goal was to minimize a convex loss function. Here, if we express the classes as \{-1,+1\}, the loss function we consider is e^{-y\cdot m(\mathbf{x})} (the product y\cdot m(\mathbf{x}) was already discussed when we looked at the SVM algorithm). Note that the loss function related to the logistic model would be \log(1+e^{-y\cdot m(\mathbf{x})}).
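
Just to visualize how the two differ (a small sketch, nothing more), we can plot the exponential loss and the (rescaled) logistic loss as functions of the margin y\cdot m(\mathbf{x}),

# exponential loss vs (rescaled) logistic loss, as functions of the margin
margin = seq(-2, 2, by=.01)
plot(margin, exp(-margin), type="l", col="red", lwd=2,
     xlab="y * m(x)", ylab="loss")
lines(margin, log(1+exp(-margin))/log(2), col="blue", lwd=2)   # rescaled to equal 1 at 0
abline(v=0, lty=2)
legend("topright", c("exponential","logistic (rescaled)"),
       col=c("red","blue"), lwd=2, bty="n")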

What we do here is related to gradient descent (or the Newton algorithm). Previously, we were learning from our errors: at each iteration, the residuals were computed, and a (weak) model was fitted to these residuals. Then the contribution of this weak model was used in a gradient-descent optimization process. Here things will be different, because (from my understanding) it is more difficult to play with residuals: null residuals never exist in classification. So we will add weights. Initially, all the observations have the same weight. But iteratively, we will change them: we will increase the weights of the wrongly predicted individuals and decrease the ones of the correctly predicted individuals. Somehow, we want to focus more on the difficult predictions. That’s the trick. And I guess that’s why it performs so well. This algorithm is well described on Wikipedia, so we will use that description.

We start with \mathbf{\omega}_0=\mathbf{1}/n, then at each step we fit a model (a classification tree) with weights \mathbf{\omega}_k (we did not discuss weights in the algorithms for trees, but it is actually straightforward in the formulas). Let \widehat{h}_{\mathbf{\omega}_k} denote that model (i.e. the probability in each leaf). Then consider the classifier 2~\mathbf{1}[\widehat{h}_{\mathbf{\omega}_k}(\cdot)>0.5]-1, which returns a value in \{-1,+1\}. Then set

\varepsilon_k=\sum_{i\in\mathcal{I}_k}\omega_i

where \mathcal{I}_k is the set of misclassified individuals,

\mathcal{I}_k=\big\lbrace i:2~\mathbf{1}[\widehat{h}_{\mathbf{\omega}_k}(\mathbf{x}_i)>0.5]-1\neq y_i\big\rbrace

Then set

\alpha_k = \frac{1}{2} \ln \left(\frac{1-\varepsilon_k}{\varepsilon_k}\right)

and finally update the model using

m_{k+1}=m_k+\alpha_k\widehat{h}_{\mathbf{\omega}_k}

as well as the weights

\omega_{k+1,i}=\omega_{k,i}\,e^{-y_i \alpha_k \widehat{h}_{\mathbf{\omega}_k}(\mathbf{x}_i)}

(of course, divide by the sum to ensure that the weights sum to 1). And as previously, one can include some shrinkage. To visualize the convergence of the process, we will plot the total error on our dataset.

n_iter = 100
y = (myocarde[,"PRONO"]==1)*2-1       # classes coded in {-1,+1}
x = myocarde[,1:7]
error = rep(0,n_iter)
f = rep(0,length(y))                  # boosted score m_k(x_i)
w = rep(1,length(y))                  # observation weights
alpha = 1
library(rpart)
for(i in 1:n_iter){
  w = exp(-alpha*y*f)*w               # update the weights
  w = w/sum(w)
  rfit = rpart(y~., x, w, method="class")
  g = -1 + 2*(predict(rfit,x)[,2]>.5) # weak classifier, in {-1,+1}
  e = sum(w*(y*g<0))                  # weighted error
  alpha = .5*log( (1-e) / e )
  alpha = 0.1*alpha                   # shrinkage
  f = f + alpha*g
  error[i] = mean(1*f*y<0)            # misclassification rate of the boosted model
}
plot(seq(1,n_iter),error,type="l",
     ylim=c(0,.25),col="blue",
     ylab="Error Rate",xlab="Iterations",lwd=2)


Here we face a classical problem in machine learning: we have a perfect model, with zero error. That is nice, but not interesting. The same thing can happen in econometrics with polynomial fits: with 10 observations and a polynomial of degree 9, we get a perfect fit, but a poor model. Here it is the same. So the trick is to split our dataset in two: a training dataset, and a validation one.
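
As an aside, the polynomial claim is easy to check (a few lines, unrelated to the myocarde data),

# a degree-9 polynomial interpolates 10 points... perfectly, and uselessly
set.seed(1)
x = 1:10
y = x + rnorm(10)
fit = lm(y ~ poly(x, 9))
max(abs(residuals(fit)))     # essentially zero: a "perfect", but useless, fit

Back to the myocarde data, and the split,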

set.seed(123)
id_train = sample(1:nrow(myocarde), size=45, replace=FALSE)
train_myocarde = myocarde[id_train,]
test_myocarde = myocarde[-id_train,]

We construct the model on the first one, and we check on the second one that it’s not that bad…

y_train = (train_myocarde[,"PRONO"]==1)*2-1
x_train =  train_myocarde[,1:7]
y_test = (test_myocarde[,"PRONO"]==1)*2-1
x_test = test_myocarde[,1:7]
train_error = rep(0,n_iter) 
test_error = rep(0,n_iter)
f_train = rep(0,length(y_train))
f_test = rep(0,length(y_test)) 
w_train = rep(1,length(y_train)) 
alpha = 1
for(i in 1:n_iter){
  w_train = w_train*exp(-alpha*y_train*f_train) 
  w_train = w_train/sum(w_train)
  rfit = rpart(y_train~., x_train, w_train, method="class")
  g_train = -1 + 2*(predict(rfit,x_train)[,2]>.5)
  g_test = -1 + 2*(predict(rfit,x_test)[,2]>.5)
  e_train = sum(w_train*(y_train*g_train<0))
  alpha = .5*log ( (1-e_train) / e_train )
  alpha = 0.1*alpha 
  f_train = f_train + alpha*g_train
  f_test = f_test + alpha*g_test
  train_error[i] = mean(1*f_train*y_train<0)
  test_error[i] = mean(1*f_test*y_test<0)}
plot(seq(1,n_iter),test_error,col='red')
lines(train_error,lwd=2,col='blue')


Here, as previously, after 80 iterations, we have a perfect model on the training dataset, but it behaves badly on the validation dataset. But with 20 iterations, it seems to be ok…

R function

Of course, it’s possible to use R functions,

library(gbm)
gbmWithCrossValidation = gbm(PRONO ~ .,distribution = "bernoulli",
data = myocarde,n.trees = 2000,shrinkage = .01,cv.folds = 5,n.cores = 1)
bestTreeForPrediction = gbm.perf(gbmWithCrossValidation)

Here cross-validation is considered, rather than a training/validation split, as well as forests instead of single trees, but overall, the idea is the same… Of course, the output is much nicer (here the shrinkage is a very small parameter, and learning is extremely slow)
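
To actually use the selected number of trees when predicting, something like the following should work (just a sketch, where newd is a hypothetical data frame with the same seven covariates),

# predicted probabilities, using the cv-selected number of trees (sketch)
newd = myocarde[1:5, 1:7]      # hypothetical "new" observations
predict(gbmWithCrossValidation, newdata = newd,
        n.trees = bestTreeForPrediction, type = "response")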

Growing some Trees

Consider here the dataset used in a previous post, about visualising a classification (with more than 2 features),

> MYOCARDE=read.table(
+ "http://freakonometrics.free.fr/saporta.csv",
+ header=TRUE,sep=";")

The default classification tree is

> library(rpart)
> library(rpart.plot)
> arbre = rpart(factor(PRONO)~.,data=MYOCARDE)
> rpart.plot(arbre,type=4,extra=6)

We can change the options here, such as the minimum number of observations required in a node to attempt a split,

> arbre = rpart(factor(PRONO)~.,data=MYOCARDE,
+       control=rpart.control(minsplit=10))
> rpart.plot(arbre,type=4,extra=6)

or

> arbre = rpart(factor(PRONO)~.,data=MYOCARDE,
+        control=rpart.control(minsplit=5))
> rpart.plot(arbre,type=4,extra=6)


Pricing options on multiple assets

I am a big fan of trees. They are a very nice way to see how financial pricing works, for derivatives. And with a matrix-based language (R for instance), it is extremely simple to compute almost everything. Even options on multiple assets. Let us see how it works. But first, I have to assume that everyone knows about trees and risk neutral probabilities, and is familiar with standard financial derivatives. Just in case, I can upload some old slides of the first course on asset pricing we gave a few years ago at École Polytechnique.

Let us get back to the pricing of (European) call options, with trees. The idea is simple. We have to fix the number of periods. Let us start with only one (as described in the slides above). The stock has price S and can either go up, to price S\,u, or go down, to price S\,d. And the fundamental theorem of asset pricing says that we do not really care about the probabilities of going up, or down. Assuming that we can buy or sell that stock, and that a risk-free asset is available on the market, it is possible to price any contingent financial product, like a financial option. Since we know the final value of the option when the stock goes either up, or down (say C_u and C_d), it is possible to replicate the payoff of that option using the stock and the risk-free asset. And we can prove that the price of the option is simply

C=e^{-r\Delta t}\big[p\,C_u+(1-p)\,C_d\big]

where the probability p is the so-called risk neutral probability,

p=\frac{e^{r\Delta t}-d}{u-d}
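
Just to fix ideas, here is a one-period toy example (the numbers are arbitrary, chosen only to illustrate the two formulas above),

# one-period binomial pricing of a call with strike K=100 (illustrative numbers)
S = 100; u = 1.1; d = 1/u; r = .05; dt = 1; K = 100
p  = (exp(r*dt)-d)/(u-d)                  # risk-neutral probability
Cu = max(S*u-K, 0); Cd = max(S*d-K, 0)    # option values in the up and down states
exp(-r*dt)*(p*Cu + (1-p)*Cd)              # price of the call today
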
So, we’ve done it here with only one single period, but it is possible to extend it to several periods. The idea is to keep that multiplicative representation of possible values of the stock, and to get a recombining tree. At step 2, the stock can take only three different values: went up twice, went down twice, or went up and down (or the reverse, but we don’t care: this is the point of recombining). If we write things down, then we can prove that

C=e^{-rT}\sum_{k=0}^{n}\binom{n}{k}p^k(1-p)^{n-k}\max\big(S\,u^k d^{n-k}-K,0\big)

for some probability parameter p (the so-called risk neutral probability, if it is unique). But we do not really care about those closed formulas, the goal is to write an algorithm which computes the tree, and returns the price of a call option (say). But before starting, we have to make a connection between that model with up and down prices, and the parameters of the Black-Scholes diffusion for the stock price. The idea is to identify the first and the second moments, i.e.

p\,u+(1-p)\,d=e^{r\,T/n}

(where, under the risk neutral probability, the trend is the risk free rate) and

p\,u^2+(1-p)\,d^2=e^{(2r+\sigma^2)\,T/n}

which leads (at first order in T/n) to the usual choices u=e^{\sigma\sqrt{T/n}}, d=1/u and p=\frac{e^{rT/n}-d}{u-d}.

The code might look like that

n=5; T=1; r=0.05; sigma=.4; S=50; K=50
price=function(n){
  u.n=exp(sigma*sqrt(T/n))              # up factor
  d.n=1/u.n                             # down factor
  p.n=(exp(r*T/n)-d.n)/(u.n-d.n)        # risk-neutral probability
  SJ=matrix(0,n+1,n+1)                  # stock prices on the tree
  SJ[1,1]=S
  for(i in 2:(n+1)){
    for(j in 1:i){ SJ[i,j]=S*u.n^(i-j)*d.n^(j-1) }
  }
  OPT=matrix(0,n+1,n+1)
  OPT[n+1,]=(SJ[n+1,]-K)*(SJ[n+1,]>K)   # payoff at maturity
  for(i in n:1){
    for(j in 1:i){                      # backward induction
      OPT[i,j]=exp(-r*T/n)*(OPT[i+1,j]*p.n+(1-p.n)*OPT[i+1,j+1])
    }
  }
  return(OPT[1,1])
}

We can plot the evolution of the price, as a function of the number of time periods (or subdivision of the time interval, from now till maturity of the European option),

N=10:400
V=Vectorize(price)(N)
plot(N,V,type="l")

Note that we can compare with the Black-Scholes price of this call option, given by

BS=S\,\Phi(d_1)-K e^{-rT}\Phi(d_2)

where

d_1=\frac{1}{\sigma\sqrt{T}}\left(\ln\left(\frac{S}{K}\right)+\left(r+\frac{\sigma^2}{2}\right)T\right)

and

d_2=d_1-\sigma\sqrt{T}

(\Phi denoting the cdf of the standard normal distribution)

d1=1/(sigma*sqrt(T))*(log(S/K)+(r+sigma^2/2)*T)
d2=d1-sigma*sqrt(T)
BS=S*pnorm(d1)-K*exp(-r*T)*pnorm(d2)
abline(h=BS,lty=2,col="red")

The code is clearly not optimal but, at least, we see what’s going on. For instance, we do not need a matrix when computing the price of the option by backward recursion: we can just keep a single vector (see the short sketch right after this paragraph). But this matrix is nice, because we can use it to price American options. For instance, with the price.american function below, we compare the price of an American put option and the price of a European put option.
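
For the record, a sketch of that vector-based version could look like the following (same parameters as above; price.vec is just a name used for this illustration),

# European call price computed with a single vector, instead of the full matrix
price.vec=function(n){
  u.n=exp(sigma*sqrt(T/n)); d.n=1/u.n
  p.n=(exp(r*T/n)-d.n)/(u.n-d.n)
  V=pmax(S*u.n^(n:0)*d.n^(0:n)-K,0)     # payoffs at maturity
  for(i in n:1) V=exp(-r*T/n)*(p.n*V[1:i]+(1-p.n)*V[2:(i+1)])   # backward recursion
  V
}
price.vec(200)   # should match price(200)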

price.american=function(n,opt="put"){
  u.n=exp(sigma*sqrt(T/n)); d.n=1/u.n
  p.n=(exp(r*T/n)-d.n)/(u.n-d.n)
  SJ=matrix(0,n+1,n+1)
  SJ[1,1]=S
  for(i in 2:(n+1)){
    for(j in 1:i){ SJ[i,j]=S*u.n^(i-j)*d.n^(j-1) }
  }
  OPTe=matrix(0,n+1,n+1)    # European option values
  OPTa=matrix(0,n+1,n+1)    # American option values
  if(opt=="call"){
    OPTa[n+1,]=(SJ[n+1,]-K)*(SJ[n+1,]>K)
    OPTe[n+1,]=(SJ[n+1,]-K)*(SJ[n+1,]>K)
  }
  if(opt=="put"){
    OPTa[n+1,]=(K-SJ[n+1,])*(SJ[n+1,]<K)
    OPTe[n+1,]=(K-SJ[n+1,])*(SJ[n+1,]<K)
  }
  for(i in n:1){
    for(j in 1:i){
      # American value: max of immediate exercise and continuation value
      if(opt=="call"){
        OPTa[i,j]=max((SJ[i,j]-K)*(SJ[i,j]>K),
                      exp(-r*T/n)*(OPTa[i+1,j]*p.n+(1-p.n)*OPTa[i+1,j+1]))
      }
      if(opt=="put"){
        OPTa[i,j]=max((K-SJ[i,j])*(K>SJ[i,j]),
                      exp(-r*T/n)*(OPTa[i+1,j]*p.n+(1-p.n)*OPTa[i+1,j+1]))
      }
      # European value: discounted expectation only
      OPTe[i,j]=exp(-r*T/n)*(OPTe[i+1,j]*p.n+(1-p.n)*OPTe[i+1,j+1])
    }
  }
  priceop=c(OPTe[1,1],OPTa[1,1])
  names(priceop)=c("E","A")
  return(priceop)
}

It is possible to compare those prices, obtained on trees, with the prices given by closed (approximate) formulas.

> d1=1/(sigma*sqrt(T))*(log(S/K)+(r+sigma^2/2)*T)
> d2=d1-sigma*sqrt(T)
> (BS=-S*pnorm(-d1)+K*exp(-r*T)*pnorm(-d2)  )
[1] 6.572947
> N=10:200
> M=Vectorize(price.american)(N)
> plot(N,M[1,],type='l',col='blue',ylim=range(M))
> lines(N,M[2,],type='l',col='red')
> abline(h=BS,lty=2,col='blue')
> library(fOptions)
> (am=BAWAmericanApproxOption(TypeFlag =
+ "p", S = S,X = K, Time = T, r = r,
+ b = r, sigma =sigma)@price)
[1] 6.840335
> abline(h=am,lty=2,col='red')

Another great thing with trees is that it becomes possible to plot the region where it is optimal to exercise our right to sell the stock (i.e. to exercise the American put before maturity).
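
A possible sketch to draw that region (flagging, at each node of the tree, the points where the value of the American put equals the immediate exercise value) could look like this; the plotting choices are mine, just for the illustration,

# flag the nodes where early exercise of the American put is optimal
n=200
u.n=exp(sigma*sqrt(T/n)); d.n=1/u.n
p.n=(exp(r*T/n)-d.n)/(u.n-d.n)
SJ=matrix(NA,n+1,n+1); OPTa=matrix(NA,n+1,n+1)
for(i in 1:(n+1)) for(j in 1:i) SJ[i,j]=S*u.n^(i-j)*d.n^(j-1)
OPTa[n+1,]=pmax(K-SJ[n+1,],0)
EX=matrix(FALSE,n+1,n+1)
for(i in n:1) for(j in 1:i){
  cont=exp(-r*T/n)*(OPTa[i+1,j]*p.n+(1-p.n)*OPTa[i+1,j+1])   # continuation value
  OPTa[i,j]=max(K-SJ[i,j],cont)
  EX[i,j]=(K-SJ[i,j] >= cont)          # exercising now is (weakly) optimal
}
idx=which(EX,arr.ind=TRUE)
plot((idx[,1]-1)*T/n, SJ[idx], pch=".", xlab="time", ylab="stock price")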

Let us move now to a model with two assets, as suggested by Rubinstein (1994). First, observe that a discretization of two independent Brownian motions will be based on two independent random walks, taking values in

\big\lbrace(+1,+1),\ (+1,-1),\ (-1,+1),\ (-1,-1)\big\rbrace

i.e. both went up (NW), both went down (SE), or one went up while the other went down (either NE or SW). With independent and symmetric random walks, the probabilities are 1/4 each. And if we move one step forward, we have the following tree.

Here it is still recombining. But the size will increase much faster than in the univariate case. Now, assume that there might be some correlation. Then one can consider the following values, to have a specific correlation,

And again, the idea is then to identify the first two moments. This gives us the following system of equations for the four respective (risk neutral) probabilities

For those willing to do the maths, please do. The answer should be (with \nu_i=r-\sigma_i^2/2 and \Delta t=T/n)

p_{uu}=\frac{1}{4}\left(1+\rho+\sqrt{\Delta t}\left(\frac{\nu_1}{\sigma_1}+\frac{\nu_2}{\sigma_2}\right)\right),\quad p_{ud}=\frac{1}{4}\left(1-\rho+\sqrt{\Delta t}\left(\frac{\nu_1}{\sigma_1}-\frac{\nu_2}{\sigma_2}\right)\right),\quad p_{du}=\frac{1}{4}\left(1-\rho+\sqrt{\Delta t}\left(-\frac{\nu_1}{\sigma_1}+\frac{\nu_2}{\sigma_2}\right)\right)

and for the last one

p_{dd}=\frac{1}{4}\left(1+\rho+\sqrt{\Delta t}\left(-\frac{\nu_1}{\sigma_1}-\frac{\nu_2}{\sigma_2}\right)\right)

The code here looks like that

price.spead=function(n){
  T=1; r=0.05; K=0
  S1=105; S2=100
  sigma1=0.4; sigma2=0.3
  rho=0.5
  u1.n=exp(sigma1*sqrt(T/n)); d1.n=1/u1.n
  u2.n=exp(sigma2*sqrt(T/n)); d2.n=1/u2.n
  v1=r-sigma1^2/2; v2=r-sigma2^2/2
  puu.n=(1+rho+sqrt(T/n)*( v1/sigma1+v2/sigma2))/4   # both up
  pud.n=(1-rho+sqrt(T/n)*( v1/sigma1-v2/sigma2))/4   # asset 1 up, asset 2 down
  pdu.n=(1-rho+sqrt(T/n)*(-v1/sigma1+v2/sigma2))/4   # asset 1 down, asset 2 up
  pdd.n=(1+rho+sqrt(T/n)*(-v1/sigma1-v2/sigma2))/4   # both down
  k=0:n
  un=matrix(1,n+1,1)
  SJ=(S1*d1.n^k*u1.n^(n-k)) %*% t(un) -              # spread S1-S2 at maturity
     un %*% t(S2*d2.n^k*u2.n^(n-k))
  OPT=(SJ)*(SJ>K)                                    # payoff of the spread call
  for(k in n:1){
    OPT0=matrix(0,k,k)
    for(i in 1:k){
      for(j in 1:k){
        OPT0[i,j]=(OPT[i,j]*puu.n+OPT[i+1,j]*pdu.n+
                   OPT[i,j+1]*pud.n+OPT[i+1,j+1]*pdd.n)*exp(-r*T/n)
      }
    }
    OPT=OPT0
  }
  return(OPT[1,1])
}

If we look at the details, and consider two periods, as in the figure above, there are nine values for the spread,

> n=2
> SJ
          [,1]      [,2]       [,3]
[1,]  32.02217  84.86869 119.443578
[2,] -47.84652   5.00000  39.574891
[3,] -93.20959 -40.36308  -5.788184

and the payoff of the option is here

> OPT
         [,1]     [,2]      [,3]
[1,] 32.02217 84.86869 119.44358
[2,]  0.00000  5.00000  39.57489
[3,]  0.00000  0.00000   0.00000

So if we go backward one step, we have the following square of values

> k=n
> OPT0<-matrix(0,k,k)
> for(i in(1:k))
+ {
+   for(j in(1:k))
+   {
+     OPT0[i,j]=(OPT[i,j]*puu.n+OPT[i+1,j]*pdu.n+
+ OPT[i,j+1]*pud.n+OPT[i+1,j+1]*pdd.n)*exp(-r*T/n)
+ }
+ }
> OPT0
           [,1]      [,2]
[1,] 22.2741190 58.421275
[2,]  0.5305465  5.977683

The idea is then to move backward once more,

> OPT=OPT0
> k=1
> OPT0<-matrix(0,k,k)
> for(i in(1:k))
+ {
+   for(j in(1:k))
+   {
+     OPT0[i,j]=(OPT[i,j]*puu.n+OPT[i+1,j]*pdu.n+
+ OPT[i,j+1]*pud.n+OPT[i+1,j+1]*pdd.n)*exp(-r*T/n)
+ }
+ }
> OPT0
         [,1]
[1,] 16.44106

Here, with 250 time steps, calculations take much (much) longer,

> price.spead(250)
[1]  15.66496

and again, it is possible to use standard approximations to compare that price with a more standard one,

> (sp=SpreadApproxOption(TypeFlag =
+ "c", S1 = 105, S2 = 100, X = 0,
+ Time = 1, r = .05, sigma1 = .4,
+ sigma2 = .3, rho = .5)@price)
[1]  15.65077

Well, playing with trees is nice, but it might not be optimal for complex products. Next time, we’ll discuss other techniques…