Optimal Portfolios, or sort of…

Last week, we had our first class on portfolio optimization. We saw Markowitz's theory, where expected returns and the covariance matrix are given,

> download.file(url="http://freakonometrics.free.fr/portfolio.r",destfile = "portfolio.r")
> source("portfolio.r")
> library(zoo)
> library(FRAPO)
> library(IntroCompFinR)
> library(rrcov)
> data( StockIndex )
> pzoo = zoo ( StockIndex , order.by = rownames ( StockIndex ) )
> rzoo = ( pzoo / lag ( pzoo , k = -1) - 1 ) * 100
> Moments <- function ( x , method = c ( "CovClassic" , "CovMcd" , "CovMest" , "CovMMest" , "CovMve" , "CovOgk" , "CovSde" , "CovSest" ) , ... ) {
+ method <- match.arg ( method )
+ ans <- do.call ( method , list ( x = x , ... ) )
+ return ( getCov ( ans ) ) }
> covmat=Moments(as.matrix(rzoo),"CovClassic")
> (covmat=round(covmat,1))
        SP500 N225 FTSE100 CAC40 GDAX  HSI
SP500    17.8 12.7    13.8  17.8 19.5 18.9
N225     12.7 36.6    10.8  15.0 16.2 16.7
FTSE100  13.8 10.8    17.3  18.8 19.4 19.1
CAC40    17.8 15.0    18.8  30.9 29.9 22.8
GDAX     19.5 16.2    19.4  29.9 38.0 26.1
HSI      18.9 16.7    19.1  22.8 26.1 58.1
> er=apply(as.matrix(rzoo),2,mean)
> (er=round(er,1))
  SP500    N225 FTSE100   CAC40    GDAX     HSI
    0.6    -0.2     0.4     0.5     0.8     1.0
> ef <- efficient.frontier(er, covmat, alpha.min=-2.5, alpha.max=2.5, nport=50)

We can now visualize the efficient frontier (and admissible portfolios) below

> u=c(12,ef$sd,12,12)
> v=c(5,ef$er,-1,5)
> plot(ef$sd,ef$er,type="l",xlab="Standard Deviation",ylab="Expected Return", xlim=c(3.5,11),ylim=c(0,2.5),col="red",lwd=1.5)
> points(sqrt(diag(covmat)),er,pch=19,col="blue")
> text(sqrt(diag(covmat)),er,names(er),pos=4, col="blue",cex=.6)
> polygon(u,v,border=NA,col=rgb(0,0,1,.3))

https://freakonometrics.hypotheses.org/files/2017/11/image-voronoi-post-026-1.png

That was the starting point of our class. We also mentioned that something important is actually hard to visualize on that graph: the correlation between returns. It is not in the points (which are univariate, with only an expected return and a standard deviation), but in the efficient frontier. For instance, here is our correlation matrix

> (cormat=covmat/(sqrt(diag(covmat) %*% t(diag(covmat)))))
        SP500 N225 FTSE100 CAC40 GDAX  HSI
SP500    1.00 0.50    0.79  0.76 0.75 0.59
N225     0.50 1.00    0.43  0.45 0.43 0.36
FTSE100  0.79 0.43    1.00  0.81 0.76 0.60
CAC40    0.76 0.45    0.81  1.00 0.87 0.54
GDAX     0.75 0.43    0.76  0.87 1.00 0.56
HSI      0.59 0.36    0.60  0.54 0.56 1.00
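
As a side note, the same correlation matrix can be obtained directly with base R's cov2cor; a quick check, reusing covmat and cormat from above,

# equivalent computation with stats::cov2cor
all.equal(cov2cor(covmat), cormat)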

We can actually change the correlation between the SP500 and the FTSE100 (which is here .786)

courbe=function(r=.786){
# replace the SP500-FTSE100 correlation by r and rebuild the covariance matrix
R=cormat
R[1,3]=R[3,1]=r
covmat2=(sqrt(diag(covmat) %*% t(diag(covmat))))*R
ef <- efficient.frontier(er, covmat2, alpha.min=-2.5, alpha.max=2.5, nport=50)
plot(ef$sd,ef$er,type="l",xlab="Standard Deviation",ylab="Expected Return",
xlim=c(3.5,11),ylim=c(0,2.5),col="red",lwd=1.5)
points(sqrt(diag(covmat)),er,pch=19,col=c("blue","red")[c(2,1,2,1,1,1)])
text(sqrt(diag(covmat)),er,names(er),pos=4,col=c("blue","red")[c(2,1,2,1,1,1)],cex=.6)
polygon(u,v,border=NA,col=rgb(0,0,1,.3))
}

For instance, with a correlation of 0.6, we get the following efficient frontier

> courbe(.6)

and with a stronger correlation

> courbe(.9)

So clearly, correlation does matter. A lot. But more importantly, one should keep in mind that expected returns and covariances are not given, but estimated. So far, we used the standard estimator of the covariance matrix. But another (more robust) estimator can be considered

covmat=Moments(as.matrix(rzoo),"CovSde")
er=apply(as.matrix(rzoo),2,mean)
ef <- efficient.frontier(er, covmat, alpha.min=-2.5, alpha.max=2.5, nport=50)
plot(ef$sd,ef$er,type="l",xlab="Standard Deviation",ylab="Expected Return",xlim=c(3.5,11),ylim=c(0,2.5),col="red",lwd=1.5)
points(sqrt(diag(covmat)),er,pch=19,col="blue")
text(sqrt(diag(covmat)),er,names(er),pos=4,col="blue",cex=.6)
polygon(u,v,border=NA,col=rgb(0,0,1,.3))

It did influence the (horizontal) position of the points, since the variances are now different, as well as the efficient frontier: clearly, much lower variances can now be reached.
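
To see that effect on the marginal risks, here is a quick side-by-side comparison of the standard deviations, simply reusing the Moments function defined above (a small sketch, not in the original code),

# (marginal) standard deviations, classical vs robust (CovSde) estimators
cbind(classic = sqrt(diag(Moments(as.matrix(rzoo), "CovClassic"))),
      robust  = sqrt(diag(Moments(as.matrix(rzoo), "CovSde"))))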

And to illustrate a last point, namely the fact that our estimators are based on observed returns, what if we had observed different ones? A way to get an idea of what might have happened is to use a bootstrap, e.g. on the returns.

> covmat=Moments(as.matrix(rzoo),"CovClassic")
> er=apply(as.matrix(rzoo),2,mean)
> ef <- efficient.frontier(er, covmat, alpha.min=-2.5, alpha.max=2.5, nport=50)
> a=sqrt(diag(covmat))
> b=er
> k=1
> plot(ef$sd,ef$er,type="l",xlab="Standard Deviation",ylab="Expected Return", xlim=c(3.5,11),ylim=c(0,2.5),col="white",lwd=1.5)
> polygon(u,v,border=NA,col=rgb(0,0,1,.3))
> for(i in 1:100){
+ id=sample(nrow(rzoo),replace=TRUE)
+ covmat=Moments(as.matrix(rzoo)[id,],"CovClassic")
+ er=apply(as.matrix(rzoo)[id,],2,mean)
+ points(sqrt(diag(covmat))[k],er[k],cex=.5)
+ }

or, changing the index k, for another asset

Here is what we got on the (estimated) efficient frontier

> covmat=Moments(as.matrix(rzoo),"CovClassic")
> er=apply(as.matrix(rzoo),2,mean)
> ef <- efficient.frontier(er, covmat, alpha.min=-2.5, alpha.max=2.5, nport=50)
> plot(ef$sd,ef$er,type="l",xlab="Standard Deviation",ylab="Expected Return", xlim=c(3.5,11),ylim=c(0,2.5),col="white",lwd=1.5)
> points(sqrt(diag(covmat)),er,pch=19,col="blue")
> text(sqrt(diag(covmat)),er,names(er),pos=4, col="blue",cex=.6)
> polygon(u,v,border=NA,col=rgb(0,0,1,.3))
> for(i in 1:100){
+ id=sample(nrow(rzoo),replace=TRUE)
+ covmat=Moments(as.matrix(rzoo)[id,],"CovClassic")
+ er=apply(as.matrix(rzoo)[id,],2,mean)
+ ef <- efficient.frontier(er, covmat, alpha.min=-2.5, alpha.max=2.5, nport=50)
+ lines(ef$sd,ef$er,col="red")
+ }

Thus, it is somewhat difficult to assess whether a portfolio is optimal, or not… at least from a statistical perspective…
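
One way to quantify that uncertainty is to look at how much the "optimal" allocation itself moves across bootstrap samples. Below is a minimal sketch (not part of the original class material), assuming globalMin.portfolio from IntroCompFinR is used to extract the weights of the global minimum variance portfolio on each bootstrapped sample,

# bootstrap the weights of the global minimum variance portfolio (sketch only)
set.seed(123)
W = matrix(NA, 100, ncol(rzoo))
colnames(W) = colnames(rzoo)
for(i in 1:100){
id = sample(nrow(rzoo), replace=TRUE)
covmat_b = Moments(as.matrix(rzoo)[id,], "CovClassic")
er_b = apply(as.matrix(rzoo)[id,], 2, mean)
W[i,] = globalMin.portfolio(er_b, covmat_b, shorts=TRUE)$weights
}
boxplot(W, ylab="weight", main="Bootstrapped minimum variance weights")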

Optimal Portfolios #1

This afternoon, I will start a crash course on financial portfolio optimization, with applications in R. This week, we start with simple things: the theoretical setup, without and with a risk-free asset. We will then discuss the problem of estimating the parameters, in a robust way. Then we introduce the idea of considering more general criteria than the variance to quantify risk (but it means more general distributions… this point will be discussed further next time). The slides are available here, and the R codes from there (in a Markdown document).
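
As a (hedged) teaser for the part with a risk-free asset, a tangency portfolio can already be computed on the data used above, with tangency.portfolio from IntroCompFinR; a minimal sketch, where the risk-free rate of 0.1% (per period, in the same units as the returns) is purely illustrative,

# tangency portfolio, reusing er and covmat from the previous section
# (the risk-free rate is an arbitrary value, for illustration only)
rf = 0.1
tp = tangency.portfolio(er, covmat, risk.free=rf, shorts=TRUE)
tp$weights
c(expected.return=tp$er, standard.deviation=tp$sd)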

Traffic Flow of Kota Kinabalu (with R)

This morning, we had our first practicals on network flows, using an example mentioned in some papers published by Noraini Abdullah and Ting Kien Hua, max flow min cut theorem to minimize traffic congestion in Kota Kinabalu and application of the Shortest Path and Maximum Flow with Bottleneck in Traffic Flow of Kota Kinabalu. From the roads mentioned in the articles, I tried my best to locate the nodes on a map,

# each column is (node id, latitude, longitude)
m=matrix(c(0,5.995910, 116.105520,
1,5.992737, 116.093718,
2,5.992066, 116.109883,
3,5.976947, 116.095760,
4,5.985766, 116.091580,
5,5.988940, 116.080112,
6,5.968318, 116.080764,
7,5.977454, 116.075460,
8,5.974226, 116.073604,
9,5.969651, 116.073753,
10,5.972341, 116.069270,
11,5.978818, 116.072880),3,12)

which can be visualized below

library(OpenStreetMap)
map = openmap(c(lat= 6.000, lon= 116.06),
c(lat= 5.960, lon= 116.12))
map=openproj(map)
plot(map)
points(t(m[3:2,]),col="black", pch=19, cex=3 )
text(t(m[3:2,]),c("s",1:10,"t"),col="white")

If the source is realistic (up north), I do not feel very comfortable with the location of the sink (in the west). But let's pretend it's fine (to do the maths, at least).

To extract information about edge capacities on that network, use the following code, which will extract the three tables from the paper

library(devtools)
install_github("ropensci/tabulizer")
library(tabulizer)
location <- 'http://www.jistm.com/PDF/JISTM-2017-04-06-02.pdf'
out <- extract_tables(location)

With Windows, it seems to be necessary to install another package first

library(devtools)
install_github("ropensci/tabulizerjars")
install_github("ropensci/tabulizer")
library(tabulizer)
location <- 'http://www.jistm.com/PDF/JISTM-2017-04-06-02.pdf'
out <- extract_tables(location)

Now we can get our data frame with capacities

B1=as.data.frame(out[[2]])
B2=as.data.frame(out[[3]])
E=data.frame(from=B1[3:20,"V3"],
to=B1[3:20,"V4"])
E=E[-c(6,8),]
capacity=as.character(B2$V3[-1])
capacity[6]="843"
capacity[4]="2913"
E$capacity=as.numeric(capacity)

We can add those edges to our map (without the arrows indicating the direction; it would be too heavy to read)

plot(map)
points(t(m[3:2,]),col="black", pch=19, cex=3 )
B=data.frame(i=as.character(c("s",paste("V",1:10,sep=""),"t")),
x=m[3,],y=m[2,])
for(i in 1:nrow(E)){
i1=which(B$i==as.character(E$from[i]))
i2=which(B$i==as.character(E$to[i]))
segments(B[i1,"x"],B[i1,"y"],B[i2,"x"],B[i2,"y"],lwd=3)
}
text(t(m[3:2,]),c("s",1:10,"t"),col="white")

To get the graph with capacities, an alternative is to use

library(igraph)
g=graph_from_data_frame(E)
E(g)$label=E$capacity
plot(g)

but it does not respect the geographical locations of the nodes. That can actually be done using

plot(g, layout=as.matrix(B[,c("x","y")]))

To get a better understanding of the capacities of the roads, use

plot(g, layout=as.matrix(B[,c("x","y")]),
edge.width=E$capacity/200)

From that network with capacities, the goal is to determine the maximum flow on that network, from the source to the sink. This can be done with R using

> (m=max_flow(graph=g, source="s", target="t"))
$value
[1] 2571

$flow
[1] 1191 1380 1422 1380 231 0 231 0 1149 1422 1149 0 0 1149 1422
[16] 1149

Our maximum flow is here 2571, which is different from what is actually claimed in both papers, max flow min cut theorem to… and application of the Shortest Path… (“the maximum flow for the capacitated network with 12 nodes and 16 edges of the selected scope in this study was 2598 vehicles per hour“), where there are clearly typos, since the values in the table and on the graph are different. Here I used the ones from the tables.
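
Since those papers rely on the max-flow min-cut theorem, a quick cross-check (a small sketch, reusing the graph g built above) is to compute the value of a minimum s-t cut with igraph; by the theorem, it should match the 2571 obtained above,

# value of a minimum s-t cut (equal to the maximum flow, by max-flow min-cut)
min_cut(graph=g, source="s", target="t", value.only=TRUE)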

E$flux1=m$flow
E(g)$label=E$flux1
plot(g, layout=as.matrix(B[,c("x","y")]),
edge.width=E$flux1/200)

That is nice, but rather odd. Actually, a much simpler flow can be considered, with the same global value

E$flux2=c(1422,1149,1422,1149,0,0,0,0,
1149,1422,1149,0,0,1149,1422,1149)
E(g)$label=E$flux2
plot(g, layout=as.matrix(B[,c("x","y")]),
edge.width=E$flux2/200)
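
To convince ourselves that this alternative flow is indeed admissible, here is a small sanity check (a sketch, assuming the source and sink are labelled "s" and "t", as in the max_flow call above): capacities are respected, flow is conserved at intermediate nodes, and the value at the sink is unchanged.

# capacities respected?
all(E$flux2 <= E$capacity)
# conservation at intermediate nodes (all differences should be zero)
nodes = setdiff(unique(c(as.character(E$from), as.character(E$to))), c("s","t"))
sapply(nodes, function(v) sum(E$flux2[E$to==v]) - sum(E$flux2[E$from==v]))
# net flow into the sink (should still be 2571)
sum(E$flux2[E$to=="t"]) - sum(E$flux2[E$from=="t"])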

Nice, isn’t it? It is actually possible to do exactly the same with another paper they have on the same city, traffic congestion problem of road networks in Kota Kinabalu.

location <- 'http://www.worldresearchlibrary.org/up_proc/pdf/999-150486366625-30.pdf'
out <- extract_tables(location)
dim(out[[3]])
B1=as.data.frame(out[[3]])
E=data.frame(from=B1[2:61,"V2"],
to=B1[2:61,"V3"],
capacity=B1[2:61,"V4"])
E$capacity=as.numeric(
as.character(E$capacity))
library(igraph)
g=graph_from_data_frame(E)
m=max_flow(graph=g,
source="S",
target="T")
E$flux1=m$flow
E(g)$label=E$flux1
plot(g,
edge.width=E$flux1/200,
edge.arrow.size=0.15)

Here the value of the maximal flow is 4017, just as they found in the original paper.

Multinomial Logit as an Iterated Logit Regression

For the second section of the course at ENSAE, yesterday, we saw how to run a multinomial logistic regression model. It is simply an extension of the binomial logistic regression. But actually, it is also possible to consider iterated binomial regressions.

Consider here a response variable Y with a multinomial distribution (3 levels, to have something more general than the binomial case), taking values \{A,B,C\}, with respective probabilities \mathbf{p}=(p_A,p_B,p_C). Here is some code to generate such multinomial variables

msample=function(A,B,C){
# draw B values from A, row i of C giving the probabilities of draw i
Y=rep(NA,B)
for(i in 1:B){Y[i]=sample(A,size=1,prob=C[i,])}
return(Y)
}

and here is some code to generate a dataset with n rows,

generate3=function(n,x,pb=c(-2,0)){
set.seed(x)
X1=runif(n)
X2=runif(n)
X3=runif(n)
s1=pb[1]+X1+X2
s2=pb[2]-X1+X2
P1=exp(s1)/(1+exp(s1)+exp(s2))
P2=exp(s2)/(1+exp(s1)+exp(s2))
Y=msample(0:2,n,cbind(1-P1-P2,P1,P2))
df=data.frame(Y=Y,X1=X1,X2=X2,X3=X3)
return(df)
}

Let us generate a training dataset and a validation one

pb=c(.31,.42)
DF1=generate3(1000,1,pb=pb)
DF2=generate3(500,2,pb=pb)

With a multinomial logistic regression
\mathbb{P}[Y=A|\mathbf{x}]=\frac{\exp[\mathbf{x}^{\text{T}}\mathbf{\alpha}]}{1+\exp[\mathbf{x}^{\text{T}}\mathbf{\alpha}]+\exp[\mathbf{x}^{\text{T}}\mathbf{\beta}]}
\mathbb{P}[Y=B|\mathbf{x}]=\frac{\exp[\mathbf{x}^{\text{T}}\mathbf{\beta}]}{1+\exp[\mathbf{x}^{\text{T}}\mathbf{\alpha}]+\exp[\mathbf{x}^{\text{T}}\mathbf{\beta}]}
\mathbb{P}[Y=C|\mathbf{x}]=\frac{1}{1+\exp[\mathbf{x}^{\text{T}}\mathbf{\alpha}]+\exp[\mathbf{x}^{\text{T}}\mathbf{\beta}]}

For convenience, consider the most frequent level in our training dataset

modalite=names(sort(table(DF1$Y),decreasing = TRUE))

Consider a (multinomial) regression model on the simulated dataset (with several covariates); let us estimate it and get predictions.

library(nnet)
reg=multinom(as.factor(Y) ~ ., data = DF1)
mp1=predict (reg, DF1, "probs")
mp2=predict (reg, DF2, "probs")

An alternative can be the following:
consider a first regression model on the Bernoulli variable Y_A=\mathbf{1}(Y=A). Actually, we will use the most frequent level, but for convenience, assume that it is A.
\mathbb{P}[Y_A=1|\mathbf{x}]=\frac{\exp[\mathbf{x}^{\text{T}}\mathbf{a}]}{1+\exp[\mathbf{x}^{\text{T}}\mathbf{a}]}
On our dataset, estimate that model, and get predictions. In the case where Y\neq A, define another Bernoulli variable Y_B=\mathbf{1}(Y=B|Y\neq A). We can estimate that model and derive two probabilities, \mathbb{P}(Y=B|Y\neq A) and \mathbb{P}(Y=C|Y\neq A) (the two summing to 1). Based on those two models, it is possible to compute the three probabilities we are looking for: \mathbb{P}[Y=A] is obtained from the first model, and we can derive the other two from \mathbb{P}[Y=B|Y\neq A]\cdot\mathbb{P}[Y\neq A] and \mathbb{P}[Y=C|Y\neq A]\cdot\mathbb{P}[Y\neq A].

reg1=glm((Y==modalite[1])~.,data=DF1,family=binomial)
reg2=glm((Y==modalite[2])~.,data=DF1[-which(DF1$Y==modalite[1]),],family=binomial)
p11=predict (reg1, newdata=DF1, type="response")
p12=predict (reg2, newdata=DF1, type="response")
p21=predict (reg1, newdata=DF2, type="response")
p22=predict (reg2, newdata=DF2, type="response")
mmp1=cbind(p11,(1-p11)*p12,(1-p11)*(1-p12))
mmp2=cbind(p21,(1-p21)*p22,(1-p21)*(1-p22))
colnames(mmp1)=colnames(mmp2)=modalite

Let us compare the predicted probabilities, on the same dataset (here the training dataset)

> mmp1[1:9,c("0","1","2")]
0 1 2
1 0.19728737 0.4991805 0.3035321
2 0.17244580 0.5648537 0.2627005
3 0.19291753 0.5971058 0.2099767
4 0.09087176 0.7787304 0.1303978
5 0.23400225 0.4083022 0.3576955
6 0.18063647 0.6637352 0.1556283
7 0.13188881 0.7402710 0.1278401
8 0.13776970 0.6524959 0.2097344
9 0.12325864 0.6790336 0.1977078
> mp1[1:9,c("0","1","2")]
0 1 2
1 0.19691036 0.5022692 0.3008205
2 0.17123189 0.5680647 0.2607034
3 0.19293066 0.5984402 0.2086291
4 0.08821851 0.7813318 0.1304497
5 0.23470739 0.4109990 0.3542936
6 0.18249687 0.6602168 0.1572863
7 0.13128711 0.7400898 0.1286231
8 0.13525341 0.6553618 0.2093848
9 0.12090016 0.6815915 0.1975084

The two sets are very close. So yes, it is possible to see the multinomial regression as a sequence of binomial regressions.
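
To quantify how close, here is a quick numerical check, reusing the predictions computed above and comparing the two sets of probabilities column by column,

# largest absolute difference between the two approaches
max(abs(mp1[,c("0","1","2")] - mmp1[,c("0","1","2")]))
# and on the validation dataset
max(abs(mp2[,c("0","1","2")] - mmp2[,c("0","1","2")]))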

Nodal Regions and Flows

For the practicals on networks and flows, we will use the R package flows, dedicated to flows on networks

library(flows)
data(nav)
myflows <- prepflows(mat = nav, i = "i", j = "j", fij = "fij")
diag(myflows) <- 0

Select flows that represent at least 20% of the sum of outgoing flows for each urban area.

flowSel1 <- firstflows(mat = myflows/rowSums(myflows)*100, method = "xfirst",k = 20)
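
Just to make sure the selection does what we expect, here is a small check (a sketch, assuming firstflows returns a 0/1 selection matrix): the shares of the selected flows should all be at least 20%.

# shares of outgoing flows (in %), and minimum share among selected flows
share <- myflows/rowSums(myflows)*100
min(share[flowSel1 == 1])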

Then select the dominant flows (incoming flows criterion)

flowSel2 <- domflows(mat = myflows, w = colSums(myflows), k = 1)
flowSel <- myflows * flowSel1 * flowSel2
inflows <- data.frame(id = colnames(myflows), w = colSums(myflows))

and finally plot the dominant flows map

opar <- par(mar = c(0,0,2,0))
sp::plot(GE, col = "#cceae7", border = NA)
plotMapDomFlows(mat = flowSel, spdf = UA, spdfid = "ID", w = inflows, wid = "id",wvar = "w", wcex = 0.05, add = TRUE,legend.flows.pos = "topright",legend.flows.title = "Nb. of commuters")
title("Dominant Flows of Commuters")

The code to get the background map is based on the GE object, defined in that package.

To go further on dominant flows, read Nystuen & Dacey (1961).

In the last course, next week, we will discuss two extensions that were not mentioned in the course. The first one is about congestion models. The second one is a nice application of flows to discuss sports issues in the NBA (or the NHL).

Traffic Flow of Kota Kinabalu (Malaysia)

For the second practicals of our course on networks and flows, we will study the traffic flow of Kota Kinabalu (Malaysia), following several papers published by Noraini Abdullah and Ting Kien Hua, such as max flow min cut theorem to minimize traffic congestion in Kota Kinabalu, traffic congestion problem of road networks in Kota Kinabalu and application of the Shortest Path and Maximum Flow with Bottleneck in Traffic Flow of Kota Kinabalu.

 

Segmentation and Risk Pooling in Insurance, in Québec City

This afternoon, I will give a talk at Université Laval in Québec City. I am delighted to be back, especially since it will be (at least) my fifth talk on that campus, in four different departments (actuarial science, statistics, computer science, and once again the Department of Finance, Insurance and Real Estate of the Faculté des Sciences d’Administration).

The slides of the talk are online.


 

 

Categorical Variables and the Logistic Model

A small follow-up, further to the typos in the slides, on a property of logistic regression when regressing on categorical variables. We were working on the lawyer dataset

> avocat <- read.table("http://freakonometrics.free.fr/AutoBI.csv",header=TRUE,sep=",")
> avocat$CLMSEX <- factor(avocat$CLMSEX, labels=c("M","F"))
> avocat$MARITAL <- factor(avocat$MARITAL, labels=c("M","C","V","D"))
> avocat=avocat[!is.na(avocat$CLMSEX),]
> attach(avocat)
> sum((ATTORNEY==2)&(CLMSEX=="F"),na.rm=TRUE)/sum(CLMSEX=="F",na.rm=TRUE)
[1] 0.5256065
> sum((ATTORNEY==2)&(CLMSEX=="M"),na.rm=TRUE)/sum(CLMSEX=="M",na.rm=TRUE)
[1] 0.4453925
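
The same two proportions can also be read from a single table; a quick equivalent check,

# proportion of ATTORNEY==2 (represented by a lawyer), by sex (rows)
prop.table(table(avocat$CLMSEX, avocat$ATTORNEY==2), margin=1)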

In other words, the proportion of men who were represented by a lawyer is 44.539%, and the proportion of women is 52.56%. We can visualize the contingency table below

> tab=xtabs(~ATTORNEY+CLMSEX,data=avocat)
> require(vcd)
> mosaic(tab, shade=TRUE, legend=TRUE)

When we run a (Gaussian) linear regression, we recover those proportions in the coefficient values: for men, we get 44.539% (the intercept, men being the reference level here), and for women, the coefficient gives the difference with men.

> reglm = lm((ATTORNEY==2) ~ CLMSEX, data=avocat)
> summary(reglm)
Coefficients:
            Estimate Std. Error t value Pr(>|t|)
(Intercept)  0.44539    0.02060  21.620  < 2e-16 ***
CLMSEXF      0.08021    0.02756   2.911  0.00367 **

That is what we obtain below

> sum(coefficients(reglm))
[1] 0.5256065
> coefficients(reglm)[1]
(Intercept)
0.4453925

We get a similar result with a logistic regression

> reglogit = glm((ATTORNEY==2) ~ CLMSEX, data=avocat,family=binomial)
> summary(reglogit)
Coefficients:
            Estimate Std. Error z value Pr(>|z|)
(Intercept) -0.21930    0.08312  -2.639  0.00833 **
CLMSEXF      0.32182    0.11097   2.900  0.00373 **

even if the coefficients do not have the same interpretation. By transforming those coefficients, we recover exactly the same values

> exp(sum(reglogit$coefficients[1:2])) /(1+exp(sum(reglogit$coefficients[1:2])))
[1] 0.5256065
> exp(reglogit$coefficients[1]) /(1+exp(reglogit$coefficients[1]))
(Intercept)
0.4453925
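
Equivalently, the same two probabilities can be obtained directly with predict on the fitted model; a quick check,

# fitted probabilities for men (M) and women (F)
predict(reglogit, newdata=data.frame(CLMSEX=c("M","F")), type="response")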

Sorry for the typo.

Non-Life Insurance Actuarial Science #3

On Tuesday, second part of the actuarial science course, with classification models in the morning; in the afternoon, we should start count models. The slides are online.

In the morning, I have to speak for 15 minutes at the IHP around 9 am, at a conference on Artificial Intelligence for Fintech and Insurtech: if the RER is a bit too slow between Luxembourg and Lozère, I may be 5 minutes late…
