Tag Archives: network

Solving the Chinese postman problem

A pre-Halloween post today. It actually started while I was in Barcelona: the kids wanted to go back to some store we had seen on the first day, in the Gothic quarter, and I could not remember where it was. I told myself it would take quite a while to walk through every street of the neighborhood. And I discovered that this is actually an old problem. In 1962, Meigu Guan was interested in a postman delivering mail to a number of streets such that the total distance walked by the postman was as short as possible. How could the postman ensure that the distance walked was a minimum?

A very close notion is the concept of a traversable graph, one that can be drawn without lifting the pen from the paper and without retracing any edge. Such a graph is said to have an Eulerian trail (yes, from Euler’s bridges problem). An Eulerian trail uses all the edges of a graph. For a graph to be Eulerian, all the vertices must have even degree.
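This condition is easy to check numerically. A minimal sketch with igraph, for some igraph object, say gw (a hypothetical placeholder here, the actual graphs come below):

library(igraph)
all(degree(gw) %% 2 == 0)   # TRUE if the (connected) graph has an Eulerian circuit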

An algorithm for finding an optimal Chinese postman route is:

  1. List all odd vertices.
  2. List all possible pairings of odd vertices.
  3. For each pairing find the edges that connect the vertices with the minimum weight.
  4. Find the pairings such that the sum of the weights is minimised.
  5. On the original graph add the edges that have been found in Step 4.
  6. The length of an optimal Chinese postman route is the sum of the weights of all the edges of the original graph, plus the total found in Step 4.
  7. A route corresponding to this minimum weight can then be easily found.
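To fix ideas, here is a minimal sketch of steps 1 to 4, assuming a weighted igraph object gw with exactly four odd vertices (so that there are only three possible pairings):

odd = V(gw)[degree(gw) %% 2 == 1]              # step 1: odd vertices
D = distances(gw, v=odd, to=odd)               # shortest-path distances between them
pairings = list(rbind(c(1,2),c(3,4)),          # step 2: the three possible pairings
                rbind(c(1,3),c(2,4)),
                rbind(c(1,4),c(2,3)))
total = sapply(pairings, function(p) sum(D[p]))  # step 3: total weight of each pairing
pairings[[which.min(total)]]                   # step 4: pairing with minimal total weight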

For the first steps, we can use the code from Hurley & Oldford’s Eulerian tour algorithms for data visualization and the PairViz package. First, we have to load some R packages

require(igraph)
require(graph)
require(eulerian)
require(GA)

Then use the following function from stackoverflow,

make_eulerian = function(graph){
  # add edges so that all vertices have an even degree (traversable graph)
  info = c("broken" = FALSE, "Added" = 0, "Successfull" = TRUE)
  is.even = function(x){ x %% 2 == 0 }
  search.for.even.neighbor = !is.even(sum(!is.even(degree(graph))))
  for(i in V(graph)){
    set.j = NULL
    uneven.neighbors = !is.even(degree(graph, neighbors(graph,i)))
    if(!is.even(degree(graph,i))){
      if(sum(uneven.neighbors) == 0){
        if(sum(!is.even(degree(graph))) > 0){
          info["broken"] = TRUE
          uneven.candidates <- !is.even(degree(graph, V(graph)))
          if(sum(uneven.candidates) != 0){
            set.j <- V(graph)[uneven.candidates][[1]]
          }else{
            info["Successfull"] <- FALSE
          }
        }
      }else{
        set.j <- neighbors(graph, i)[uneven.neighbors][[1]]
      }
    }else if(search.for.even.neighbor == TRUE & is.null(set.j)){
      info["Added"] <- info["Added"] + 1
      set.j <- neighbors(graph, i)[ !uneven.neighbors ][[1]]
      if(!is.null(set.j)){search.for.even.neighbor <- FALSE}
    }
    if(!is.null(set.j)){
      if(i != set.j){
        graph <- add_edges(graph, edges=c(i, set.j))
        info["Added"] <- info["Added"] + 1
      }
    }
  }
  list("graph" = graph, "info" = info)
}

Then, consider some network, with 12 nodes

g1 = graph(c(1,2, 1,3, 2,4, 2,5, 1,5, 3,5, 
4,7, 5,7, 5,8, 3,6, 6,8, 6,9, 9,11, 8,11, 
8,10, 8,12, 7,10, 10,12, 11,12), directed = FALSE)

To plot that network, use

V(g1)$name=LETTERS[1:12]
V(g1)$color=rgb(0,0,1,.4)
ly=layout.kamada.kawai(g1)
plot(g1,vertex.color=V(g1)$color,layout=ly)

Then we convert it into a traversable graph, by adding 5 (duplicated) edges

eulerian = make_eulerian(g1)
eulerian$info
     broken       Added Successfull 
          0           5           1 
g = eulerian$graph

as shown below

ly=layout.kamada.kawai(g)
plot(g,vertex.color=V(g)$color,layout=ly)
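We can check that all the degrees are now even,

all(degree(g) %% 2 == 0)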

We then split each of those 5 duplicated edges in two, and therefore add 5 artificial nodes

A=as.matrix(as_adj(g))
A1=as.matrix(as_adj(g1))
newA=lower.tri(A, diag = FALSE)*A1+upper.tri(A, diag = FALSE)*A
for(i in 1:sum(newA==2)) newA = cbind(newA,0)
for(i in 1:sum(newA==2)) newA = rbind(newA,0)
s=nrow(A)
for(i in 1:nrow(A)){
  Aj=which(newA[i,]==2)
  if(length(Aj)>0){
    for(j in Aj){
      newA[i,s+1]=newA[s+1,i]=1
      newA[j,s+1]=newA[s+1,j]=1
      newA[i,j]=1
      s=s+1
    }
  }
}

We get the following graph, where all nodes now have an even degree!

newg=graph_from_adjacency_matrix(newA)
newg=as.undirected(newg)
V(newg)$name=LETTERS[1:17]
V(newg)$color=c(rep(rgb(0,0,1,.4),12),rep(rgb(1,0,0,.4),5))
ly2=ly
transl=cbind(c(0,0,0,.2,0),c(.2,-.2,-.2,0,-.2))
for(i in 13:17){
  j=which(newA[i,]>0)
  lc=ly[j,]
  ly2=rbind(ly2,apply(lc,2,mean)+transl[i-12,])
}
plot(newg,layout=ly2)

Our network is now the following (the new nodes are drawn small because they do not really matter, they are only there for computational reasons)

plot(newg,vertex.color=V(newg)$color,layout=ly2,
     vertex.size=c(rep(20,12),rep(0,5)),
     vertex.label.cex=c(rep(1,12),rep(.1,5)))

Now we can get the optimal path

n <- LETTERS[1:nrow(newA)]
g_2 <- new("graphNEL",nodes=n)
for(i in 1:nrow(newA)){
  for(j in which(newA[i,]>0)){
    g_2 <- addEdge(n[i],n[j],g_2,1)
  }
}
etour(g_2,weighted=FALSE)
 [1] "A" "B" "D" "G" "E" "A" "C" "E" "H" "F" "I" "K" "H" "J" "G" "P" "J" "L" "K" "Q" "L" "H" "O" "F" "C"
[26] "N" "E" "B" "M" "A"

or

edg=attr(E(newg), "vnames")
ET=etour(g_2,weighted=FALSE)
parcours=trajet=rep(NA,length(ET)-1)
for(i in 1:length(parcours)){
  u=c(ET[i],ET[i+1])
  ou=order(u)
  parcours[i]=paste(u[ou[1]],u[ou[2]],sep="|")
  trajet[i]=which(edg==parcours[i])
}
parcours
 [1] "A|B" "B|D" "D|G" "E|G" "A|E" "A|C" "C|E" "E|H" "F|H" "F|I" "I|K" "H|K" "H|J" "G|J" "G|P" "J|P"
[17] "J|L" "K|L" "K|Q" "L|Q" "H|L" "H|O" "F|O" "C|F" "C|N" "E|N" "B|E" "B|M" "A|M"
trajet
 [1]  1  3  8  9  4  2  6 10 11 12 16 15 14 13 26 27 18 19 28 29 17 25 24  7 22 23  5 21 20

Let us try now on a real network of streets. Like Missoula, Montana.

I will not try to get the shapefile of the city; I will just try to replicate the picture above.

If you look carefully, you will see a problem: nodes 10 and 93 have an odd degree (3 here), so one strategy is to connect them (which explains the grey line).

But actually, to be more realistic, we start at 93, and we end at 10. Here is the optimal (shortest) path that goes through all the edges.

Now, we are ready for Halloween, to go through all the streets of the neighborhood!

Game of Friendship Paradox

In the introduction of my course next week, I will (briefly) mention networks, and I wanted to provide an illustration of the Friendship Paradox. In Network of Thrones (discussed in Beveridge and Shan (2016)), there is a dataset with the network of characters in Game of Thrones. The word “friend” might be an abuse of language here, but let’s continue to call connected nodes “friends”. The friendship paradox states that

People on average have fewer friends than their friends

This was discussed in Feld (1991) for instance, or Zuckerman & Jost (2001). Let’s try to see what it means here. First, let us get a copy of the dataset

download.file("https://www.macalester.edu/~abeverid/data/stormofswords.csv","got.csv")
GoT=read.csv("got.csv")
library(networkD3)
simpleNetwork(GoT[,1:2])

Because it is difficult for me to incorporate some d3js script in the blog, I will illustrate with a more basic graph,

Consider a vertex v\in V in the undirected graph G=(V,E) (with classical graph notations), and let d(v) denote the number of edges touching it (i.e. v has d(v) friends). The average number of friends of a random person in the graph is \mu = \frac{1}{n_V}\sum_{v\in V} d(v)=\frac{2 n_E}{n_V} The average number of friends that a typical friend has is
\frac{1}{n_V}\sum_{v\in V} \left(\frac{1}{d(v)}\sum_{v'\in E_v} d(v')\right)
But
\sum_{v\in V} \left(\frac{1}{d(v)}\sum_{v'\in E_v} d(v')\right)=\sum_{\{v,v'\} \in E} \left(\frac{d(v')}{d(v)}+\frac{d(v)}{d(v')}\right)=\sum_{\{v,v'\} \in E}\left(\frac{d(v')^2+d(v)^2}{d(v)d(v')}\right)=\sum_{\{v,v'\} \in E} \left(\frac{(d(v')-d(v))^2}{d(v)d(v')}+2\right)\geq\sum_{\{v,v'\} \in E} 2=\sum_{v\in V} d(v)
Thus, \frac{1}{n_V}\sum_{v\in V} \left(\frac{1}{d(v)}\sum_{v'\in E_v} d(v')\right)\geq \frac{1}{n_V}\sum_{v\in V} d(v)
Note that this can be related to the variance decomposition \text{Var}[X]=\mathbb{E}[X^2]-\mathbb{E}[X]^2, i.e. \frac{\mathbb{E}[X^2]}{\mathbb{E}[X]}=\mathbb{E}[X]+\frac{\text{Var}[X]}{\mathbb{E}[X]}\geq\mathbb{E}[X] (Jensen’s inequality). But let us get back to our network. The list of nodes is

M=(rbind(as.matrix(GoT[,1:2]),as.matrix(GoT[,2:1])))
nodes=unique(M[,1])

and for each of them, we can get the list of friends, and the number of friends

friends = function(x) as.character(M[which(M[,1]==x),2])
nb_friends = Vectorize(function(x) length(friends(x)))

as well as the number of friends friends have, and the average number of friends

friends_of_friends = function(y) (Vectorize(function(x) length(friends(x)))(friends(y)))
nb_friends_of_friends = Vectorize(function(x) mean(friends_of_friends(x)))

We can look at the density of the number of friends, for a random node,

Nb  = nb_friends(nodes)
Nb2 = nb_friends_of_friends(nodes)
hist(Nb,breaks=0:40,col=rgb(1,0,0,.2),border="white",probability = TRUE)
hist(Nb2,breaks=0:40,col=rgb(0,0,1,.2),border="white",probability = TRUE,add=TRUE)
lines(density(Nb),col="red",lwd=2)
lines(density(Nb2),col="blue",lwd=2)


and we can also compute the averages, just to check

mean(Nb)
[1] 6.579439
mean(Nb2)
[1] 13.94243

So, indeed, people on average have fewer friends than their friends.
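The Jensen-type bound mentioned above, \mathbb{E}[X^2]/\mathbb{E}[X]\geq\mathbb{E}[X], can also be checked directly on the degree sequence (this is the edge-perspective version of the statement, not exactly mean(Nb2), but the inequality goes the same way),

mean(Nb^2)/mean(Nb)   # average number of friends of a node reached through a random edge
mean(Nb)              # average number of friends of a random node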

Classification from scratch, neural nets 6/8

Sixth post of our series on classification from scratch. The latest one was on the lasso regression, which was still based on a logistic regression model, assuming that the variable of interest Y has a Bernoulli distribution. From now on, we will discuss techniques that did not originate from those probabilistic models, even if they might still have a probabilistic interpretation. Somehow. Today, we will start with neural nets.

Maybe I should start with a disclaimer. The goal is not to replicate well designed R functions, used for predictive modeling. It is simply to get a basic understanding of what’s going on.

Networks, nodes and edges

First of all, neural nets are nets, or networks. I will skip the parallel with the “neural” stuff because it does not help me understand what is happening (all apologies for my poor knowledge of biology, and cells)

So, it’s about some network. Networks have nodes, and edges that connect the nodes,

or maybe, to more specific (at least it helped me understanding what’s going on), some sort of flow network,

In such a network, we usually have (possibly multiple) sources (here \color{red}\{s_1,s_2,s_3\}), on the left, and a sink (here \{\color{blue}t\}), on the right. To continue with this metaphorical introduction, information from the sources should reach the sink. And usually, sources are explanatory variables, \{\mathbf{x}_1,\cdots,\mathbf{x}_p\}, and the sink is our variable of interest \mathbf{y}. And we want to create a graph, from the sources to the sink. We will have directed edges, with only one (unique) direction, on which we will put weights. It is not really a flow though, and the parallel with flow networks stops here. For instance, the most simple network will be the following one, with no layer (i.e. no node between the sources and the sink)

The output here is a binary variable y\in\{0,1\} (it can also be y\in\{-1,+1\} but here, it’s not a big deal). In our network, our output will be y\in(0,1), because it is easier to handle. For instance, consider y=f(\text{something}), for some function f taking values in (0,1). One can consider the sigmoid function f(x)=\frac{1}{1+e^{-x}}=\frac{e^{x}}{e^{x}+1} which is actually the logistic function (so we should not be surprised to get results somehow close to those of the logistic regression…). This function f is called the activation function, and there are thousands of such functions. If y\in\{-1,+1\}, people consider the hyperbolic tangent f(x)=\tanh(x)={\frac {(e^{x}-e^{-x})}{(e^{x}+e^{-x})}} or the inverse tangent function f(x)=\tan^{-1}(x). And as input to such a function, we consider a weighted sum of the incoming nodes. So here y_i=f\left(\sum_{j=1}^p\omega_j x_{j,i}\right). We can actually also add a constant, y_i=f\left(\omega_0+\sum_{j=1}^p\omega_j x_{j,i}\right). So far, we are not far away from the logistic regression. Except that our starting point there was a probabilistic model, in the sense that the output was interpreted as a probability (the probability that Y=1) and we wanted the model with the highest likelihood. But we’ll talk about the selection of weights later on. First, let us construct our first (very simple) neural network. First, we have the sigmoid function

sigmoid = function(x) 1 / (1 + exp(-x))
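The other activation functions mentioned above are available in base R, should we want to try them instead,

tanh_act   = function(x) tanh(x)   # hyperbolic tangent, for y in {-1,+1}
arctan_act = function(x) atan(x)   # inverse tangent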

Then consider some weights. In our model with seven explanatory variables, we need 7 weights. Or 8 if we include the constant term. Let us consider \mathbf{\omega}=\mathbf{1},

weights_0 = rep(1,8)
X = as.matrix(cbind(1,myocarde[,1:7]))
y_5_1 = sigmoid(X %*% weights_0)

that’s kind of stupid because all our predictions are equal to 1, here. Let us try something else, like \mathbf{\omega}=\widehat{\mathbf{\beta}}^{ols}. Those weights are optimized, somehow (for a linear model, at least), and we needed something to start with, to visualize what’s going on

weights_0 = lm(PRONO~.,data=myocarde)$coefficients

then use

y_5_1 = sigmoid(X %*% weights_0)

In order to see if we get a “good” prediction, let us plot the ROC curve, and compare it with the one we got with a (simple) logistic regression

library(ROCR)
pred = ROCR::prediction(y_5_1,myocarde$PRONO)
perf = ROCR::performance(pred,"tpr", "fpr")
plot(perf,col="blue",lwd=2)
reg = glm(PRONO~.,data=myocarde,family=binomial(link = "logit"))
y_0 = predict(reg,type="response")
pred0 = ROCR::prediction(y_0,myocarde$PRONO)
perf0 = ROCR::performance(pred0,"tpr", "fpr")
plot(perf0,add=TRUE,col="red")


That’s not bad for a very first attempt. Except that we’ve been cheating here, since we did use \mathbf{\omega}=\widehat{\mathbf{\beta}}^{ols}. How, for real, should we choose those weights?

Using a loss function

Well, if we want an “optimal” set of weights, we need to “optimize” an objective function. So we need to quantify the loss of a mistake, between the prediction, and the observation. Consider here a quadratic loss function

loss = function(weights){
  mean( (myocarde$PRONO-sigmoid(X %*% weights))^2) }

It might be stupid to use a quadratic loss function for a classification problem, but here, that is not the point. We just want to understand the algorithm we use, and the loss function \ell is just one ingredient. Then we want to solve \mathbf{\omega}^\star=\text{argmin}\left\lbrace\frac{1}{n}\sum_{i=1}^n\ell\left(y_i,f(\omega_0+\mathbf{x}_i^T\mathbf{\omega})\right)\right\rbrace Thus, consider

weights_1 = optim(weights_0,loss)$par

(where the starting point is the OLS estimate). Again, to see what’s going on, let us visualize the ROC curve

y_5_2 = sigmoid(X %*% weights_1)
pred = ROCR::prediction(y_5_2,myocarde$PRONO)
perf = ROCR::performance(pred,"tpr", "fpr")
plot(perf,col="blue",lwd=2)
plot(perf0,add=TRUE,col="red")


That’s not amazing, but again, that’s only a first step.

A single layer

Let us add a single layer in our network.

Those nodes are connected to the sources (with edges incoming from the sources), on the left, and then connected to the sink, on the right. Those nodes are not inter-connected. And again, for that network, we need edges (i.e. series of weights). For instance, on the network above, we did add one single layer, with (only) three nodes.

For such a network, the prediction formula is \mathbf{y}=f\left( \omega_0+ \sum_{h=1}^3\omega_h f_h\left(\omega_{h,0}+ \sum_{j=1}^p \omega_{h,j} x_j\right)\right)or more synthetically\mathbf{y}=f\left( \omega_0+ \sum_{h=1}^3 \omega_hf_h\left(\omega_{h,0}+ \mathbf{x}^T\mathbf{\omega}_h\right)\right)Usually, we consider the same activation function everywhere. Don’t ask me why, I find that weird.

Now, we have a lot of weights to choose. Let us use again OLS estimates

weights_1 <- lm(PRONO~1+FRCAR+INCAR+INSYS+PAPUL+PVENT,data=myocarde)$coefficients
X1 = as.matrix(cbind(1,myocarde[,c("FRCAR","INCAR","INSYS","PAPUL","PVENT")]))
weights_2 <- lm(PRONO~1+INSYS+PRDIA,data=myocarde)$coefficients
X2=as.matrix(cbind(1,myocarde[,c("INSYS","PRDIA")]))
weights_3 <- lm(PRONO~1+PAPUL+PVENT+REPUL,data=myocarde)$coefficients
X3=as.matrix(cbind(1,myocarde[,c("PAPUL","PVENT","REPUL")]))

In that case, we did specify the edges, and which sources (explanatory variables) should be used for each additional node. Actually, other techniques could have been used here, like a PCA: each node would then be one of the principal components. But we’ll use that idea later on…

X = cbind(sigmoid(X1 %*% weights_1), sigmoid(X2 %*% weights_2), sigmoid(X3 %*% weights_3))

But we’re not done here. Those were the weights from the sources to the nodes in the layer. We still need the weights from the nodes to the sink. Here, let us use a simple average,

weights = c(1/3,1/3,1/3)
y_5_3 <- sigmoid(X %*% weights)

Again, we can plot the ROC curve to see what we’ve done…

pred = ROCR::prediction(y_5_3,myocarde$PRONO)
perf = ROCR::performance(pred,"tpr", "fpr")
plot(perf,col="blue",lwd=2)
plot(perf0,add=TRUE,col="red")

On back propagation

Now, we need some optimal selection of those weights. Observe that with only 3 nodes, there are already (7+1)\times3+3=27 parameters in that model! Clearly, parsimony is not the major issue when you start using neural nets! If p(\mathbf{x})=f\left( \omega_0+ \sum_{h=1}^3 \omega_hf_h\left(\omega_{h,0}+ \mathbf{x}^T\mathbf{\omega}_h\right)\right) we want to solve \mathbf{\omega}^\star=\text{argmin}\left\lbrace\frac{1}{n}\sum_{i=1}^n\ell\left(y_i,p(\mathbf{x}_i)\right)\right\rbrace for some loss function, which is \mathbf{\omega}^\star=\text{argmin}\left\lbrace\frac{1}{n}\sum_{i=1}^n (y_i-p(\mathbf{x}_i))^2 \right\rbrace for the quadratic norm, or \mathbf{\omega}^\star=\text{argmin}\left\lbrace-\frac{1}{n}\sum_{i=1}^n (y_i\log p(\mathbf{x}_i)+[1-y_i]\log [1-p(\mathbf{x}_i)]) \right\rbrace if we want to use the cross-entropy.

For convenience, let us center all the variables we create, otherwise we get numerical problems.

center = function(z) (z-mean(z))/sd(z)
loss = function(weights){
weights_1 = weights[0+(1:7)]
weights_2 = weights[7+(1:7)]
weights_3 = weights[14+(1:7)]
weights_  = weights[21+1:4]
X1=X2=X3=as.matrix(myocarde[,1:7])
Z1 = center(X1 %*% weights_1)
Z2 = center(X2 %*% weights_2)
Z3 = center(X3 %*% weights_3)
X = cbind(1,sigmoid(Z1), sigmoid(Z2), sigmoid(Z3))
mean( (myocarde$PRONO-sigmoid(X %*% weights_))^2)}
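For completeness, here is a sketch of the same objective with the cross-entropy loss mentioned above (the rest of the post sticks to the quadratic loss; PRONO is assumed to be coded 0/1, as in the quadratic loss),

loss_entropy = function(weights){
  weights_1 = weights[0+(1:7)]
  weights_2 = weights[7+(1:7)]
  weights_3 = weights[14+(1:7)]
  weights_  = weights[21+1:4]
  X1=X2=X3=as.matrix(myocarde[,1:7])
  Z1 = center(X1 %*% weights_1)
  Z2 = center(X2 %*% weights_2)
  Z3 = center(X3 %*% weights_3)
  X = cbind(1,sigmoid(Z1), sigmoid(Z2), sigmoid(Z3))
  p = sigmoid(X %*% weights_)
  -mean( myocarde$PRONO*log(p) + (1-myocarde$PRONO)*log(1-p) )}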

Now that we have our objective function, consider some starting points. We can consider weights from a PCA, and then use a gradient descent algorithm,

library(factoextra)
pca = princomp(myocarde[,1:7])
W = get_pca_var(pca)$contrib
weights_0 = c(W[,1],W[,2],W[,3],c(-1,rep(1,3)/3))
weights_opt = optim(weights_0,loss)$par
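Note that optim() uses the Nelder-Mead simplex by default; if we really want a gradient-based routine, a quasi-Newton alternative could be, for instance,

weights_opt_bfgs = optim(weights_0, loss, method="BFGS",
                         control=list(maxit=500))$par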

The prediction is then obtained using

weights_1 = weights_opt[0+(1:7)]
weights_2 = weights_opt[7+(1:7)]
weights_3 = weights_opt[14+(1:7)]
weights_  = weights_opt[21+1:4]
X1=X2=X3=as.matrix(myocarde[,1:7])
Z1 = center(X1 %*% weights_1)
Z2 = center(X2 %*% weights_2)
Z3 = center(X3 %*% weights_3)
X = cbind(1,sigmoid(Z1), sigmoid(Z2), sigmoid(Z3))
y_5_4 = sigmoid(X %*% weights_)

And as previously, why not plot the ROC curve of that model

pred = ROCR::prediction(y_5_4,myocarde$PRONO)
perf = ROCR::performance(pred,"tpr", "fpr")
plot(perf,col="blue",lwd=2)
plot(perf0,add=TRUE,col="red")


That’s not too bad. But with 27 coefficients, that’s what we would expect, no?

Using nnet() function

That’s more or less what is done in neural nets functions. Let us now have a look at some dedicated R functions.

library(nnet)
myocarde_minmax = myocarde
minmax = function(z) (z-min(z))/(max(z)-min(z))
for(j in 1:7) myocarde_minmax[,j] = minmax(myocarde_minmax[,j])

Here, variables are linearly transformed, to take values in (0,1). Then we can construct a neural network with one single layer, and three nodes,

model_nnet = nnet(PRONO~.,data=myocarde_minmax,size=3)
summary(model_nnet)
a 7-3-1 network with 28 weights
options were -
 b->h1 i1->h1 i2->h1 i3->h1 i4->h1 i5->h1 i6->h1 i7->h1 
 -9.60  -1.79  21.00  14.72 -20.45  -5.05  14.37 -17.37 
 b->h2 i1->h2 i2->h2 i3->h2 i4->h2 i5->h2 i6->h2 i7->h2 
  4.72   2.83  -3.37  -1.64   1.49   2.12   2.31   4.00 
 b->h3 i1->h3 i2->h3 i3->h3 i4->h3 i5->h3 i6->h3 i7->h3 
 -0.58  -6.03  25.14  18.03  -1.19   7.52 -19.47 -12.95 
  b->o  h1->o  h2->o  h3->o 
 -1.32  29.00 -10.32  26.27

Here, it is the complete network. And actually, there are (online) some functions that can be used to visualize that network

library(devtools)
source_url('https://gist.githubusercontent.com/fawda123/7471137/raw/466c1474d0a505ff044412703516c34f1a4684a5/nnet_plot_update.r')
plot.nnet(model_nnet)


Nice, isn’t it? We clearly see the intermediary layer, with three nodes, and on top the constants. Edges are the plain lines, the darker, the heavier (in terms of weights).

Using neuralnet()

Other R functions can actually be considered.

library(neuralnet)
model_nnet = neuralnet(formula(glm(PRONO~.,data=myocarde_minmax)),
myocarde_minmax,hidden=3, act.fct = sigmoid)
plot(model_nnet)


Again, for the same network structure, with one (hidden) layer, and three nodes in it.

Network with multiple layers

The good thing is that it is actually possible to add more layers. Like two layers. Nodes from the first layer are no longer connected with the sink, but with nodes in the second layer, and those nodes will then be connected to the sink. We now have something like
p(\mathbf{x})=f\left( \omega_0+ \sum_{h=1}^3 \omega_h f_h\left(\omega_{h,0}+ \mathbf{z}_h^T\mathbf{\omega}_h\right)\right) where \mathbf{z}_h=f\left( \omega_{h,0}+ \sum_{j=1}^{k_h} \omega_{h,j} f_{h,j}\left(\omega_{h,j,0}+ \mathbf{x}^T\mathbf{\omega}_{h,j}\right)\right). I may be rambling here (a little bit) but that’s a lot of parameters. Here is the visualization of such a network,

library(neuralnet)
model_nnet = neuralnet(formula(glm(PRONO~.,data=myocarde_minmax)),
myocarde_minmax,hidden=c(3,2), act.fct = sigmoid) # two hidden layers (sizes chosen here arbitrarily)
plot(model_nnet)

Application

Let us get back to our simple dataset, with only two covariates.

library(neuralnet)
df_minmax =df
df_minmax$y=(df_minmax$y=="1")*1
minmax = function(z) (z-min(z))/(max(z)-min(z))
for(j in 1:2) df_minmax[,j] = minmax(df[,j])
X = as.matrix(cbind(1,df_minmax[,1:2]))

Consider only one layer, with two nodes

model_nnet = neuralnet(formula(lm(y~.,data=df_minmax)),
df_minmax,hidden=c(2))
plot(model_nnet)


Here, we did not specify it, but the activation function is the sigmoid (actually, it is called logistic here)

model_nnet$act.fct
function (x) 
{
    1/(1 + exp(-x))
}
 
attr(,"type")
[1] "logistic"
f=model_nnet$act.fct

The weights (on the figure) can be obtained using

w0 = model_nnet$weights[[1]][[2]][,1]
w1 = model_nnet$weights[[1]][[1]][,1]
w2 = model_nnet$weights[[1]][[1]][,2]

Now, to get our prediction, we should use
p(\mathbf{x})=f\left( \omega_0+ \omega_1 f(\omega_{1,0}+ \mathbf{x}^T\mathbf{\omega}_{1,1:2})+\omega_2 f(\omega_{2,0}+ \mathbf{x}^T\mathbf{\omega}_{2,1:2})\right)
which can be obtained using

f(cbind(1,f(X%*%w1),f(X%*%w2))%*%w0)
              [,1]
 [1,] 0.7336477343
 [2,] 0.7317999050
 [3,] 0.7185803540
 [4,] 0.7404005280
 [5,] 0.7518482779
 [6,] 0.4939774149
 [7,] 0.4965876378
 [8,] 0.7101714888
 [9,] 0.5050760026
[10,] 0.5049877644

Unfortunately, it is not the output of the model here,

neuralnet::prediction(model_nnet)
Data Error:	0;
$rep1
       x1           x2              y
1  0.1250 0.0000000000  0.02030470787
2  0.0625 0.1176470588  0.89621706711
3  0.9375 0.2352941176  0.01995171956
4  0.0000 0.4705882353  1.10849420363
5  0.5000 0.4705882353 -0.01364966058
6  0.3125 0.5294117647 -0.02409150561
7  0.6875 0.8235294118  0.93743057765
8  0.3750 0.8823529412  1.01320924782
9  1.0000 0.9058823529  1.04805134309
10 0.5625 1.0000000000  1.00377379767

If anyone has a clue, I’d be glad to know what went wrong here… I find it odd to have outputs outside the (0,1) interval, but the output is neither p(\mathbf{x})=\omega_{0,0}+ \omega_{0,1} f(\omega_{1,0}+ \mathbf{x}^T\mathbf{\omega}_{1,1:2})+\omega_{0,2} f(\omega_{2,0}+ \mathbf{x}^T\mathbf{\omega}_{2,1:2})

cbind(1,f(X%*%w1),f(X%*%w2))%*%w0
                [,1]
 [1,]  1.01320924782
 [2,]  1.00377379767
 [3,]  0.93743057765
 [4,]  1.04805134309
 [5,]  1.10849420363
 [6,] -0.02409150561
 [7,] -0.01364966058
 [8,]  0.89621706711
 [9,]  0.02030470787
[10,]  0.01995171956

(to be continued…)

Traffic Flow of Kota Kinabalu (with R)

This morning, we had our first practicals on network flows, using an example mentioned in some papers published by Noraini Abdullah and Ting Kien Hua, max flow min cut theorem to minimize traffic congestion in Kota Kinabalu and application of the Shortest Path and Maximum Flow with Bottleneck in Traffic Flow of Kota Kinabalu. From the roads mentioned in the articles, I did my best to locate the nodes on a map,

m=matrix(c(0,5.995910, 116.105520,
1,5.992737, 116.093718,
2,5.992066, 116.109883,
3,5.976947, 116.095760,
4,5.985766, 116.091580,
5,5.988940, 116.080112,
6,5.968318, 116.080764,
7,5.977454, 116.075460,
8,5.974226, 116.073604,
9,5.969651, 116.073753,
10,5.972341, 116.069270,
11,5.978818, 116.072880),3,12)

which can be visualized below

library(OpenStreetMap)
map = openmap(c(lat= 6.000, lon= 116.06),
c(lat= 5.960, lon= 116.12))
map=openproj(map)
plot(map)
points(t(m[3:2,]),col="black", pch=19, cex=3 )
text(t(m[3:2,]),c("s",1:10,"t"),col="white")

If the source is realistic (up north), I do not feel very comfortable with the location of the sink (on the west). But let’s pretend it’s fine (to do the maths, at least).

To extract information about edge capacities on that network, use the following code, which will extract the three tables from the paper

library(devtools)
install_github("ropensci/tabulizer")
library(tabulizer)
location <- 'http://www.jistm.com/PDF/JISTM-2017-04-06-02.pdf'
out <- extract_tables(location)

with Windows, it seems to be necessary to download another package first

library(devtools)
install_github("ropensci/tabulizerjars")
install_github("ropensci/tabulizer")
library(tabulizer)
location <- 'http://www.jistm.com/PDF/JISTM-2017-04-06-02.pdf'
out <- extract_tables(location)

Now we can get our data frame with capacities

B1=as.data.frame(out[[2]])
B2=as.data.frame(out[[3]])
E=data.frame(from=B1[3:20,"V3"],
to=B1[3:20,"V4"])
E=E[-c(6,8),]
capacity=as.character(B2$V3[-1])
capacity[6]="843"
capacity[4]="2913"
E$capacity=as.numeric(capacity)

We can add those edges on our map (without the arrows to indicate the direction, it would be too heavy to read)

plot(map)
points(t(m[3:2,]),col="black", pch=19, cex=3 )
B=data.frame(i=as.character(c("s",paste("V",1:10,sep=""),"t")),
x=m[3,],y=m[2,])
for(i in 1:nrow(E)){
i1=which(B$i==as.character(E$from[i]))
i2=which(B$i==as.character(E$to[i]))
segments(B[i1,"x"],B[i1,"y"],B[i2,"x"],B[i2,"y"],lwd=3)
}
text(t(m[3:2,]),c("s",1:10,"t"),col="white")

To get the graph with capacities, an alternative is to use

library(igraph)
g=graph_from_data_frame(E)
E(g)$label=E$capacity
plot(g)

but it does not respect geographical locations of nodes. It can actually be done using

plot(g, layout=as.matrix(B[,c("x","y")]))

To get a better understanding of the capacities of the road, use

plot(g, layout=as.matrix(B[,c("x","y")]),
edge.width=E$capacity/200)

From that network with capacities, the goal is to determine maximum flow on that network, from the source to the sink. This can be done with R using

> (m=max_flow(graph=g, source="s", target="t"))
$value
[1] 2571

$flow
[1] 1191 1380 1422 1380 231 0 231 0 1149 1422 1149 0 0 1149 1422
[16] 1149

Our maximum flow is here 2571, which is different from what is actually claimed in both papers, max flow min cut theorem to… and application of the Shortest Path… (“the maximum flow for the capacitated network with 12 nodes and 16 edges of the selected scope in this study was 2598 vehicles per hour”), where there are clearly typos since the values in the table and on the graph are different. Here I did use the ones from the tables.
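Since the papers rely on the max-flow min-cut theorem, we can also check that the minimum cut has the same value (it should return the 2571 obtained above),

min_cut(graph=g, source="s", target="t", value.only=TRUE)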

E$flux1=m$flow
E(g)$label=E$flux1
plot(g, layout=as.matrix(B[,c("x","y")]),
edge.width=E$flux1/200)

That is nice, but rather odd. Actually, a much simpler flow can be considered, with the same global value

E$flux2=c(1422,1149,1422,1149,0,0,0,0,
1149,1422,1149,0,0,1149,1422,1149)
E(g)$label=E$flux2
plot(g, layout=as.matrix(B[,c("x","y")]),
edge.width=E$flux2/200)

Nice, isn’t it? It is actually possible to do exactly the same with another paper they wrote, on the same city, traffic congestion problem of road networks in Kota Kinabalu.

location <- 'http://www.worldresearchlibrary.org/up_proc/pdf/999-150486366625-30.pdf'
out <- extract_tables(location)
dim(out[[3]])
B1=as.data.frame(out[[3]])
E=data.frame(from=B1[2:61,"V2"],
to=B1[2:61,"V3"],
capacity=B1[2:61,"V4"])
E$capacity=as.numeric(
as.character(E$capacity))
library(igraph)
g=graph_from_data_frame(E)
m=max_flow(graph=g,
source="S",
target="T")
E$flux1=m$flow
E(g)$label=E$flux1
plot(g,
edge.width=E$flux1/200,
edge.arrow.size=0.15)

Here the value of the maximal flow is 4017, just as they found in the original paper.

Traffic Flow of Kota Kinabalu (Malaysia)

For the second practicals of our course on networks and flows, we will study traffic flow of Kota Kinabalu (Malaysia), following several papers published by Noraini Abdullah and Ting Kien Hua, such as max flow min cut theorem to minimize traffic congestion in Kota Kinabalu, traffic congestion problem of road networks in Kota Kinabalu and application of the Shortest Path and Maximum Flow with Bottleneck in Traffic Flow of Kota Kinabalu.


Métro: centrality and robustness

Tomorrow morning, we will have a practical session for the course on networks, flows and transportation. In particular, following the work of Sybil Derrible, we will start by studying centrality in various subway systems, but also robustness. The adjacency matrices of about thirty subway networks around the world are available online in an xls file. To save a bit of time, the code to create an adjacency matrix can be the following

loc="/data/Metro_Networks_Adjacency.xls"
library(xlsx)
E=read.xlsx(loc,"StPetersburg")
n=nrow(E)
nom=as.character(E[3:(n-2),1])
Adj=E[3:(n-2),(4:ncol(E)-1)]
Adj[is.na(Adj)]=0
Adj=as.matrix(Adj)
colnames(Adj)=rownames(Adj)=nom

We are then ready to work with the network,

library(igraph)
iflo=graph_from_adjacency_matrix(Adj,mode = "undirected")
plot(iflo)

We will use the notions seen in class on centrality but, more importantly, we will work on Quantifying the robustness of metro networks, inspired by The complexity and robustness of metro networks. Several useful functions are already implemented in R, such as assortativity.
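A few of these quantities are one-liners with igraph; for instance, on the St Petersburg network loaded above (a quick sketch),

assortativity_degree(iflo)   # degree assortativity
diameter(iflo)               # diameter of the network
mean(degree(iflo))           # average degree
betweenness(iflo)            # betweenness centrality of each station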


Networks with R

In order to practice with network data in R, we have been playing with the Padgett (1994) Florentine wedding dataset (discussed in the lecture). The dataset is available from

> library(network)
> data(flo)
> nflo=network(flo,directed=FALSE)
> plot(nflo, displaylabels = TRUE,
+ boxed.labels =
+ FALSE)

The next step was to move from the network package to igraph. Since we have the adjacency matrix, we can use it

> library(igraph)
> iflo=graph_from_adjacency_matrix(flo,
+ mode = "undirected")
> plot(iflo)

The good thing is that a lot of functions are available, for instance we can get shortest paths, between two specific nodes. And we can give appropriate colors to the nodes that we’ll cross

> AP=all_shortest_paths(iflo,
+ from="Peruzzi",
+ to="Ginori")
> L=AP$res[[1]]
> V(iflo)$color="yellow"
> V(iflo)$color[L[2:4]]="light blue"
> V(iflo)$color[L[c(1,5)]]="blue"
> plot(iflo)

We can also visualize edges, but I found it slightly more complicated (to extract edges from the output)

> liens=c(paste(as.character(L)[1:4],
+ "--",
+ as.character(L)[2:5],sep=""),
+ paste(as.character(L)[2:5],
+ "--",
+ as.character(L)[1:4],sep=""))
> df=as.data.frame(ends(iflo,E(iflo)))
> names(df)=c("src","target")
> lstn=sort(unique(c(as.character(df[,1]),as.character(df[,2]),"Pucci")))
> Eliens=paste(as.numeric(factor(df[,1],levels=lstn)),"--",
+ as.numeric(factor(df[,2],levels=lstn)),sep="")
> EU=unlist(lapply(Eliens,function(x) x%in%liens))
> E(iflo)$color=c("grey","black")[1+EU]
> plot(iflo)

But it works. It is also possible to use some D3js visualization

> library( networkD3 )
> simpleNetwork (df)

Then the next question was to add a vertex to the network. The most simple way to do it is probably through the adjacency matrix

> flo2=flo
> flo2["Pucci","Bischeri"]=1
> flo2["Bischeri","Pucci"]=1
> nflo2=network(flo2,directed=FALSE)
> plot(nflo2, displaylabels = TRUE,
+ boxed.labels =
+ FALSE)
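With igraph, the same edge could be added directly on the graph object (a sketch, using add_edges()),

> iflo2=add_edges(iflo,c(which(V(iflo)$name=="Pucci"),
+ which(V(iflo)$name=="Bischeri")))
> plot(iflo2)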

Then, we’ve been playing with centrality measures.

> plot(iflo,vertex.size=betweenness(iflo))

The goal was to see how related they were. Here, for all of them, “Medici” is the central node. But what about the others?

> B=betweenness(iflo)
> C=closeness(iflo)
> D=degree(iflo)
> E=eigen_centrality(iflo)$vector
> base=data.frame(betw=B,close=C,deg=D,eig=E)
> cor(base)
           betw     close       deg       eig
betw  1.0000000 0.5763487 0.8333763 0.6737162
close 0.5763487 1.0000000 0.7572778 0.7989789
deg   0.8333763 0.7572778 1.0000000 0.9404647
eig   0.6737162 0.7989789 0.9404647 1.0000000

Those measures are quite correlated. It is also possible to use hierarchical clustering to visualize how close those centrality measures are

> H=hclust(dist(t(base)),
+ method="ward")
> plot(H)

Instead of looking at the values of the centrality measures, it is possible to look at ranks

> rbase=base
> for(i in 1:4) rbase[,i]=rank(base[,i])
> H=hclust(dist(t(rbase)),
+ method="ward")
> plot(H)

Here the eigenvector measure is very close to the degree of vertices.

Finally, it is possible to seek clusters (in the context of coalitions here, in case a war should start between those families)

> kc <- fastgreedy.community ( iflo )
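The number of communities, and their sizes, can be checked with

> length(kc)
> sizes(kc)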

Here we have 3 classes (+1 for the node that is disconnected from the other families)

> V(iflo)$color=c("yellow","orange",
+ "light blue")[membership ( kc )]
> plot(iflo)

> plot(kc,iflo)