Tag Archives: network
Collaborative insurance, graph theory and actuarial science
Next Tuesday, I will give a (remote) talk at the SCOR colloquium on "Actuarial science, network effects and graph theory".
My slides are online (I was asked to speak about collaborative insurance, graph theory and actuarial science). I will present our paper Collaborative Insurance Sustainability and Network Structure, but I may also take the opportunity to mention other articles, such as modelling contagion, or networks to reinvent insurance.
Collaborative Insurance Sustainability and Network Structure
A second version of Collaborative Insurance Sustainability and Network Structure is now available on arXiv.
The peer-to-peer (P2P) economy has been growing with the advent of the Internet, with well-known brands such as Uber or Airbnb being examples thereof. In the insurance sector the approach is still in its infancy, but some companies have started to explore P2P-based collaborative insurance products (e.g. Lemonade in the U.S. or Inspeer in France). The actuarial literature only recently started to consider those risk-sharing mechanisms, as in Denuit and Robert (2021) or Feng et al. (2021). In this paper, we describe and analyse such a P2P product, with some reciprocal risk-sharing contracts. Here, we consider the case where policyholders still have an insurance contract, but the first self-insurance layer, below the deductible, can be shared with friends. We study the impact of the shape of the network (through the distribution of degrees) on the risk reduction. We also consider some optimal setting of the reciprocal commitments, and discuss the introduction of contracts with friends of friends to mitigate a possible drawback: some policyholders may not have enough connections to exchange risks.
Webinar on collaborative insurance and graph theory
On Tuesday, January 28, with Christian Yann Robert, we will give a talk on "Collaborative insurance: graph theory, machine learning and actuarial science" at the Institut Louis Bachelier and the Institut des Actuaires, in Paris.
Collaborative insurance is a new form of insurance, and of relationship between insurer and policyholders, that uses new technologies to involve policyholders more closely in the process and in the business model. Through transparency, it aims to build a strong relationship of trust between the parties. It can take different forms, ranging from a simple redistribution of profits to peer-to-peer management by the policyholders themselves, gathered in communities of interest. Several attempts have been made over the past five or six years in various countries: some in Europe have failed, others are still finding their feet.
In the United States, Lemonade is a real success. Premiums are split roughly as follows: 20% for management and the company's remuneration, 40% for claims (with a redistribution in case of a technical profit) and 40% for reinsurance of large claims. This new form of mutualisation raises new pricing and reserving questions, depending on how the communities of policyholders are formed. Graph theory helps to understand the capacity for mutualisation according to the structure of those communities.
Insurance data science: Networks
At the Summer School of the Swiss Association of Actuaries, in Lausanne, I will start talking about networks and insurance this Friday. Slides are available online
Networks to reinvent insurance?
The theory of networks, or graphs, was born in 1735, following the work of Leonhard Euler, who tried to find a walk – starting from a given point – that would bring us back to that point by passing once and only once through each of the seven bridges of the city of Königsberg. These networks can be compared to metro networks, made of stations (the nodes), linked or not pairwise by rails, or more generally to a road network, which can give rise to congestion studies, for example. But today, networks are mainly social, connecting people through friendship, professional, family, or monetary ties. Network analysis makes it possible to create relatively homogeneous communities, willing to share a risk, recreating a form of mutualisation.
Network and credit
In genealogy, we have hierarchical networks, a child being linked to his or her parents, who are themselves linked to their parents. In sociology, social networks make it possible to analyse the links between individuals (or organisations) within a group. Friendships can be studied in a schoolyard (a link being, for instance, an invitation to a birthday party) or e-mail exchanges in a company (the Enron e-mail database has been widely used, with over 180,000 messages exchanged between 36,000 employees[i]). Figure 1 shows two networks of 20 individuals (A, B, …, T).
Figure 1: Random networks, 20 nodes (Watts-Strogatz and Barabási)
In a Facebook or LinkedIn type vision, we will say that E and F are linked, in the sense of "friends", if there is a segment linking points E and F. A network can be directed, for example if we study exchanges of messages (E wrote to F), or money loans (E lent money to F). While historically only adjacency was studied (the existence or not of links), we can now add weights, for example the amount of a financial loan. Babutsidze (2012) thus studies the positions of French and German banks in interbank lending within the European zone (the nodes are then the banks). The study of networks within village communities in developing countries has led to a better understanding of informal finance mechanisms. Banerjee et al. (2013) study the dissemination of information in a network, and more particularly microfinance loans.
While networks are useful for better organizing microcredit, CNN noted in 2015 that Facebook allowed credit organizations to use a borrower's social network to determine whether or not he or she represents a good credit risk. In particular, if the friends' credit scores were too low, a person could be denied credit. This situation is dangerous because of particular properties of networks, and more specifically the friendship paradox.
From the very small world to the friendship paradox
In 1929, Frigyes Karinthy hypothesized that any person on Earth could be connected to any other person by a chain of individual relationships involving at most 6 links. "We should select anyone from the world's 1.5 billion people, anyone, anywhere. It seems that, using no more than five individuals, one of whom is a personal acquaintance, he could contact the chosen individuals using nothing other than the network of personal acquaintances." This theory of six handshakes originated in a short story. It was not until the work of Michael Gurevich in the 1960s, and then Stanley Milgram ten years later, that the first attempts to quantify these relationships appeared, under the name "Small World Problem".
While Leskovec & Horvitz (2008) confirmed this order of magnitude, by analyzing several billion messages exchanged on the Windows Live Messenger platform, Bhagat et al. (2016) more recently estimated that any two people on Facebook were separated, on average, by three and a half degrees. On the random network on the left, a person has, on average, 2 friends, while a randomly chosen friend has, on average, 2.25 friends. On the right-hand network, the gap is even larger: there too a person has, on average, 2 friends, but a randomly chosen friend has, on average, more than 4 friends.
Figure 2: Random networks, 500 nodes (Watts-Strogatz and Barabási)
This paradox, observed in 1991 by the sociologist Scott Feld, is very easy to demonstrate. Heuristically, we can see a link with the probabilistic property \frac{\mathbb{E}[X^2]}{\mathbb{E}[X]}=\mathbb{E}[X]+\frac{\text{Var}[X]}{\mathbb{E}[X]}>\mathbb{E}[X] where the term on the left is, heuristically, the total number of friends of my friends divided by my number of friends, that is, the average number of friends of a randomly chosen friend. The difference is all the larger as the dispersion of the number of friends increases. While the left-hand network is very dense, the right-hand one has a power-law property: the distribution of the number of friends follows a power law (or Zipf's law, or Pareto's law). Figure 3 shows the distribution of the number of friends on a network, on a double logarithmic scale: linearity indicates a power-law distribution. This type of distribution can be found in a very large number of networks, in particular Facebook, as shown by Wohlgemuth & Matache (2014).
Figure 3: Distribution of the number of friends on simulated random networks (Watts-Strogatz and Barabási in red)
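To make the friendship paradox concrete, here is a minimal R sketch (using igraph; the generators and parameters are illustrative, they are not those used for the figures) comparing the average number of friends of a node with the average number of friends of a randomly chosen friend, i.e. \mathbb{E}[X] versus \mathbb{E}[X^2]/\mathbb{E}[X], on a Watts-Strogatz and on a Barabási-Albert graph.
library(igraph)
set.seed(123)
# a small-world graph and a scale-free (preferential attachment) graph
g_ws = sample_smallworld(dim = 1, size = 500, nei = 1, p = 0.05)
g_ba = sample_pa(n = 500, m = 1, directed = FALSE)
friend_paradox = function(g){
  d = degree(g)
  # mean degree of a node vs mean degree of a random friend (E[X^2]/E[X])
  c(friends = mean(d), friends_of_friends = sum(d^2)/sum(d))
}
friend_paradox(g_ws)
friend_paradox(g_ba)
On the scale-free graph the gap between the two numbers is typically much larger, which is exactly the dispersion effect discussed above.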
The classic interpretation is that some people are central in the network, with a very large number of connections. This property is well known in marketing (one then speaks of a "peer effect") but it also has impacts in risk management or public health. Christakis & Fowler (2010) have shown that influenza epidemics can be detected almost two weeks in advance by monitoring the infection in a social network. In particular, analysing the health of the central people in a network is "an ideal way to predict outbreaks, but detailed information doesn't exist for most groups, and to produce it would be time-consuming and costly". To return to the example of the credit score, if it turns out to be correlated with the number of friends, the friendship paradox makes it dangerous to use the friends' scores as an indicator of an individual's risk!
The importance of homophily
Another important feature of networks is the notion of homophily, introduced in sociology in 2001 by two important articles, and corresponding to the tendency to be connected to one's peers. McPherson et al. (2001) assumed that similarity generates connection, and therefore that people's personal networks are homogeneous across many socio-demographic, behavioural and intrapersonal characteristics. Moody (2001) studied friendships in elementary school playgrounds in the United States, with a focus on interracial friendships. Easley & Kleinberg (2010) present a number of consequences of homophily, ranging from the seating arrangements at business dinners to the granting of credit in the United States. Measuring homophily amounts to asking, given pre-existing groups (by gender, age, socio-professional category, etc.), how the links are distributed between groups and within groups.
Figure 4: Low homophily (left) and high homophily (right)
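As a hedged illustration (the graph and the group labels below are simulated, they are not those of Figure 4), homophily with respect to a categorical attribute can be quantified in R with igraph's nominal assortativity coefficient: a value close to 1 means that links are almost all within groups, a value close to 0 (or negative) means that the groups mix.
library(igraph)
set.seed(1)
g = sample_gnp(60, p = 0.1)                   # a purely random graph
grp = sample(1:3, vcount(g), replace = TRUE)  # simulated group membership (e.g. a rating class)
# close to 0 here, since links ignore the groups; close to 1 would indicate strong homophily
assortativity_nominal(g, types = grp, directed = FALSE)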
In an insurance context, an actuary seeks to create tariff classes, groups that are homogeneous in terms of risk, according to explanatory variables (the so-called tariff variables). People who live in the same place, drive the same types of vehicles, and have the same characteristics, are likely to be in the same class. But if homophily exists in a population, a tariff group could perhaps be observed directly on a network of friends. Why not then consider creating groups within a network?
Using networks in insurance
In this spirit, Friendsurance was launched in Germany in 2010 and had more than 100,000 policyholders in 2018. In France, a short-lived collaborative insurance experiment was launched in 2015 with Inspeer[ii], offering to share damage insurance deductibles (in car or home insurance) with friends. These types of collaborative insurance, sometimes called peer-to-peer insurance, are based on the formation of small groups by a broker. One part of the insurance premiums paid goes into a group fund, the other part to a third-party insurance company. Minor damage suffered by a policyholder is first covered by this group fund. For claims exceeding the deductible, the usual insurer steps in. A group can be formed by the policyholders themselves, forming a social network a bit like Facebook. In this model, the only requirement is that all group members must have the same type of insurance (e.g. liability insurance with legal expenses cover).
As Schiller (2013) noted, this type of mechanism has many virtues, the first being to reduce costs and the risk of fraud. There is less temptation to cheat on the cost of a claim when the risk is borne by family members or friends. The anonymity of mutuality that exists in the law of large numbers disappears. But aren't we reinventing a version 2.0 of the tontine associations, with the strong return of risk sharing within close-knit communities?
References
Joshua Angrist. The perils of peer effects. Labour Economics, 30, 98-108, 2014.
Zakaria Babutsidze. Positions of French and German Banks in European interbank lending network. OFCE, March 2012.
Abhijit Banerjee, Arun Chandrasekhar, Esther Duflo & Matthew Jackson. Diffusion of Microfinance. Science, 341, 2013.
Smriti Bhagat, Moira Burke, Carlos Diuk, Ismail Onur Filiz & Sergey Edunov. Three and a half degrees of separation. Facebook Research, 2016.
Ananya Bhattacharya. Facebook patent: Your friends could help you get a loan – or not. CNN, August 4, 2015.
Nicholas Christakis & James Fowler. Social Network Sensors for Early Detection of Contagious Outbreaks. PLoS ONE, 5(9): e12948, 2010. arXiv:1004.4792
David Easley & Jon Kleinberg. Networks, Crowds, and Markets. Cambridge University Press, 2010.
Scott Feld. Why your friends have more friends than you do. American Journal of Sociology, 96(6): 1464–1477, 1991.
Matthew Jackson. Social and Economic Networks. Princeton University Press, 2010.
Jure Leskovec & Eric Horvitz. Planetary-Scale Views on a Large Instant-Messaging Network. Microsoft Research, 2008.
Miller McPherson, Lynn Smith-Lovin & James Cook. Birds of a Feather: Homophily in Social Networks. Annual Review of Sociology, 27: 415–444, 2001.
James Moody. Race, School Integration, and Friendship Segregation in America. American Journal of Sociology, 107(3): 679-716, 2001.
Wesley Perkins, Michael Haines & Richard Rice. Misperceiving the college drinking norm and related problems: a nationwide study of exposure to prevention information, perceived norms and student alcohol misuse. Journal of Studies on Alcohol, 66(4): 470-478, 2005.
Ben Schiller. A Social Network For Insurance That Cuts Costs And Reduces Fraud. Fast Company, October 2013.
Brad Walker. How Peer-to-Peer Companies Are Transforming the Insurance Sector. The Street, April 2016.
Jason Wohlgemuth & Mihaela Matache. Small-World Properties of Facebook Group Networks. Complex Systems, 23, 2014.
[i] Complete data can be downloaded from https://snap.stanford.edu/data/email-Enron.html
[ii] https://www.friendsurance.com/ and https://www.inspeer.me/ respectively
Solving the Chinese postman problem
A pre-Halloween post today. It actually started while I was in Barcelona: the kids wanted to go back to a store we had seen on the first day, in the Gothic quarter, and I could not remember where it was. I told myself it would take quite a while to walk every street of the neighbourhood, and I discovered that this is actually an old problem. In 1962, Meigu Guan was interested in a postman delivering mail to a number of streets such that the total distance walked by the postman was as short as possible. How could the postman ensure that the distance walked was a minimum?
A closely related notion is the concept of a traversable graph, one that can be drawn without taking the pen off the paper and without retracing the same edge. In that case the graph is said to have an Eulerian trail (yes, from Euler's bridges problem). An Eulerian trail uses all the edges of a graph. For a graph to be Eulerian, all the vertices must have even degree.
An algorithm for finding an optimal Chinese postman route is:
- List all odd vertices.
- List all possible pairings of odd vertices.
- For each pairing find the edges that connect the vertices with the minimum weight.
- Find the pairings such that the sum of the weights is minimised.
- On the original graph add the edges that have been found in Step 4.
- The length of an optimal Chinese postman route is the sum of all the edges added to the total found in Step 4.
- A route corresponding to this minimum weight can then be easily found.
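Before using the packages below, here is a minimal R sketch of the first steps of that algorithm, on a small illustrative weighted graph (not the one studied later): list the odd vertices, then compute the shortest-path distances between them, which is the quantity the pairing step minimises.
library(igraph)
# a small illustrative weighted graph (toy example, not the network used below)
g0 = graph_from_literal(A-B, A-C, B-C, B-D, C-D, C-E, D-E)
E(g0)$weight = c(3, 2, 1, 4, 2, 3, 1)
# Step 1: list all odd vertices
odd = V(g0)[degree(g0) %% 2 == 1]
odd
# Steps 2-3: shortest-path distances between all pairs of odd vertices
distances(g0, v = odd, to = odd, weights = E(g0)$weight)
# Step 4 would then select the pairing of odd vertices minimising the sum of those distances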
For the first steps, we can use the code from Hurley & Oldford's Eulerian tour algorithms for data visualization and the PairViz package. First, we have to load some R packages
require(igraph)
require(graph)
require(eulerian)
require(GA)
Then use the following function from stackoverflow,
make_eulerian = function(graph){
  info = c("broken" = FALSE, "Added" = 0, "Successfull" = TRUE)
  is.even = function(x){ x %% 2 == 0 }
  search.for.even.neighbor = !is.even(sum(!is.even(degree(graph))))
  for(i in V(graph)){
    set.j = NULL
    uneven.neighbors = !is.even(degree(graph, neighbors(graph,i)))
    if(!is.even(degree(graph,i))){
      if(sum(uneven.neighbors) == 0){
        if(sum(!is.even(degree(graph))) > 0){
          info["Broken"] = TRUE
          uneven.candidates <- !is.even(degree(graph, V(graph)))
          if(sum(uneven.candidates) != 0){
            set.j <- V(graph)[uneven.candidates][[1]]
          }else{
            info["Successfull"] <- FALSE
          }
        }
      }else{
        set.j <- neighbors(graph, i)[uneven.neighbors][[1]]
      }
    }else if(search.for.even.neighbor == TRUE & is.null(set.j)){
      info["Added"] <- info["Added"] + 1
      set.j <- neighbors(graph, i)[ !uneven.neighbors ][[1]]
      if(!is.null(set.j)){search.for.even.neighbor <- FALSE}
    }
    if(!is.null(set.j)){
      if(i != set.j){
        graph <- add_edges(graph, edges=c(i, set.j))
        info["Added"] <- info["Added"] + 1
      }
    }
  }
  (list("graph" = graph, "info" = info))
}
Then, consider some network, with 12 nodes
g1 = graph(c(1,2, 1,3, 2,4, 2,5, 1,5, 3,5, 4,7, 5,7, 5,8, 3,6, 6,8, 6,9,
             9,11, 8,11, 8,10, 8,12, 7,10, 10,12, 11,12), directed = FALSE)
To plot that network, use
V(g1)$name = LETTERS[1:12]
V(g1)$color = rgb(0,0,1,.4)
ly = layout.kamada.kawai(g1)
plot(g1, vertex.color = V(g1)$color, layout = ly)
Then we convert it into a traversable graph by adding 5 edges
eulerian = make_eulerian(g1)
eulerian$info
     broken       Added Successfull 
          0           5           1 
g = eulerian$graph
as shown below
ly = layout.kamada.kawai(g)
plot(g, vertex.color = V(g)$color, layout = ly)
We cut each of those 5 added edges in two and, therefore, add 5 artificial nodes
A  = as.matrix(as_adj(g))
A1 = as.matrix(as_adj(g1))
newA = lower.tri(A, diag = FALSE)*A1 + upper.tri(A, diag = FALSE)*A
for(i in 1:sum(newA==2)) newA = cbind(newA,0)
for(i in 1:sum(newA==2)) newA = rbind(newA,0)
s = nrow(A)
for(i in 1:nrow(A)){
  Aj = which(newA[i,]==2)
  if(!is.null(Aj)){
    for(j in Aj){
      newA[i,s+1] = newA[s+1,i] = 1
      newA[j,s+1] = newA[s+1,j] = 1
      newA[i,j] = 1
      s = s+1
    }}}
We get the following graph, where all nodes now have an even degree!
newg = graph_from_adjacency_matrix(newA)
newg = as.undirected(newg)
V(newg)$name = LETTERS[1:17]
V(newg)$color = c(rep(rgb(0,0,1,.4),12), rep(rgb(1,0,0,.4),5))
ly2 = ly
transl = cbind(c(0,0,0,.2,0), c(.2,-.2,-.2,0,-.2))
for(i in 13:17){
  j = which(newA[i,]>0)
  lc = ly[j,]
  ly2 = rbind(ly2, apply(lc,2,mean)+transl[i-12,])
}
plot(newg, layout = ly2)
Our network is now the following (the new nodes are drawn small because they don't really matter, they are only there for computational reasons)
plot(newg, vertex.color = V(newg)$color, layout = ly2,
     vertex.size = c(rep(20,12), rep(0,5)),
     vertex.label.cex = c(rep(1,12), rep(.1,5)))
Now we can get the optimal path
n <- LETTERS[1:nrow(newA)]
g_2 <- new("graphNEL", nodes=n)
for(i in 1:nrow(newA)){
  for(j in which(newA[i,]>0)){
    g_2 <- addEdge(n[i], n[j], g_2, 1)
  }}
etour(g_2, weighted=FALSE)
 [1] "A" "B" "D" "G" "E" "A" "C" "E" "H" "F" "I" "K" "H" "J" "G" "P" "J" "L" "K" "Q" "L" "H" "O" "F" "C"
[26] "N" "E" "B" "M" "A"
or
edg = attr(E(newg), "vnames")
ET = etour(g_2, weighted=FALSE)
parcours = trajet = rep(NA, length(ET)-1)
for(i in 1:length(parcours)){
  u = c(ET[i], ET[i+1])
  ou = order(u)
  parcours[i] = paste(u[ou[1]], u[ou[2]], sep="|")
  trajet[i] = which(edg == parcours[i])
}
parcours
 [1] "A|B" "B|D" "D|G" "E|G" "A|E" "A|C" "C|E" "E|H" "F|H" "F|I" "I|K" "H|K" "H|J" "G|J" "G|P" "J|P"
[17] "J|L" "K|L" "K|Q" "L|Q" "H|L" "H|O" "F|O" "C|F" "C|N" "E|N" "B|E" "B|M" "A|M"
trajet
 [1]  1  3  8  9  4  2  6 10 11 12 16 15 14 13 26 27 18 19 28 29 17 25 24  7 22 23  5 21 20
Let us now try it on a real network of streets, like Missoula, Montana.
I will not try to get the shapefile of the city, I will just try to replicate the photograph above.
If you look carefully, you will see a problem: 10 and 93 have odd degree (3 here), so one strategy is to connect them (which explains the grey line).
But actually, to be more realistic, we start at 93, and we end at 10. Here is the optimal (shortest) route that goes through all the edges.
Now, we are ready for Halloween, to go through all the streets of the neighbourhood!
Game of Friendship Paradox
In the introduction of my course next week, I will (briefly) mention networks, and I wanted to provide some illustration of the Friendship Paradox. In the Network of Thrones dataset (discussed in Beveridge and Shan (2016)), we have the network of characters in Game of Thrones. The word "friend" might be abusive here, but let's keep calling connected nodes "friends". The friendship paradox states that
People on average have fewer friends than their friends
This was discussed in Feld (1991) for instance, or Zuckerman & Jost (2001). Let’s try to see what it means here. First, let us get a copy of the dataset
download.file("https://www.macalester.edu/~abeverid/data/stormofswords.csv","got.csv")
GoT = read.csv("got.csv")
library(networkD3)
simpleNetwork(GoT[,1:2])
Because it is difficult for me to incorporate some d3js script in the blog, I will illustrate with a more basic graph,
Consider a vertex v\in V in the undirected graph G=(V,E) (with classical graph notations), and let d(v) denote the number of edges touching it (i.e. v has d(v) friends). The average number of friends of a random person in the graph is \mu = \frac{1}{n_V}\sum_{v\in V} d(v)=\frac{2 n_E}{n_V} The average number of friends that a typical friend has is
\frac{1}{n_V}\sum_{v\in V} \left(\frac{1}{d(v)}\sum_{v'\in E_v} d(v')\right)
But
\sum_{v\in V} \left(\frac{1}{d(v)}\sum_{v'\in E_v} d(v')\right)=\sum_{\{v,v'\}\in E} \left(\frac{d(v')}{d(v)}+\frac{d(v)}{d(v')}\right)=\sum_{\{v,v'\}\in E}\left(\frac{d(v')^2+d(v)^2}{d(v)d(v')}\right)=\sum_{\{v,v'\}\in E} \left(\frac{(d(v')-d(v))^2}{d(v)d(v')}+2\right)\geq\sum_{\{v,v'\}\in E} 2=\sum_{v\in V} d(v)
Thus, \frac{1}{n_V}\sum_{v\in V} \left(\frac{1}{d(v)}\sum_{v'\in E_v} d(v')\right)\geq \frac{1}{n_V}\sum_{v\in V} d(v)
Note that this can be related to the variance decomposition \text{Var}[X]=\mathbb{E}[X^2]-\mathbb{E}[X]^2, i.e. \frac{\mathbb{E}[X^2]}{\mathbb{E}[X]} =\mathbb{E}[X]+\frac{\text{Var}[X]}{\mathbb{E}[X]}\geq\mathbb{E}[X] (Jensen's inequality). But let us get back to our network. The list of nodes is
M = rbind(as.matrix(GoT[,1:2]), as.matrix(GoT[,2:1]))
nodes = unique(M[,1])
and, for each of them, we can get the list of friends, and the number of friends
friends = function(x) as.character(M[which(M[,1]==x),2])
nb_friends = Vectorize(function(x) length(friends(x)))
as well as the number of friends that friends have, and their average number of friends
friends_of_friends = function(y) (Vectorize(function(x) length(friends(x)))(friends(y)))
nb_friends_of_friends = Vectorize(function(x) mean(friends_of_friends(x)))
We can look at the density of the number of friends, for a random node,
Nb  = nb_friends(nodes)
Nb2 = nb_friends_of_friends(nodes)
hist(Nb, breaks=0:40, col=rgb(1,0,0,.2), border="white", probability = TRUE)
hist(Nb2, breaks=0:40, col=rgb(0,0,1,.2), border="white", probability = TRUE, add=TRUE)
lines(density(Nb), col="red", lwd=2)
lines(density(Nb2), col="blue", lwd=2)
and we can also compute the averages, just to check
mean(Nb)
[1] 6.579439
mean(Nb2)
[1] 13.94243
So, indeed, people on average have fewer friends than their friends.
Classification from scratch, neural nets 6/8
Sixth post of our series on classification from scratch. The latest one was on the lasso regression, which was still based on a logistic regression model, assuming that the variable of interest Y has a Bernoulli distribution. From now on, we will discuss techniques that did not originate from those probabilistic models, even if they might still have a probabilistic interpretation. Somehow. Today, we will start with neural nets.
Maybe I should start with a disclaimer. The goal is not to replicate well designed R functions, used for predictive modeling. It is simply to get a basic understanding of what’s going on.
Networks, nodes and edges
First of all, neural nets are nets, or networks. I will skip the parallel with the "neural" stuff because it does not help me understand what is happening (all apologies for my poor knowledge of biology, and cells)
So, it's about networks. Networks have nodes, and edges that connect the nodes,
or maybe, to be more specific (at least it helped me understand what's going on), some sort of flow network,
In such a network, we usually have sources (here multiple sources, \color{red}\{s_1,s_2,s_3\}), on the left, and a sink (here \{\color{blue}t\}), on the right. To continue with this metaphorical introduction, information from the sources should reach the sink. Usually, the sources are explanatory variables, \{\mathbf{x}_1,\cdots,\mathbf{x}_p\}, and the sink is our variable of interest \mathbf{y}. And we want to create a graph, from the sources to the sink. We will have directed edges, with only one (unique) direction, on which we will put weights. It is not a flow, the parallel with flows will stop here. For instance, the simplest network will be the following one, with no layer (i.e. no node between the source and the sink)
The output here is a binary variable y\in\{0,1\} (it can also be y\in\{-1,+1\} but here, it's not a big deal). In our network, our output will be y\in(0,1), because it is easier to handle. For instance, consider y=f(\text{something}), for some function f taking values in (0,1). One can consider the sigmoid function f(x)=\frac{1}{1+e^{-x}}=\frac{e^{x}}{e^{x}+1} which is actually the logistic function (so we should not be surprised to get results somehow close to the logistic regression…). This function f is called the activation function, and there are thousands of such functions. If y\in\{-1,+1\}, people consider the hyperbolic tangent f(x)=\tanh(x)={\frac {(e^{x}-e^{-x})}{(e^{x}+e^{-x})}} or the inverse tangent function
f(x)=\tan^{-1}(x). As input for such a function, we consider a weighted sum of the incoming nodes. So here y_i=f\left(\sum_{j=1}^p\omega_j x_{j,i}\right). We can also add a constant, actually y_i=f\left(\omega_0+\sum_{j=1}^p\omega_j x_{j,i}\right). So far, we are not far away from the logistic regression. Except that our starting point was a probabilistic model, in the sense that the latter was interpreted as a probability (the probability that Y=1) and we wanted the model with the highest likelihood. But we'll talk about the selection of weights later on. First, let us construct our first (very simple) neural network. First, we have the sigmoid function
sigmoid = function(x) 1 / (1 + exp(-x))
Then consider some weights. In our model with seven explanatory variables, we need 7 weights. Or 8 if we include the constant term. Let us consider \mathbf{\omega}=\mathbf{1},
weights_0 = rep(1,8)
X = as.matrix(cbind(1, myocarde[,1:7]))
y_5_1 = sigmoid(X %*% weights_0)
that’s kind of stupid because all our predictions are 1, here. Let us try something else. Like \mathbf{\omega}=\widehat{\mathbf{\beta}}^{ols}. It is optimized, somehow, but we needed something to visualize what’s going on
weights_0 = lm(PRONO~., data=myocarde)$coefficients
then use
y_5_1 = sigmoid(X %*% weights_0)
In order to see if we get a "good" prediction, let us plot the ROC curve, and compare it with the one we got with a (simple) logistic regression
library(ROCR)
pred = ROCR::prediction(y_5_1, myocarde$PRONO)
perf = ROCR::performance(pred, "tpr", "fpr")
plot(perf, col="blue", lwd=2)
reg = glm(PRONO~., data=myocarde, family=binomial(link = "logit"))
y_0 = predict(reg, type="response")
pred0 = ROCR::prediction(y_0, myocarde$PRONO)
perf0 = ROCR::performance(pred0, "tpr", "fpr")
plot(perf0, add=TRUE, col="red")
That’s not bad for a very first attempt. Except that we’ve been cheating here, since we did use \mathbf{\omega}=\widehat{\mathbf{\beta}}^{ols}. How, for real, should we choose those weights?
Using a loss function
Well, if we want an “optimal” set of weights, we need to “optimize” an objective function. So we need to quantify the loss of a mistake, between the prediction, and the observation. Consider here a quadratic loss function
loss = function(weights){
  mean( (myocarde$PRONO - sigmoid(X %*% weights))^2 )
}
It might be stupid to use a quadratic loss function for a classification problem, but here, that's not the point. We just want to understand what algorithm we use, and the loss function \ell is just one parameter. Then we want to solve \mathbf{\omega}^\star=\text{argmin}\left\lbrace\frac{1}{n}\sum_{i=1}^n\ell\left(y_i,f(\omega_0+\mathbf{x}_i^T\mathbf{\omega})\right)\right\rbrace Thus, consider
weights_1 = optim(weights_0, loss)$par
(where the starting point is the OLS estimate). Again, to see what’s going on, let us visualize the ROC curve
y_5_2 = sigmoid(X %*% weights_1)
pred = ROCR::prediction(y_5_2, myocarde$PRONO)
perf = ROCR::performance(pred, "tpr", "fpr")
plot(perf, col="blue", lwd=2)
plot(perf0, add=TRUE, col="red")
That’s not amazing, but again, that’s only a first step.
A single layer
Let us add a single layer in our network.
Those nodes are connected to the sources (incoming from the sources) on the left, and then connected to the sink, on the right. Those nodes are not inter-connected. And again, for that network, we need edges (i.e. series of weights). For instance, on the network above, we did add one single layer, with (only) three nodes.
For such a network, the prediction formula is \mathbf{y}=f\left( \omega_0+ \sum_{h=1}^3\omega_h f_h\left(\omega_{h,0}+ \sum_{j=1}^p \omega_{h,j} x_j\right)\right) or more synthetically \mathbf{y}=f\left( \omega_0+ \sum_{h=1}^3 \omega_hf_h\left(\omega_{h,0}+ \mathbf{x}^T\mathbf{\omega}_h\right)\right) Usually, we consider the same activation function everywhere. Don't ask me why, I find that weird.
Now, we have a lot of weights to choose. Let us use again OLS estimates
weights_1 <- lm(PRONO~1+FRCAR+INCAR+INSYS+PAPUL+PVENT, data=myocarde)$coefficients
X1 = as.matrix(cbind(1, myocarde[,c("FRCAR","INCAR","INSYS","PAPUL","PVENT")]))
weights_2 <- lm(PRONO~1+INSYS+PRDIA, data=myocarde)$coefficients
X2 = as.matrix(cbind(1, myocarde[,c("INSYS","PRDIA")]))
weights_3 <- lm(PRONO~1+PAPUL+PVENT+REPUL, data=myocarde)$coefficients
X3 = as.matrix(cbind(1, myocarde[,c("PAPUL","PVENT","REPUL")]))
In that case, we did specify the edges, that is, which sources (explanatory variables) should be used for each additional node. Actually, other techniques could have been used here, like a PCA: each node would then be one of the components. But we'll use that idea later on…
X = cbind(sigmoid(X1 %*% weights_1),
          sigmoid(X2 %*% weights_2),
          sigmoid(X3 %*% weights_3))
But we're not done here. Those were the weights from the sources to the nodes in the layer. We still need the weights from the nodes to the sink. Here, let us use a simple average
weights = c(1/3, 1/3, 1/3)
y_5_3 <- sigmoid(X %*% weights)
Again, we can plot the ROC curve to see what we’ve done…
pred = ROCR::prediction(y_5_3, myocarde$PRONO)
perf = ROCR::performance(pred, "tpr", "fpr")
plot(perf, col="blue", lwd=2)
plot(perf0, add=TRUE, col="red")
On back propagation
Now, we need some optimal selection of those weights. Observe that with only 3 nodes, there are already (7+1)\times3+3=27 parameters in that model! Clearly, parsimony is not the major issue when you start using neural nets! If p(\mathbf{x})=f\left( \omega_0+ \sum_{h=1}^3 \omega_hf_h\left(\omega_{h,0}+ \mathbf{x}^T\mathbf{\omega}_h\right)\right) we want to solve \mathbf{\omega}^\star=\text{argmin}\left\lbrace\frac{1}{n}\sum_{i=1}^n\ell\left(y_i,p(\mathbf{x}_i)\right)\right\rbrace for some loss function, which is \mathbf{\omega}^\star=\text{argmin}\left\lbrace\frac{1}{n}\sum_{i=1}^n (y_i-p(\mathbf{x}_i))^2 \right\rbrace for the quadratic norm, or \mathbf{\omega}^\star=\text{argmin}\left\lbrace-\frac{1}{n}\sum_{i=1}^n (y_i\log p(\mathbf{x}_i)+[1-y_i]\log [1-p(\mathbf{x}_i)]) \right\rbrace if we want to use the cross-entropy.
For convenience, let us center all the variables we create; otherwise, we get numerical problems.
center = function(z) (z-mean(z))/sd(z)
loss = function(weights){
  weights_1 = weights[0+(1:7)]
  weights_2 = weights[7+(1:7)]
  weights_3 = weights[14+(1:7)]
  weights_ = weights[21+1:4]
  X1=X2=X3=as.matrix(myocarde[,1:7])
  Z1 = center(X1 %*% weights_1)
  Z2 = center(X2 %*% weights_2)
  Z3 = center(X3 %*% weights_3)
  X = cbind(1, sigmoid(Z1), sigmoid(Z2), sigmoid(Z3))
  mean( (myocarde$PRONO - sigmoid(X %*% weights_))^2 )
}
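For completeness, here is a hedged sketch of the cross-entropy version of that objective (same slicing of the weight vector, same myocarde data and same helper functions as above; only the last line changes, and a small epsilon is added to avoid taking the logarithm of 0):
loss_entropy = function(weights){
  weights_1 = weights[0+(1:7)]
  weights_2 = weights[7+(1:7)]
  weights_3 = weights[14+(1:7)]
  weights_ = weights[21+1:4]
  X1=X2=X3=as.matrix(myocarde[,1:7])
  Z1 = center(X1 %*% weights_1)
  Z2 = center(X2 %*% weights_2)
  Z3 = center(X3 %*% weights_3)
  X = cbind(1, sigmoid(Z1), sigmoid(Z2), sigmoid(Z3))
  p = sigmoid(X %*% weights_)
  eps = 1e-8   # numerical safeguard (not in the original code)
  - mean( myocarde$PRONO*log(p+eps) + (1-myocarde$PRONO)*log(1-p+eps) )
}
# it can be plugged into the same optimisation, e.g. optim(weights_0, loss_entropy)$par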
Now that we have our objective function, consider some starting points. We can consider weights from a PCA, and then use a gradient descent algorithm,
library(factoextra)   # needed for get_pca_var()
pca = princomp(myocarde[,1:7])
W = get_pca_var(pca)$contrib
weights_0 = c(W[,1], W[,2], W[,3], c(-1, rep(1,3)/3))
weights_opt = optim(weights_0, loss)$par
The prediction is then obtained using
weights_1 = weights_opt[0+(1:7)]
weights_2 = weights_opt[7+(1:7)]
weights_3 = weights_opt[14+(1:7)]
weights_ = weights_opt[21+1:4]
X1=X2=X3=as.matrix(myocarde[,1:7])
Z1 = center(X1 %*% weights_1)
Z2 = center(X2 %*% weights_2)
Z3 = center(X3 %*% weights_3)
X = cbind(1, sigmoid(Z1), sigmoid(Z2), sigmoid(Z3))
y_5_4 = sigmoid(X %*% weights_)
And as previously, why not plot the ROC curve of that model
pred = ROCR::prediction(y_5_4, myocarde$PRONO)
perf = ROCR::performance(pred, "tpr", "fpr")
plot(perf, col="blue", lwd=2)
plot(perf0, add=TRUE, col="red")
That’s not too bad. But with 27 coefficients, that’s what we would expect, no?
Using nnet() function
That’s more or less what is done in neural nets functions. Let us now have a look at some dedicated R functions.
library(nnet)
myocarde_minmax = myocarde
minmax = function(z) (z-min(z))/(max(z)-min(z))
for(j in 1:7) myocarde_minmax[,j] = minmax(myocarde_minmax[,j])
Here, variables are linearly transformed, to take values in (0,1). Then we can construct a neural network with one single layer, and three nodes,
model_nnet = nnet(PRONO~., data=myocarde_minmax, size=3)
summary(model_nnet)
a 7-3-1 network with 28 weights
options were -
  b->h1  i1->h1  i2->h1  i3->h1  i4->h1  i5->h1  i6->h1  i7->h1 
  -9.60   -1.79   21.00   14.72  -20.45   -5.05   14.37  -17.37 
  b->h2  i1->h2  i2->h2  i3->h2  i4->h2  i5->h2  i6->h2  i7->h2 
   4.72    2.83   -3.37   -1.64    1.49    2.12    2.31    4.00 
  b->h3  i1->h3  i2->h3  i3->h3  i4->h3  i5->h3  i6->h3  i7->h3 
  -0.58   -6.03   25.14   18.03   -1.19    7.52  -19.47  -12.95 
   b->o   h1->o   h2->o   h3->o 
  -1.32   29.00  -10.32   26.27 
Here, it is the full network. And actually, there are (online) some functions that can be used to visualize that network
library(devtools)
source_url('https://gist.githubusercontent.com/fawda123/7471137/raw/466c1474d0a505ff044412703516c34f1a4684a5/nnet_plot_update.r')
plot.nnet(model_nnet)
Nice, isn’t it? We clearly see the intermediary layer, with three nodes, and on top the constants. Edges are the plain lines, the darker, the heavier (in terms of weights).
Using neuralnet()
Other R functions can actually be considered.
library(neuralnet)
model_nnet = neuralnet(formula(glm(PRONO~., data=myocarde_minmax)),
                       myocarde_minmax, hidden=3, act.fct = sigmoid)
plot(model_nnet)
Again, for the same network structure, with one (hidden) layer, and three nodes in it.
Network with multiple layers
The good thing is that it is possible to add more layers. Say, two layers: nodes from the first layer are no longer connected with the sink, but with nodes in the second layer, and those nodes will then be connected to the sink. We now have something like
p(\mathbf{x})=f\left( \omega_0+ \sum_{h=1}^3 \omega_h f_h\left(\omega_{h,0}+ \mathbf{z}_h^T\mathbf{\omega}_h\right)\right) where \mathbf{z}_h=f\left( \omega_{h,0}+ \sum_{j=1}^{k_h} \omega_{h,j} f_{h,j}\left(\omega_{h,j,0}+ \mathbf{x}^T\mathbf{\omega}_{h,j}\right)\right). I may be rambling here (a little bit) but that's a lot of parameters. Here is the visualization of such a network,
library(neuralnet)
model_nnet = neuralnet(formula(glm(PRONO~., data=myocarde_minmax)),
                       myocarde_minmax, hidden=3, act.fct = sigmoid)
plot(model_nnet)
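Note that the block above reuses the single-hidden-layer call. To actually fit two hidden layers with neuralnet(), one would presumably pass a vector to the hidden argument; the sizes below are chosen arbitrarily, just for illustration.
model_nnet_2 = neuralnet(formula(glm(PRONO~., data=myocarde_minmax)),
                         myocarde_minmax, hidden = c(3,2),  # two hidden layers, with 3 and 2 nodes
                         act.fct = sigmoid)
plot(model_nnet_2)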
Application
Let us get back to our simple dataset, with only two covariates.
library(neuralnet)
df_minmax = df
df_minmax$y = (df_minmax$y=="1")*1
minmax = function(z) (z-min(z))/(max(z)-min(z))
for(j in 1:2) df_minmax[,j] = minmax(df[,j])
X = as.matrix(cbind(1, df_minmax[,1:2]))
Consider only one layer, with two nodes
model_nnet = neuralnet(formula(lm(y~., data=df_minmax)),
                       df_minmax, hidden=c(2))
plot(model_nnet)
Here, we did not specify it, but the activation function is the sigmoid (actually, it is called logistic here)
model_nnet$act.fct
function (x) 
{
    1/(1 + exp(-x))
}
attr(,"type")
[1] "logistic"
f = model_nnet$act.fct
The weights (on the figure) can be obtained using
w0 = model_nnet$weights[[1]][[2]][,1]
w1 = model_nnet$weights[[1]][[1]][,1]
w2 = model_nnet$weights[[1]][[1]][,2]
Now, to get our prediction,
we should use p(\mathbf{x})=f\left( \omega_0+ \omega_1 f(\omega_{1,0}+ \mathbf{x}^T\mathbf{\omega}_{1,1:2})+\omega_2 f(\omega_{2,0}+ \mathbf{x}^T\mathbf{\omega}_{2,1:2})\right), which can be obtained using
f(cbind(1, f(X%*%w1), f(X%*%w2)) %*% w0)
              [,1]
 [1,] 0.7336477343
 [2,] 0.7317999050
 [3,] 0.7185803540
 [4,] 0.7404005280
 [5,] 0.7518482779
 [6,] 0.4939774149
 [7,] 0.4965876378
 [8,] 0.7101714888
 [9,] 0.5050760026
[10,] 0.5049877644
Unfortunately, it is not the output of the model here,
neuralnet::prediction(model_nnet)
Data Error:	0;
$rep1
       x1           x2              y
1  0.1250 0.0000000000  0.02030470787
2  0.0625 0.1176470588  0.89621706711
3  0.9375 0.2352941176  0.01995171956
4  0.0000 0.4705882353  1.10849420363
5  0.5000 0.4705882353 -0.01364966058
6  0.3125 0.5294117647 -0.02409150561
7  0.6875 0.8235294118  0.93743057765
8  0.3750 0.8823529412  1.01320924782
9  1.0000 0.9058823529  1.04805134309
10 0.5625 1.0000000000  1.00377379767
If anyone has a clue, I'd be glad to know what went wrong here… I find it odd to have outputs outside the (0,1) interval, but the output is not p(\mathbf{x})=\omega_{0,0}+ \omega_{0,1} f(\omega_{1,0}+ \mathbf{x}^T\mathbf{\omega}_{1,1:2})+\omega_{0,2} f(\omega_{2,0}+ \mathbf{x}^T\mathbf{\omega}_{2,1:2}) either,
cbind(1, f(X%*%w1), f(X%*%w2)) %*% w0
                [,1]
 [1,]  1.01320924782
 [2,]  1.00377379767
 [3,]  0.93743057765
 [4,]  1.04805134309
 [5,]  1.10849420363
 [6,] -0.02409150561
 [7,] -0.01364966058
 [8,]  0.89621706711
 [9,]  0.02030470787
[10,]  0.01995171956
Traffic Flow of Kota Kinabalu (with R)
This morning, we had our first practical session on network flows, using an example mentioned in some papers published by Noraini Abdullah and Ting Kien Hua, max flow min cut theorem to minimize traffic congestion in Kota Kinabalu and application of the Shortest Path and Maximum Flow with Bottleneck in Traffic Flow of Kota Kinabalu. From the roads mentioned in the articles, I did my best to locate the nodes on a map,
m=matrix(c(0,5.995910, 116.105520,
1,5.992737, 116.093718,
2,5.992066, 116.109883,
3,5.976947, 116.095760,
4,5.985766, 116.091580,
5,5.988940, 116.080112,
6,5.968318, 116.080764,
7,5.977454, 116.075460,
8,5.974226, 116.073604,
9,5.969651, 116.073753,
10,5.972341, 116.069270,
11,5.978818, 116.072880),3,12)
which can be visualized below
library(OpenStreetMap)
map = openmap(c(lat= 6.000, lon= 116.06),
c(lat= 5.960, lon= 116.12))
map=openproj(map)
plot(map)
points(t(m[3:2,]),col="black", pch=19, cex=3 )
text(t(m[3:2,]),c("s",1:10,"t"),col="white")
If the source is realistic (up north), I do not feel very comfortable with the location of the sink (on the west). But let's pretend it's fine (to do the maths, at least).
To extract information about edge capacities on that network, use the following code, which will extract the three tables from the paper
library(devtools)
install_github("ropensci/tabulizer")
library(tabulizer)
location <- 'http://www.jistm.com/PDF/JISTM-2017-04-06-02.pdf'
out <- extract_tables(location)
with Windows, it seems to be necessary to download another package first
library(devtools)
install_github("ropensci/tabulizerjars")
install_github("ropensci/tabulizer")
library(tabulizer)
location <- 'http://www.jistm.com/PDF/JISTM-2017-04-06-02.pdf'
out <- extract_tables(location)
Now we can get our data frame with capacities
B1=as.data.frame(out[[2]])
B2=as.data.frame(out[[3]])
E=data.frame(from=B1[3:20,"V3"],
to=B1[3:20,"V4"])
E=E[-c(6,8),]
capacity=as.character(B2$V3[-1])
capacity[6]="843"
capacity[4]="2913"
E$capacity=as.numeric(capacity)
We can add those edges on our map (without the arrows indicating the direction, which would be too heavy to read)
plot(map)
points(t(m[3:2,]),col="black", pch=19, cex=3 )
B=data.frame(i=as.character(c("s",paste("V",1:10,sep=""),"t")),
x=m[3,],y=m[2,])
for(i in 1:nrow(E)){
i1=which(B$i==as.character(E$from[i]))
i2=which(B$i==as.character(E$to[i]))
segments(B[i1,"x"],B[i1,"y"],B[i2,"x"],B[i2,"y"],lwd=3)
}
text(t(m[3:2,]),c("s",1:10,"t"),col="white")
To get the graph with capacities, an alternative is to use
library(igraph)
g=graph_from_data_frame(E)
E(g)$label=E$capacity
plot(g)
but it does not respect the geographical locations of the nodes. That can actually be done using
plot(g, layout=as.matrix(B[,c("x","y")]))
To get a better understanding of the capacities of the roads, use
plot(g, layout=as.matrix(B[,c("x","y")]),
edge.width=E$capacity/200)
From that network with capacities, the goal is to determine maximum flow on that network, from the source to the sink. This can be done with R using
> (m=max_flow(graph=g, source="s", target="t"))
$value
[1] 2571
$flow
[1] 1191 1380 1422 1380 231 0 231 0 1149 1422 1149 0 0 1149 1422
[16] 1149
Our maximum flow here is 2571, which is different from what is actually claimed in both papers, max flow min cut theorem to… and application of the Shortest Path… ("the maximum flow for the capacitated network with 12 nodes and 16 edges of the selected scope in this study was 2598 vehicles per hour"), where there are clearly typos, since the values in the table and on the graph are different. Here I used the ones from the tables.
E$flux1=m$flow
E(g)$label=E$flux1
plot(g, layout=as.matrix(B[,c("x","y")]),
edge.width=E$flux1/200)
That is nice, but rather odd. Actually, a much simpler flow can be considered, with the same total value
E$flux2=c(1422,1149,1422,1149,0,0,0,0,
1149,1422,1149,0,0,1149,1422,1149)
E(g)$label=E$flux2
plot(g, layout=as.matrix(B[,c("x","y")]),
edge.width=E$flux2/200)
Nice, isn't it? It is actually possible to do exactly the same with another paper they wrote, on the same city, traffic congestion problem of road networks in Kota Kinabalu.
location <- 'http://www.worldresearchlibrary.org/up_proc/pdf/999-150486366625-30.pdf'
out <- extract_tables(location)
dim(out[[3]])
B1=as.data.frame(out[[3]])
E=data.frame(from=B1[2:61,"V2"],
to=B1[2:61,"V3"],
capacity=B1[2:61,"V4"])
E$capacity=as.numeric(
as.character(E$capacity))
library(igraph)
g=graph_from_data_frame(E)
m=max_flow(graph=g,
source="S",
target="T")
E$flux1=m$flow
E(g)$label=E$flux1
plot(g,
edge.width=E$flux1/200,
edge.arrow.size=0.15)
Here the value of the maximal flow is 4017, just as they found in the original paper.
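Since both papers rely on the max-flow min-cut theorem, one can also, as a quick sketch, extract a minimum cut with igraph and check that its capacity equals the maximum flow just obtained (the function and arguments below are standard igraph, but this check is not in the original papers):
mc = min_cut(graph = g, source = "S", target = "T",
             capacity = E(g)$capacity, value.only = FALSE)
mc$value    # by the max-flow min-cut theorem, this equals the maximum flow (4017 here)
mc$cut      # the edges crossing the minimum cut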
Traffic Flow of Kota Kinabalu (Malaysia)
For the second practical session of our course on networks and flows, we will study the traffic flow of Kota Kinabalu (Malaysia), following several papers published by Noraini Abdullah and Ting Kien Hua, such as max flow min cut theorem to minimize traffic congestion in Kota Kinabalu, traffic congestion problem of road networks in Kota Kinabalu and application of the Shortest Path and Maximum Flow with Bottleneck in Traffic Flow of Kota Kinabalu.
Networks and Flows #2
After the Traveling Salesman part, we will see tools used to study flows in networks this week. Slides are now online (from slide 42).
Slides are probably full of typos, not to say mistakes. All apologies.
I will also include some slides on the simplex method (since I am not sure that all students have seen it)
Metro: centrality and robustness
Tomorrow morning, we will have a practical session for the course on networks, flows and transportation. Following the work of Sybil Derrible, we will start by studying centrality in various metro systems, as well as their robustness. The adjacency matrices of some thirty metro systems around the world are available online in an xls file. To save a bit of time, the code to create an adjacency matrix can be the following
loc="/data/Metro_Networks_Adjacency.xls"
library(xlsx)
E=read.xlsx(loc,"StPetersburg")
n=nrow(E)
nom=as.character(E[3:(n-2),1])
Adj=E[3:(n-2),(4:ncol(E)-1)]
Adj[is.na(Adj)]=0
Adj=as.matrix(Adj)
colnames(Adj)=rownames(Adj)=nom
We are then ready to work with the network,
library(igraph)
iflo=graph_from_adjacency_matrix(Adj,mode = "undirected")
plot(iflo)
We will use the notions seen in class on centrality but, above all, we will work on Quantifying the robustness of metro networks, inspired by The complexity and robustness of metro networks. Several useful functions are already implemented in R, such as assortativity.
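As a hedged sketch (still with the St Petersburg adjacency matrix loaded above), some of the quantities used in those robustness papers can be obtained directly with igraph, for instance degree assortativity and betweenness centrality:
# degree assortativity of the metro network
assortativity_degree(iflo, directed = FALSE)
# betweenness centrality, to spot the most critical stations
sort(betweenness(iflo), decreasing = TRUE)[1:5]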
Networks with R
In order to practice with network data in R, we have been playing with the Padgett (1994) Florentine wedding dataset (discussed in the lecture). The dataset is available from
> library(network)
> data(flo)
> nflo=network(flo,directed=FALSE)
> plot(nflo, displaylabels = TRUE,
+ boxed.labels =
+ FALSE)
The next step was to move from the network package to igraph. Since we have the adjacency matrix, we can use it
> library(igraph)
> iflo=graph_from_adjacency_matrix(flo,
+ mode = "undirected")
> plot(iflo)
The good thing is that a lot of functions are available. For instance, we can get shortest paths between two specific nodes, and we can give appropriate colors to the nodes that we'll cross
> AP=all_shortest_paths(iflo,
+ from="Peruzzi",
+ to="Ginori")
> L=AP$res[[1]]
> V(iflo)$color="yellow"
> V(iflo)$color[L[2:4]]="light blue"
> V(iflo)$color[L[c(1,5)]]="blue"
> plot(iflo)
We can also visualize edges, but I found it slightly more complicated (to extract edges from the output)
> liens=c(paste(as.character(L)[1:4],
+ "--",
+ as.character(L)[2:5],sep=""),
+ paste(as.character(L)[2:5],
+ "--",
+ as.character(L)[1:4],sep=""))
> df=as.data.frame(ends(iflo,E(iflo)))
> names(df)=c("src","target")
> lstn=sort(unique(c(as.character(df[,1]),as.character(df[,2]),"Pucci")))
> Eliens=paste(as.numeric(factor(df[,1],levels=lstn)),"--",
+ as.numeric(factor(df[,2],levels=lstn)),sep="")
> EU=unlist(lapply(Eliens,function(x) x%in%liens))
> E(iflo)$color=c("grey","black")[1+EU]
> plot(iflo)
But it works. It is also possible to use some D3js visualization
> library( networkD3 )
> simpleNetwork (df)
Then the next question was to add a vertex to the network. The simplest way to do it is probably through the adjacency matrix
> flo2=flo
> flo2["Pucci","Bischeri"]=1
> flo2["Bischeri","Pucci"]=1
> nflo2=network(flo2,directed=FALSE)
> plot(nflo2, displaylabels = TRUE,
+ boxed.labels =
+ FALSE)
Then, we’ve been playing with centrality measures.
> plot(iflo,vertex.size=betweenness(iflo))
The goal was to see how related they were. Here, for all of them, “Medici” is the central node. But what about the others?
> B=betweenness(iflo)
> C=closeness(iflo)
> D=degree(iflo)
> E=eigen_centrality(iflo)$vector
> base=data.frame(betw=B,close=C,deg=D,eig=E)
> cor(base)
betw close deg eig
betw 1.0000000 0.5763487 0.8333763 0.6737162
close 0.5763487 1.0000000 0.7572778 0.7989789
deg 0.8333763 0.7572778 1.0000000 0.9404647
eig 0.6737162 0.7989789 0.9404647 1.0000000
Those measures are quite correlated. It is also possible to use hierarchical clustering to visualize how close those centrality measures are
> H=hclust(dist(t(base)),
+ method="ward")
> plot(H)
Instead of looking at the values of the centrality measures, it is possible to look at their ranks
> rbase=base
> for(i in 1:4) rbase[,i]=rank(base[,i])
> H=hclust(dist(t(rbase)),
+ method="ward")
> plot(H)
Here the eigenvector centrality measure is very close to the degree of the vertices.
Finally, it is possible to look for clusters (in the context of coalitions here, in case a war were to start between those families)
> kc <- fastgreedy.community ( iflo )
Here we have 3 classes (+1 for the node that is disconnected from the other families)
> V(iflo)$color=c("yellow","orange",
+ "light blue")[membership ( kc )]
> plot(iflo)
> plot(kc,iflo)
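As a sketch, other community-detection algorithms available in igraph can be compared with the fast-greedy partition above, for instance the walktrap algorithm:
> kc2 <- cluster_walktrap(iflo)
> table(membership(kc), membership(kc2))
> plot(kc2, iflo)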