All posts by Arthur Charpentier

Arthur Charpentier, professor of Actuarial Science in Montréal. Former assistant professor at ENSAE ParisTech, associate professor at École Polytechnique, and assistant professor in Economics at Université de Rennes 1. Graduated from ENSAE, with a Master in Mathematical Economics (Paris Dauphine) and a PhD in Mathematics (KU Leuven); Fellow of the French Institute of Actuaries.

Je code, donc je suis

On Wednesday, November 21, the 2018 edition of the HumanIA Colloquium will take place at the Agora Hydro-Québec of UQAM's Complexe des sciences Pierre-Dansereau, starting at 9:30 am. As part of the week La France à l'UQAM, the colloquium will be followed, in the afternoon, by a debate on the theme "Intelligence artificielle : l'erreur n'est-elle qu'humaine ?" (artificial intelligence: is error only human?). I will take part in a workshop in the morning, on the theme "je code, donc je suis" (I code, therefore I am). Here are a few links to feed the discussion,

Rencontres Mutualistes

On Monday and Tuesday, I will be in Beaune, in Burgundy, for the first Rencontres Mutualistes. I have been asked to give the opening talk of the second day, on the theme "segmentation and mutualisation".

The slides are already online. Since I will have little time, I will go back over the main principles of insurance pricing and the role of the actuary. I then thought that a discussion around the following graph could be interesting, in particular about the two bounds, the lower one ('average pricing') and the upper one ('perfect pricing').
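To fix the notation (my own reading of those two bounds, not taken from the slides): if S denotes the (random) loss of a policyholder and \boldsymbol{X} the rating variables, the two extreme premiums are

\pi_{\text{average}}=\mathbb{E}[S] (no segmentation, everyone pays the same)
\pi_{\text{perfect}}=\mathbb{E}[S|\boldsymbol{X}] (premiums fully reflect individual risk)

and the law of total variance, \text{Var}(S)=\mathbb{E}[\text{Var}(S|\boldsymbol{X})]+\text{Var}(\mathbb{E}[S|\boldsymbol{X}]), describes what lies between the two: the more we segment, the more of the variability is transferred from the pooled losses to the premiums.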

We will end with a quick look back at the pricing games, to conclude.

The “probability to win” is hard to estimate…

Real-time computation (or estimation) of the "probability to win" is difficult. We've seen that with soccer games, with elections… but actually, as a professor, I see it frequently when I grade my students.

Consider a classical multiple choice exam. After each question, imagine that you try to compute the probability that the student will pass. Consider here the case where we have 50 questions. Students pass when they have 25 correct answers, or more. Just for the simulations, I will assume that students simply flip a coin at each question… I have n students and 50 questions

set.seed(1)
n=10
M=matrix(sample(0:1,size=n*50,replace=TRUE),50,n)

Let X_{i,j} denote the score of student i at question j. Let S_{i,j} denote the cumulated score, i.e. S_{i,j}=X_{i,1}+\cdots+X_{i,j}. At step j, I can get some sort of prediction of the final score, using \hat{T}_{i,j}=50\times S_{i,j}/j. Here is the code

SM=apply(M,2,cumsum)
NB=SM*50/(1:50)

We can actually plot it

plot(NB[,1],type="s",ylim=c(0,50))
abline(h=25,col="blue")
for(i in 2:n) lines(NB[,i],type="s",col="light blue")
lines(NB[,3],type="s",col="red")


But that's simply the prediction of the final score, at each step. That's not the computation of the probability to pass!

Let's try to see how we can do it… If after j questions the student has 25 correct answers or more – i.e. if S_{i,j}\geq 25 – the probability should be 1, since he cannot fail anymore. Another simple case is the following: if, after j questions, the points he could still get by answering all remaining questions correctly are not sufficient, he will fail, i.e. if S_{i,j}+(50-j)< 25 the probability should be 0. Otherwise, computing the probability of success is quite straightforward: it is the probability of obtaining at least 25-S_{i,j} correct answers out of the 50-j remaining questions, when the probability of success is estimated by the current success rate S_{i,j}/j. We recognize the survival probability of a binomial distribution. The code is then simply

PB=matrix(NA,50,n)
for(i in 1:50){      # question i
  for(j in 1:n){     # student j
    if(SM[i,j]>=25) PB[i,j]=1                    # already passed
    if(SM[i,j]+(50-i)<25)   PB[i,j]=0            # cannot reach 25 anymore
    if((SM[i,j]<25)&(SM[i,j]+(50-i)>=25)) PB[i,j]=1-pbinom(25-SM[i,j]-1,size=(50-i),prob=SM[i,j]/i)
  }}

So if we plot it, we get

plot(PB[,1],type="s",ylim=c(0,1))
abline(h=.5,col="blue")
for(i in 2:n) lines(PB[,i],type="s",col="light blue")
lines(PB[,3],type="s",col="red")

which is much more volatile than the previous curves we obtained! So yes, computing the "probability to win" is a complicated exercise! Don't blame those who try: it really is hard to do!

Of course, things are slightly different if my students don't flip a coin… this is what we obtain if half of the students are good (a 2/3 probability of getting a question correct) and half are not (a 1/3 chance),
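The figure can be reproduced along the following lines (a sketch only: the exact seed and parameters behind the original figure are not given),

set.seed(1)
prob = rep(c(2/3, 1/3), each = n/2)                 # half good students, half not
M = matrix(rbinom(n*50, size = 1, prob = rep(prob, each = 50)), 50, n)
SM = apply(M, 2, cumsum)
NB = SM*50/(1:50)                                   # predicted final scores, as before
plot(NB[,1], type = "s", ylim = c(0, 50))
abline(h = 25, col = "blue")
for(i in 2:n) lines(NB[,i], type = "s", col = "light blue")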

If we look at the probability to pass, we usually do not have to wait until the end (the 50 questions) to know who passed and who failed

PS: I guess a less volatile solution can be obtained with a Bayesian approach… if I find some spare time this week, I will try to code it…
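For the record, here is the kind of thing I have in mind (a quick sketch only, with an arbitrary uniform Beta(1,1) prior on each student's probability of answering correctly: the posterior after j questions is a Beta(1+S_{i,j},1+j-S_{i,j}), and the probability to pass is a Beta-Binomial survival probability on the remaining questions),

pass_bayes = function(s, q, a = 1, b = 1, total = 50, threshold = 25){
  m = total - q                    # questions remaining
  k = threshold - s                # correct answers still needed
  if(k <= 0) return(1)
  if(k > m)  return(0)
  # Beta-Binomial survival: integrate the binomial tail against the Beta(a+s, b+q-s) posterior
  integrate(function(p) pbinom(k-1, size = m, prob = p, lower.tail = FALSE)*
                        dbeta(p, a + s, b + q - s), 0, 1)$value
}
PB2 = matrix(NA, 50, n)
for(i in 1:50) for(j in 1:n) PB2[i,j] = pass_bayes(SM[i,j], i)
plot(PB2[,1], type = "s", ylim = c(0,1))
for(j in 2:n) lines(PB2[,j], type = "s", col = "light blue")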

Big Data and Artificial Intelligence

Next week, I will be in France for a few days. On Monday and Tuesday, I will be in Beaune, in Burgundy, at the first Rencontres Mutualistes (I will upload the slides of my talk soon). And on Wednesday, I will be in Paris, at ESCP Europe Business School. I will be giving a two-hour lecture on "Big Data and Artificial Intelligence", to use some buzzwords, as asked. More honestly, it will be about (new) data and (new) algorithms for predictive modeling. Slides are now online.

Mapping cities

A French version of this article is online at http://variances.eu/

Issue 53 of Insee Analyses Ile-de-France provides an analysis of “a social mosaic specific to Paris“, with the map in Figure 1.

Figure 1: INSEE, Insee Analyses 53, 2017

This map is a priori familiar to many people, in the sense that we quickly recognize the city represented, we know how to quickly locate various elements, and we know how to read the information presented, almost instinctively. In urban history, the way we saw cities, and how we represented them on maps, has often been the basis of urban planning. Changing the representation has made it possible to change the structure of cities. We will take up here the two major historical turning points mentioned in Söderström (1996), based on two recent works: the representation of Rome at the beginning of the Renaissance, with the first ichnographic plans, described in Maier (2015), and the "social" or "sanitary" maps of London drawn by Victorian civil servants, described in Vaughan (2018). In particular, the latter are the ancestors of zoning maps, which are widely used in urban planning, but also correspond to the majority of maps produced by statisticians and economists (the INSEE map is an example). And some maps from the last century stand comparison with the maps produced today, in the era of big data.

Rome, Leon Battista Alberti and Leonardo Bufalini, and the immutable mobiles

Choay (1980) emphasizes the fundamental role, in the history of urban planning, of Alberti's De Re Aedificatoria (presented in manuscript form to Pope Nicholas V in 1452, but published only in 1485). Alberti's treatise is indeed the first text to consider construction (Alberti prefers the term "construction" – ædificatoria – to cover both architecture and urban planning) as an autonomous domain to which the rational method must be applied. The history of representation sees a turning point with the Renaissance, with figurative forms used to represent urban space. We leave the medieval aesthetic with the rediscovery of perspective, which produces a rationalization of what can be seen, even if it often induces a partial vision of the object. In his treatise, Leon Battista Alberti proposes a scientific method governing the art of building the house, but also the entire city. But it is in Descriptio urbis Romae, probably written at the same time, that he deepened the idea of urban planning, taking the particular example of Rome.

In his book, Alberti does not propose any map of Rome, but a list of instructions to be followed to create one, with coordinate tables for several important elements of the city, natural but also artificial. The list includes the ramparts, the river (the Tiber), the city gates, and more than thirty public buildings, including the Capitol, which for Alberti is the reference point of the urban plan. He proposed to represent the city using a disc divided into 48 portions, and using the distance to the Capitol (in addition to a compass) to place any building. All the calculations are detailed in Ludi Matematici Descriptio, using triangulation techniques. In 1450, Alberti thus invented the geometric plan, corresponding to what we would today call the plan of a city, even if the circular shape may surprise at first sight (see Figure 2), and does not correspond to the ichnographic plan that we all use today (obtained by horizontal and geometric projection onto a plane).

Figure 2: reconstruction of Alberti's map in Descriptio urbis Romae, by Luigi Vagnetti in Lo studio di Roma negli scritti albertiani (1974). Source: Maier (2015).

His plan corresponds to the emergence of a new, very geometric, mode of representation. But it was not until Leonardo Bufalini's plan of 1551 that the first ichnographic plan arrived (it would be unfair to forget the plan of Imola drawn in 1503 by Leonardo da Vinci). While Alberti's plan indicated the coordinates of a building, Bufalini decided to incorporate the ground plan of the buildings into his city plan.

Figure 3: Bufalini's map, Roma, 1551, British Library, London. Source: Maier (2015).

But if Alberti's plan had such an impact, it is also because it came at the time when Pope Nicholas V launched a plan to rebuild Rome, covering an entire district, from Castel Sant'Angelo to the Vatican. This is probably the first urban planning project on this scale, proposing to use the urban form as an instrument of social engineering. Alberti's representation helped this project, with a scientific vision of the map, no longer depending on the artist's skills, nor needing to be inscribed in a story that would give it meaning. This urban map is self-sufficient, containing the terms of its own meaning. In Latour's (1989) terminology, these representations that can be detached from the place (or object) they represent, "while remaining immutable so that they can be moved in all directions without further distortion, loss or corruption", correspond to immutable mobiles. Alberti's map is one of the first examples of these immutable mobiles. It juxtaposes the natural and the human construction, the profane and the sacred, placing measurement and position as the only values.

These plans see the urban space as a whole, without offering a single point of view, unlike the more classic (for the time) maps of Jacopo Filippo Foresti, for example (see Figure 4). It is possible to stand in Foresti's viewpoint and see what his map shows; Alberti's map exists only as an abstract object.

Figure 4: view of Rome by Jacopo Filippo Foresti, 1490. Source: Maier (2015).

If Leonardo Bufalini's map revolutionized urban mapping, and if the ichnographic plan is the dominant representation today, these maps long remained marginal, because they were reserved for administrative or military purposes. Foresti's type of map has not completely disappeared: it can be found in tourist maps, for example, which are not very concerned about proportions, simply seeking to stage monuments or to indicate itineraries. We then contrast an often local, horizontal vision (on a human scale) with a vision sometimes called zenithal, which proposes to conceive objects in abstract terms. It is the latter that makes it possible to represent the city in the form of different neighbourhoods, with different levels of wealth for example, leading to the geometric plans of Victorian social statistics, which made the city an object of census, measurement and comparison.

Also noteworthy is the 1748 map of Rome created by Giambattista Nolli. Previously, Leonardo Bufalini proposed to take the point of view of an eagle, flying over the city. Nolli established the now common practice of representing entire cities from above without a single focal point, each block being considered as if the cartographer were directly above it.

Figure 5: Giambattista Nolli's map of Rome, 1748. Source: Sylvain Mottet.

London, Thomas More and Charles Booth, and the zoning maps

At the end of the 19th century (from 1870 onwards), Germany saw the first "social maps", born in a context of increasingly dense urban populations, high social tensions and deteriorating sanitary conditions. German planners proposed an innovative vision of the city as a living organism that needed to be made to function more efficiently. In 1876, Reinhard Baumeister, in Stadterweiterungen in technischer, baupolizeilicher und wirtschaftlicher Beziehung, and especially Josef Stübben, in Der Städtebau (1890), proposed the first urban planning manuals. Thus, towards the end of the first chapter, Baumeister proposes to use an urban expansion plan, a master plan to organize the future urban space. For him, it was a question of ensuring the stability and proper functioning of a city designed as a living organism, to deal with the problems it faces: overcrowding in certain districts, traffic and hygiene problems, social unrest, etc. To do this, he suggests specializing the city's sectors in functional and social terms – what would later be called a "zoning plan" (or Bauzonenplan) – and ensuring the sustainability of this specialization. However, he warns against an overly rigid and inflexible master plan: urban development cannot be planned with too much precision, and it is therefore counterproductive to want to freeze it in a totally predetermined framework. His plan aims to provide the general guidelines necessary for the cohesion of the urban organization. In particular, he notes that the more detailed the guidelines, the more they will have to be the subject of local plans with a limited time horizon.

While the zoning plan was not originally conceived as part of the master plan, it quickly became the key document, its clearest and most effective part. The objective was to grasp, at a glance, the whole city as part of an administrative project. It is not only a question of having an overall vision of the city (which the ichnographic plan already allowed) but also of using colour codes that facilitate the total regulation of this city. In particular, this zoning plan made it possible to predict, several years or even decades in advance, what the morphological and functional characteristics of a given area would be. It also allowed investors to anticipate the future of an area and guarantee a certain return on their investments.

This vision proposed by Baumeister thus made it easier to see, for example, that the most bourgeois areas were often located in the west of cities. This location is simply explained by the fact that these areas are often healthier: the smoke and smog produced by cities are dispersed in the upper layers of the atmosphere, and when the wind comes from the west (which happens most often in most European cities) the smoke and smog are transported eastwards and towards the lower layers of the atmosphere. From this observation, it becomes natural to build factories in the east and houses in the west. Baumeister's work was not only theoretical: he worked on the development of the city of Frankfurt in 1891, then Berlin, Cologne, Essen, etc. In Frankfurt, he thus proposed the idea of concentric zones, which was later taken up by many economists. Figure 6 shows this form of city, in an article published in 1925 by Ernest Burgess (one of the founders of the Chicago school). At the beginning of the First World War, all German cities had a zoning plan. And in the following years, it was the United States that adopted the concept, with New York in 1916, and more than 500 cities by 1926. In that year, zoning was officially institutionalized, with the approval of the Supreme Court. In 1933, the Athens Charter recognized zoning as the main and central task of urban planning.

Figure 6: the concentric city, Burgess (1925). Source: Vaughan (2018)

But in parallel with these German developments, where civil servants were inventing the instruments of contemporary urban planning, social mapping in England took place in a context of strong social tensions. The impoverishment of a large part of the population, the many very precarious dwellings, the disastrous sanitary conditions and the increase in crime in large cities made urban development management an extremely sensitive and political subject. It is not surprising to see the work of Patrick Geddes published in Edinburgh: a biologist by training (the city again seen as a living organism) and an anarchist activist, he thought of images and cartography as central tools in the fight against poverty. He developed and advocated the use of statistics and mapping in land use planning and urban development, probably more than anyone else at that time. But history has mostly remembered Charles Booth's work in London, from 1886 onwards.

Charles Booth, who began as a merchant and shipowner, devoted himself fully, at the end of the 19th century, to the first social surveys, based on a precise taxonomy of social categories. He was the first to produce social maps covering the entire urban space. His investigations focused first on the East End, London's most deprived neighbourhood, before spreading throughout the city over more than 17 years. His objective was to provide a scientific study of the living conditions of the London population, in order to put an end to preconceived images of deprived neighbourhoods. As he said in 1902, his objective was to establish "the numerical relation which poverty, misery and depravity bear to regular earnings and comparative comfort, and to describe the general conditions under which each class lives".

Booth's approach was based on the creation of a statistical classification of social categories, ranging from A (the lower class) to H (the upper middle class). He therefore created, on the basis of the notes taken in the field by inspectors, a taxonomy that distinguishes the different sectors of the social spectrum. He estimated the number of "poor" (classes A-D) at 300,000 people in the East End and 1,300,000 for the city as a whole, almost a third of the total population at the time. The impact of these figures on the public was enormous, and was reinforced by the poverty maps included in the results volumes, dealing first with the East End and then, a few years later, with the entire city, as illustrated in Figure 7.

Figure 7: Charles Booth, Map Descriptive of London Poverty, 1898. Source: Vaughan (2018). See also https://booth.lse.ac.uk/map/

The map makes it possible to move from a social logic to a spatial logic: a particular class is translated into cartographic terms, and becomes a building, a block of houses, a street, an entire urban area. The social map therefore made it possible to think of the city in terms of homogeneous spatial units. This reasoning is essential for urban planning, which could not have developed on the basis of a complex discourse distinguishing between the different inhabitants of the same building. This social vision of mapping, with its focus on slums and poor neighbourhoods, should be linked to a public health objective.

That said, thinking about urban development in terms of health interventions to heal society from its ills is not new. In 1516, Thomas More founded one of the main forms of urban planning theory, starting with a diagnosis of the disease and then proposing a definitive solution through a total restructuring of the urban form. During the 18th century, the translation of this principle consisted in isolating particular intervention areas (characterized by their insalubrity) and removing them, sweeping away the urban past. The solution adopted at the end of the 19th century was rather to work from what already existed, and to find the most effective solutions to manage the probable future changes in the urban context.

At the end of the 19th century, we also moved from "descriptive statistics" to "prescriptive statistics", to use the terms of Ogien (2013). We no longer simply count the number of smallpox patients; we begin to make the choice to vaccinate (or not) a specific population, and therefore to set up a mandatory preventive intervention (at the time, the vaccine still killed about 1 person in 300).

Adolphe Quetelet's homme moyen (average man) launched moral statistics, with the average becoming the norm. Diseases also began to be linked to population density, poor ventilation and humidity. "Dirty, unhealthy, infectious, corrupt or simply stinking are the categories that make it possible to think what we now call pollution", in the words of Fureix and Jarrige (2015). We then move from the social map to the "moral map" of a city thought up by hygienists. Moral geography, which until then had been the subject of partial and unsystematized observations, finds in the map a (graphical) space that synthesizes and organizes it. The social map gave the globalizing vision necessary for the existence of urban planning, and for the precise location of the sites where its targeted and rational therapeutic action was needed. One thinks of Dr. John Snow's 1854 map of the cholera epidemic, presented (and updated) in Figure 8. At the time, the dominant theory was the theory of miasmas, claiming that diseases such as plague or cholera spread in the form of bad air. In 1854, with the help of the Reverend Henry Whitehead, and by interviewing local residents, Snow established the geographical distribution of cases, and identified the source of the epidemic: a public water pump on Broad Street. While microbial research had not scientifically established the danger of the water pump, the mapping of the spread of the epidemic was sufficient to convince the authorities to close it.

Figure 8: John Snow, On the Mode of Communication of Cholera, 1855. Source: https://tabsoft.co/2y82nbf

However, as Vaughan (2018) points out, similar works can be found throughout England at the same time, such as Edwin Chadwick's Sanitary Map of the Town of Leeds, shown in Figure 9. On this map, Chadwick identifies two groups of dwellings: working-class houses, and shops, workhouses and artisans' houses. Coloured dots, indicating contagious diseases, seem to proliferate only in poor neighbourhoods. In particular, the map showed that the patients did not live in contiguous areas, but were scattered across the map, while being concentrated in poor neighbourhoods.

Figure 9: Edwin Chadwick, Sanitary Map of the Town of Leeds, 1842. Source: Vaughan (2018) and https://bit.ly/2zL3pM8

The maps had considerable public health impacts, and the zoning, formalized by Charles Booth, was the basis for spatial statistics, as it developed throughout the 20th century.

While the cartography of the city is now complex and rich, it should be noted that economists took a long time to move away from the "linear city" model, introduced in Hotelling (1929) and refined over time, as shown in Figure 10, pitting the residential part (RD – residential district) against the business centre (BD – business district). But that's another story…

Figure 10: the different forms of the linear city. Source: Fujita & Thisse (1997).

References:

Booth, Charles (1902) Life and Labour in London. 17 volumes.

Burgess, Ernest (1925). The Growth of the City: An Introduction to a Research Project.

Choay, Françoise (1980). La règle et le modèle, Paris, Seuil.

Fujita, Masahisa and Thisse, Jacques-François (1997). Économie géographique, problèmes anciens et nouvelles perspectives. Annales d'Économie et de Statistique, 45, 37-87.

Fureix, Emmanuel and Jarrige, François (2015). La modernité désenchantée : relire l'histoire du XIXe siècle français, Paris, La Découverte.

Hotelling, Harold (1929). Stability in Competition. The Economic Journal, 39, 41-57.

Latour, Bruno (1989). La science en action. Paris, La Découverte.

Maier, Jessica (2015). Rome, measured and imagined. The University of Chicago Press.

Ogien, Albert (2013). Désacraliser le chiffre dans l'évaluation du secteur public, Versailles, Éditions Quæ.

Söderström, Ola (1996) Paper cities : visual thinking in urban planning. Ecumene, 3, 249-281.

Vaughan, Laura (2018) Mapping Society: The Spatial Dimensions of Social Cartography. UCL Press.

Explanatory variable in an interval

Following a question asked this morning in class, here is a quick post to explain how to extract the lower and upper bounds when a covariate is observed as an interval, in R. Let us start by generating some data,

n=200
set.seed(123)
X=rnorm(n)
Y=2+X+rnorm(n,sd = .3)

Now assume that we no longer observe the true variable x but only a class (we will create eight classes, each containing one eighth of the observations)

Q=quantile(x = X,(0:8)/8)
Q[1]=Q[1]-.00001
Xcut=cut(X,breaks = Q)
B=data.frame(Y=Y,X=Xcut)

For instance, for the first value, we have

as.character(Xcut[1])
[1] "(-0.626,-0.348]"

To extract information about those bounds, we can use the following little function, which returns the lower bound, the upper bound, and the midpoint of the interval

extraire = function(x){
  ax=as.character(x)
  lower1 = as.numeric( sub("\\((.+),.*", "\\1", ax) )
  lower2 = as.numeric( sub("\\[(.+),.*", "\\1", ax) )
  upper1 = as.numeric( sub("[^,]*,([^]]*)\\]", "\\1", ax) )
  upper2 = as.numeric( sub("[^,]*,([^]]*)\\)", "\\1", ax) )
  lower = c(lower1,lower2)
  lower=lower[!is.na(lower)]
  upper = c(upper1,upper2)
  upper=upper[!is.na(upper)]
  mid   = (lower+upper)/2
  return(c(lower=lower,mid=mid,upper=upper))
}

We can check it on our first observation

extraire(Xcut[1])
 lower    mid  upper 
-0.626 -0.487 -0.348

Just to see, we can create three additional variables in our dataset (with those three pieces of information)

B2=Vectorize(function(i) extraire(Xcut[i]))(1:n)
B$lower=B2[1,]
B$mid  =B2[2,]
B$upper=B2[3,]

and we can compare four regressions: (i) regressing on our 8 classes, i.e. our 8 factor levels, (ii) regressing on the lower bound of the interval, (iii) on the "middle" value of the interval, (iv) on the upper bound

regF=lm(Y~X,data=B)
regL=lm(Y~lower,data=B)
regM=lm(Y~mid,data=B)
regU=lm(Y~upper,data=B)

We can compare the predictions of our four models

plot(B$Y,predict(regF),ylim=c(0,4))
points(B$Y,predict(regM),col="red")
points(B$Y,predict(regU),col="blue")
points(B$Y,predict(regL),col="purple")
abline(a=0,b=1,lty=2)

To go further, we can also compare the AICs of our models,

AIC(regF)
[1] 204.5653
AIC(regM)
[1] 201.1201
AIC(regL)
[1] 266.5246
AIC(regU)
[1] 255.0687

While using the lower or upper bound is not conclusive here, note that using the midpoint of the interval gives slightly better results than using the 8 factor levels.

Solving the chinese postman problem

A pre-Halloween post today. It actually started while I was in Barcelona: the kids wanted to go back to some store we had seen on the first day, in the Gothic quarter, and I could not remember where it was. I said to myself that it would take quite a while to walk every street of the neighborhood. And I discovered that this is actually an old problem. In 1962, Meigu Guan was interested in a postman delivering mail to a number of streets such that the total distance walked was as short as possible. How could the postman ensure that the distance walked was a minimum?

A very close notion is the concept of a traversable graph, one that can be drawn without taking the pen off the paper and without retracing the same edge. In such a case the graph is said to have an Eulerian trail (yes, from Euler's bridges problem). An Eulerian trail uses all the edges of a graph. For a graph to be Eulerian (to have a closed Eulerian trail), all the vertices must be of even degree; an open trail allows exactly two vertices of odd degree.

An algorithm for finding an optimal Chinese postman route is:

  1. List all odd vertices (see the short sketch right after this list).
  2. List all possible pairings of odd vertices.
  3. For each pairing find the edges that connect the vertices with the minimum weight.
  4. Find the pairings such that the sum of the weights is minimised.
  5. On the original graph add the edges that have been found in Step 4.
  6. The length of an optimal Chinese postman route is the sum of all the edges added to the total found in Step 4.
  7. A route corresponding to this minimum weight can then be easily found.
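With igraph, the first and third steps are essentially one-liners; a small sketch (on an undirected igraph object, say the network g1 defined below, once the packages are loaded),

# step 1: list the vertices with an odd degree
odd = V(g1)[degree(g1) %% 2 == 1]
# step 3: shortest-path distances between those odd vertices, used to weight the pairings
distances(g1, v = odd, to = odd)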

For the first steps, we can use the codes from Hurley & Oldford’s Eulerian tour algorithms for data visualization and the PairViz package. First, we have to load some R packages

require(igraph)
require(graph)
require(eulerian)
require(GA)

Then use the following function from stackoverflow,

make_eulerian = function(graph){
  info = c("broken" = FALSE, "Added" = 0, "Successfull" = TRUE)
  is.even = function(x){ x %% 2 == 0 }
  search.for.even.neighbor = !is.even(sum(!is.even(degree(graph))))
  for(i in V(graph)){
    set.j = NULL
    uneven.neighbors = !is.even(degree(graph, neighbors(graph,i)))
    if(!is.even(degree(graph,i))){
      if(sum(uneven.neighbors) == 0){
        if(sum(!is.even(degree(graph))) > 0){
          info["Broken"] = TRUE
          uneven.candidates <- !is.even(degree(graph, V(graph)))
          if(sum(uneven.candidates) != 0){
            set.j <- V(graph)[uneven.candidates][[1]]
          }else{
            info["Successfull"] <- FALSE
          }
        }
      }else{
        set.j <- neighbors(graph, i)[uneven.neighbors][[1]]
      }
    }else if(search.for.even.neighbor == TRUE & is.null(set.j)){
      info["Added"] <- info["Added"] + 1
      set.j <- neighbors(graph, i)[ !uneven.neighbors ][[1]]
      if(!is.null(set.j)){search.for.even.neighbor <- FALSE}
    }
    if(!is.null(set.j)){
      if(i != set.j){
        graph <- add_edges(graph, edges=c(i, set.j))
        info["Added"] <- info["Added"] + 1
      }
    }
  }
  (list("graph" = graph, "info" = info))}

Then, consider some network, with 12 nodes

g1 = graph(c(1,2, 1,3, 2,4, 2,5, 1,5, 3,5, 
4,7, 5,7, 5,8, 3,6, 6,8, 6,9, 9,11, 8,11, 
8,10, 8,12, 7,10, 10,12, 11,12), directed = FALSE)

To plot that network, use

V(g1)$name=LETTERS[1:12]
V(g1)$color=rgb(0,0,1,.4)
ly=layout.kamada.kawai(g1)
plot(g1,vertex.color=V(g1)$color,layout=ly)

Then we convert it into a traversable graph by adding 5 edges

eulerian = make_eulerian(g1)
eulerian$info
     broken       Added Successfull 
          0           5           1 
g = eulerian$graph

as shown below

ly=layout.kamada.kawai(g)
plot(g,vertex.color=V(g)$color,layout=ly)

We cut each of those 5 added edges in two, and therefore add 5 artificial nodes

A=as.matrix(as_adj(g))
A1=as.matrix(as_adj(g1))
newA=lower.tri(A, diag = FALSE)*A1+upper.tri(A, diag = FALSE)*A
for(i in 1:sum(newA==2)) newA = cbind(newA,0)
for(i in 1:sum(newA==2)) newA = rbind(newA,0)
s=nrow(A)
for(i in 1:nrow(A)){
  Aj=which(newA[i,]==2)
  if(!is.null(Aj)){
      for(j in Aj){
        newA[i,s+1]=newA[s+1,i]=1
        newA[j,s+1]=newA[s+1,j]=1
        newA[i,j]=1
        s=s+1
      }}}

We get the following graph, where all nodes now have an even degree!

newg=graph_from_adjacency_matrix(newA)
newg=as.undirected(newg)
V(newg)$name=LETTERS[1:17]
V(newg)$color=c(rep(rgb(0,0,1,.4),12),rep(rgb(1,0,0,.4),5))
ly2=ly
transl=cbind(c(0,0,0,.2,0),c(.2,-.2,-.2,0,-.2))
for(i in 13:17){
  j=which(newA[i,]>0)
  lc=ly[j,]
  ly2=rbind(ly2,apply(lc,2,mean)+transl[i-12,])
}
plot(newg,layout=ly2)

Our network is now the following (new nodes are small because actually, they don’t really matter, it’s just for computational reasons)

plot(newg,vertex.color=V(newg)$color,layout=ly2,
     vertex.size=c(rep(20,12),rep(0,5)),
     vertex.label.cex=c(rep(1,12),rep(.1,5)))

Now we can get the optimal path

n <- LETTERS[1:nrow(newA)]
g_2 <- new("graphNEL",nodes=n)
for(i in 1:nrow(newA)){
  for(j in which(newA[i,]>0)){
    g_2 <- addEdge(n[i],n[j],g_2,1)
  }}
etour(g_2,weighted=FALSE)
 [1] "A" "B" "D" "G" "E" "A" "C" "E" "H" "F" "I" "K" "H" "J" "G" "P" "J" "L" "K" "Q" "L" "H" "O" "F" "C"
[26] "N" "E" "B" "M" "A"

or

edg=attr(E(newg), "vnames")
ET=etour(g_2,weighted=FALSE)
parcours=trajet=rep(NA,length(ET)-1)
for(i in 1:length(parcours)){
  u=c(ET[i],ET[i+1])
  ou=order(u)
  parcours[i]=paste(u[ou[1]],u[ou[2]],sep="|")
  trajet[i]=which(edg==parcours[i])
}
parcours
 [1] "A|B" "B|D" "D|G" "E|G" "A|E" "A|C" "C|E" "E|H" "F|H" "F|I" "I|K" "H|K" "H|J" "G|J" "G|P" "J|P"
[17] "J|L" "K|L" "K|Q" "L|Q" "H|L" "H|O" "F|O" "C|F" "C|N" "E|N" "B|E" "B|M" "A|M"
trajet
 [1]  1  3  8  9  4  2  6 10 11 12 16 15 14 13 26 27 18 19 28 29 17 25 24  7 22 23  5 21 20

Let us try now on a real network of streets. Like Missoula, Montana.

I will not try to get the shapefile of the city; I will just try to replicate the photograph above.

If you look carefully, you will see a problem: 10 and 93 have an odd degree (3 here), so one strategy is to connect them (which explains the grey line).

But actually, to be more realistic, we start at 93, and we end at 10. Here is the optimal (shortest) route that goes through all the streets.

Now, we are ready for Halloween, to go through all the streets of the neighborhood!

Buying a (not too expensive) train ticket

Yesterday, I came across an article discussing train ticket prices in France (and the very high prices on some dates, e.g. during the winter break). Anyone used to taking the train knows that the price you pay depends on when you buy the ticket (and on how flexible you can be about the time, or even the date, of the trip). This summer, as part of a project for the Data Science pour l'Actuariat program, for my course, Pierre proposed to scrape the website https://www.oui.sncf/ to track the evolution of ticket prices.

The main difficulty is that https://www.oui.sncf/ relies on javascript for the input forms and for displaying the results, which prevents the classical use of the rvest package, for instance. In another post, I had mentioned the use of wdman to scrape the forest fires website. Here, Pierre proposed to go through casperjs, and I will broadly follow his strategy:

  • we will use casperjs, a browser emulator written in javascript. It emulates a real browser (the same engine as google chrome) and resolves the javascript embedded in the page
  • we will use a small bash script to launch the code

As for the bash code, it is just because I work on mac and linux. On a mac, you can do quite a lot of things in bash… Everything goes through variables, which can be defined and displayed, for example

which gives us the time. I can also define a variable, and increment it (handy for loops)

More interestingly, one can schedule tasks. To do so, we type

which opens an editor,

Here I ask it to run an R script every hour, at 13:50, 14:50, 15:50, etc. To ask for every day at 13:50, I type

we then save the instruction

We can see that the command will be launched every day: it is in the list of scheduled tasks

note that a script can be launched while passing arguments: here I tell it which object to work with (the first argument), and the second one is used to create a file (and to name it).

In short, it is fairly easy to launch codes automatically, for scraping. On windows, one goes through the task scheduler. A first script extracts the trains opened during the day,

and a second one, the trains opened for more than 24 hours,

Then, we use http://docs.casperjs.org/en/latest/ to code our web browser emulator (the code is online here).

This creates a lot of files, containing the information we want! I will skip the data cleaning step, and just present the information we can extract. In particular, Pierre stored the data between last March and June.

Here, we only consider a few main intercity routes,

library(readr)
library(rgdal)
nomFichier = tempfile(fileext = ".zip")
  download.file("https://freakonometrics.free.fr/CarteFrance.zip", destfile = nomFichier, mode = "wb")
  unzip(zipfile = nomFichier, exdir = getwd())
  download.file("https://freakonometrics.free.fr/LgTroncons.csv", destfile = "LgTroncons.csv", mode = "wb")
  download.file("https://freakonometrics.free.fr/CoordVilles.csv", destfile = "CoordVilles.csv", mode = "wb")
  fra0 = readOGR(dsn = paste(getwd(), "/CarteFrance", sep = ""), layer = "gadm36_FRA_0", verbose = F)
  LgTroncons = read_delim("LgTroncons.csv",";", escape_double = FALSE,locale = locale(decimal_mark = ","), trim_ws = TRUE)
CoordVilles = read_delim("CoordVilles.csv",";", escape_double = FALSE,locale = locale(decimal_mark = ","), trim_ws = TRUE)
NomsVilles = CoordVilles[CoordVilles$NOM_A_AFFICHER==1,]
library(ggplot2)
fr_df = fortify(fra0)
ggp = ggplot() + geom_polygon(data=fr_df, aes(long, lat,group = group), fill = "#3A8EBA") 
  ggp = ggp + geom_path(data = LgTroncons, aes(x = LONG, y = LAT, group = ID_TRONCON), colour= "#CC5500", lineend = "round", size=3) + geom_path(data = LgTroncons, aes(x=LONG, y=LAT, group = ID_TRONCON), colour="white", lineend = "round",  size=1.75)
  ggp = ggp + geom_point(data = NomsVilles, aes(x=LONG, y=LAT), colour = "blue", fill = "white", shape=21, size = NomsVilles$PT_SIZE, stroke = NomsVilles$PT_STROKE) + theme_void()
  ggp = ggp + geom_text(data = NomsVilles, aes(x=LONG, y=LAT, label=NOM),hjust = NomsVilles$H_AJUST,
vjust=NomsVilles$V_AJUST, colour = "white", fontface = "bold", size =3.25)+coord_fixed(1.47)
  ggp = ggp + ggtitle("Représentation des trajets étudiées") + theme(plot.title = element_text(hjust = 0.5, face="bold"))
  print(ggp)

We will work on the following routes,

Let us compare here the tickets for a Paris-Rennes trip, based on the information harvested over 3 months, for Friday evenings, in particular two Fridays of June 2018 (June 15 and 22). For those two days, there were 6 trains, between 5 pm and 8 pm. For June 15, the first three started at a price of 45€. For June 22, the first one started at 45€, but the next two were launched at 33€. Fairly quickly, prices went up to 45€.

We can look at the evolution of the price

If we look at several destinations, we observe very different behaviours,

  • for Le Mans, prices go up very fast, starting at 15€, rising to 18€ 10 hours after the opening, 21€ the next day, 27€ after one week. In one month, prices have almost doubled.
  • for Rennes, we observe a similar evolution, going from 20€ to 25€ within a few hours, and 40€ two weeks later!
  • for Toulouse on the other hand, the initial price is higher, 43€, goes up by 3€ in 10 hours, 6€ in 16 hours, and then stays at 49€

But for all destinations, prices are increasing.

or, graphically,

We can also draw a map. If we look at the opening prices, Lille, Le Mans and Rennes are cheap.

And the largest variations over 10 hours are observed for Nantes and Bordeaux.

Fun, isn't it?

Monte Carlo techniques to create counterfactuals

In the STT5100 course, last week, we saw how to use Monte Carlo simulations. The idea is that in statistics we observe a sample \{y_1,\cdots,y_n\}, and more generally, in econometrics, \{(y_1,\mathbf{x}_1),\cdots,(y_n,\mathbf{x}_n)\}. But let's get back to statistics (without covariates) to illustrate. We assume that the observations y_i are realizations of underlying random variables Y_i. We assume that the Y_i are i.i.d. random variables, with (unknown) distribution F_{\theta}. Consider here some estimator \widehat{\theta} – which is just a function of our sample, \widehat{\theta}=h(y_1,\cdots,y_n). So \widehat{\theta} is a real number. Then, in mathematical statistics, in order to derive properties of the estimator \widehat{\theta}, like a confidence interval, we must define \widehat{\theta}=h(Y_1,\cdots,Y_n), so that now \widehat{\theta} is a real-valued random variable. What is puzzling for students is that we use the same notation, and I have to agree, that's not very clever. So now, \widehat{\theta} is a random variable.

There are two strategies here. In classical statistics, we use probability theory to derive properties of \widehat{\theta} (the random variable): at least the first two moments, but if possible the distribution. An alternative is to go for computational statistics. We have only one sample, \{y_1,\cdots,y_n\}, and that's a pity. But maybe we can create another one, \{y_1^{(1)},\cdots,y_n^{(1)}\}, as realizations of F_{\theta}, and another one, \{y_1^{(2)},\cdots,y_n^{(2)}\}, another one, \{y_1^{(3)},\cdots,y_n^{(3)}\}, etc. From those counterfactuals, we can now get a collection of estimators, \widehat{\theta}^{(1)},\widehat{\theta}^{(2)}, \widehat{\theta}^{(3)}, etc. Instead of using mathematical tricks to calculate \mathbb{E}(\widehat{\theta}), we compute \frac{1}{k}\sum_{s=1}^k\widehat{\theta}^{(s)}. That's what we saw last Friday.
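A minimal toy illustration of that computational strategy (my own example, not taken from the course notes): the sample mean of an exponential sample, with k counterfactual samples,

set.seed(1)
n = 100 ; theta = 2 ; k = 1e4
hat_theta = rep(NA, k)
for(s in 1:k){
  y_s = rexp(n, rate = 1/theta)   # one counterfactual sample from F_theta
  hat_theta[s] = mean(y_s)        # the estimator computed on that sample
}
mean(hat_theta)                   # Monte Carlo approximation of E(hat_theta), close to theta = 2
var(hat_theta)                    # and of Var(hat_theta), close to theta^2/n = 0.04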

I also mentioned briefly that looking at densities is lovely, but not very useful to assess goodness of fit, to test for normality, for instance. In this post, I just wanted to illustrate this point. And actually, creating counterfactuals can be a good way to see it. Consider here the height of male students,

Davis=read.table(
  "http://socserv.socsci.mcmaster.ca/jfox/Books/Applied-Regression-2E/datasets/Davis.txt")
Davis[12,c(2,3)]=Davis[12,c(3,2)]
X=Davis$height[Davis$sex=="M"]

We can visualize its distribution (density and cumulative distribution)

u=seq(155,205,by=.5)
par(mfrow=c(1,2))
hist(X,col=rgb(0,0,1,.3))
lines(density(X),col="blue",lwd=2)
lines(u,dnorm(u,178,6.5),col="black")
Xs=sort(X)
n=length(X)
p=(1:n)/(n+1)
plot(Xs,p,type="s",col="blue")
lines(u,pnorm(u,178,6.5),col="black")

Since it looks like a normal distribution, we add the density of a Gaussian distribution on the left, and its cdf on the right. Why not test it properly? To be a little bit more specific, I do not want to test whether it is a Gaussian distribution, but whether it is a \mathcal{N}(178,6.5^2). In order to see if this distribution is relevant, one can use Monte Carlo simulations to create counterfactuals

hist(X,col=rgb(0,0,1,.3))
lines(density(X),col="blue",lwd=2)
  Y=rnorm(n,178,6.5)
  hist(Y,col=rgb(1,0,0,.3))
  lines(density(Y),col="red",lwd=2)
Ys=sort(Y)
plot(Xs,p,type="s",col="white",lwd=2,axes=FALSE,xlab="",ylab="",xlim=c(155,205))
polygon(c(Xs,rev(Ys)),c(p,rev(p)),col="yellow",border=NA)
lines(Xs,p,type="s",col="blue",lwd=2)
lines(Ys,p,type="s",col="red",lwd=2)

We can see on the left that it is hard to assess normality from the density (histogram, and also kernel-based density estimator). One can hardly think of a valid distance between two densities. But if we look at the graph on the right, we can compare the empirical cumulative distribution function \widehat{F} obtained from \{y_1,\cdots,y_n\} (the blue curve), and some counterfactual \widehat{F}^{(s)} obtained from \{y_1^{(s)},\cdots,y_n^{(s)}\} generated from F_{\theta_0} – where \theta_0 is the value we want to test. As suggested above, we can compute the yellow area, as in the Cramér-von Mises test, or the Kolmogorov-Smirnov distance.

d=rep(NA,1e5)
for(s in 1:1e5){
d[s]=ks.test(rnorm(n,178,6.5),"pnorm",178,6.5)$statistic
}
ds=density(d)
plot(ds,xlab="",ylab="")
dks=ks.test(X,"pnorm",178,6.5)$statistic
id=which(ds$x>dks)
polygon(c(ds$x[id],rev(ds$x[id])),c(ds$y[id],rep(0,length(id))),col=rgb(1,0,0,.4),border=NA)
abline(v=dks,col="red")

If we draw a large number of counterfactual samples (1e5 in the code above), we can visualize the distribution (here the density) of the distance used as a test statistic, \widehat{d}^{(1)}, \widehat{d}^{(2)}, etc., and compare it with the one observed on our sample, \widehat{d}. The proportion of simulated samples where the test statistic exceeds the one observed

mean(d>dks)
[1] 0.78248

is the computational version of the p-value

ks.test(X,"pnorm",178,6.5)
 
	One-sample Kolmogorov-Smirnov test
 
data:  X
D = 0.068182, p-value = 0.8079
alternative hypothesis: two-sided
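The same machinery works with a Cramér-von Mises type distance (the yellow area above); here is a sketch, reusing X and n from above, with a crude unnormalized version of the statistic (my own shortcut, not the usual cvm.test implementation),

cvm_stat = function(x, mu = 178, sigma = 6.5){
  xs = sort(x)
  k  = length(x)
  Fn = (1:k - .5)/k                        # empirical cdf evaluated at the sorted points
  sum((Fn - pnorm(xs, mu, sigma))^2)       # squared gaps, a Cramer-von Mises type distance
}
d2 = rep(NA, 1e4)
for(s in 1:1e4) d2[s] = cvm_stat(rnorm(n, 178, 6.5))
mean(d2 > cvm_stat(X))                     # simulated p-value, as before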

I thought about all that a couple of days ago, since I got invited for a panel discussion on "coding", and why "coding" helped me as a professor. And this is precisely why I like coding: in statistics, you either manipulate abstract objects, like random variables, or you actually use a few lines of code to create counterfactuals, and generate fake samples, to quantify uncertainty. The latter is interesting, because it helps to visualize complex quantities. I do not claim that maths is useless, but coding is really nice, as a starting point, to understand what we are talking about (which can be very useful when there is a lot of confusion about notations).

October, grant proposal season

In 2012, Danielle Herbert, Adrian Barnett, Philip Clarke and Nicholas Graves published an article entitled "On the time spent preparing grant proposals: an observational study of Australian researchers", whose conclusions were picked up by Nature under a more explicit title, "Australia's grant system wastes time"! In this study, they considered 3,727 grant applications sent to the National Health and Medical Research Council, and showed that each application represented 37 working days: "Extrapolating this to all 3,727 submitted proposals gives an estimated 550 working years of researchers' time (95% confidence interval, 513-589)". But at this time of year, when I have to write my own funding application, I find that losing 37 days of work is huge. Because it has become the norm! And somehow, it's sad.

Forget the crazy idea that I would rather, in fact, spend that time doing my research. The thought I had this morning was that it is rather sad that, in a Faculty of Science, mathematicians are asked to spend a considerable amount of time, comparable to that required of physicists or chemists, for often smaller amounts of funding… And I thought this could easily be verified. We start by retrieving the discipline codes

url="http://www.nserc-crsng.gc.ca/NSERC-CRSNG/FundingDecisions-DecisionsFinancement/ResearchGrants-SubventionsDeRecherche/ResultsGSC-ResultatsCSS_eng.asp"
download.file(url,destfile = "GSC.html")
library(XML)
tables=readHTMLTable("GSC.html")
GSC=tables[[1]]$V1
GSC=as.character(GSC[-(1:2)])
namesGSC=tables[[1]]$V2
namesGSC=as.character(namesGSC[-(1:2)])

We’re going to need a small function, to remove the $ and other symbols that pollute the data (and prevent them from being treated as numbers)

library(stringr)
Correction = function(x) as.numeric(gsub('[$,]', '', x))

We will now read the 12 pages, and harvest (we will just take the 2017 data, but we could go back a few years before)

grants= function(gsc){
     url=paste("http://www.nserc-crsng.gc.ca/NSERC-CRSNG/FundingDecisions-DecisionsFinancement/ResearchGrants-SubventionsDeRecherche/ResultsGSCDetail-ResultatsCSSDetails_eng.asp?Year=2017&GSC=",gsc,sep="")
    download.file(url,destfile = "GSC.html")
    library(XML)
    tables=readHTMLTable("GSC.html")
    X=as.character(tables[[1]]$"Awarded Amount")
    A=as.numeric(Vectorize(Correction)(X))
return(c(median(A),mean(A),as.numeric(quantile(A,(1:99)/100))))
}
M=Vectorize(grants)(GSC[1:12])

The average amounts of individual grants can be compared,

barplot(M[2,])

In mathematics, the average grant amount is $24400. If we normalize by this quantity, we obtain

barplot(M[2,]/M[2,8])

In other words, the average amount of an (individual) grant in chemistry (to pay for students, conferences, etc.) is twice that in mathematics, and it is 60% higher in physics than in maths…

We can also look at the median values (rather than the averages)

barplot(M[1,])

Here again, it is in mathematics that the amount is the lowest…

barplot(M[1,]/M[1,8])

in comparable proportions. If we think that the time spent writing should be proportional to the amount allocated, we should spend half as much time in math as in chemistry.

Cumulative distribution functions can also be plotted,

plot(M[3:101,8],(1:99)/100,type="s",xlim=range(M))
lines(M[3:101,5],(1:99)/100,type="s",col="red")
lines(M[3:101,4],(1:99)/100,type="s",col="blue")

with maths in black, physics in red, and chemistry in blue. What is surprising is the bottom part: a "bad" researcher in chemistry or physics will get more funding than the median researcher in mathematics…

Now that my intuition is confirmed, I have to get back to writing my proposal… and explain to my coauthors that I have to postpone some research projects because, well, you know…

Combining automatically factor levels in R

Each time we face real applications in an applied econometrics course, we have to deal with categorical variables. And the same question arises from students: how can we automatically combine factor levels? Is there a simple R function?

I did write a few blog posts on that topic over the past years. But so far, nothing satisfying. Let me write down a few lines about what could be done. And if someone wants to write a nice R function, that would be awesome. To illustrate the idea, consider the following (simulated) dataset

n=200
set.seed(1)
x1=runif(n)
x2=runif(n)
y=1+2*x1-x2+rnorm(n,0,.2)
LB=sample(LETTERS[1:10])
b=data.frame(y=y,x1=x1,
             x2=cut(x2,breaks=
             c(-1,.05,.1,.2,.35,.4,.55,.65,.8,.9,2),
             labels=LB))
str(b)
'data.frame':	200 obs. of  3 variables:
 $ y : num  1.345 1.863 1.946 2.481 0.765 ...
 $ x1: num  0.266 0.372 0.573 0.908 0.202 ...
 $ x2: Factor w/ 10 levels "I","A","H","F",..: 4 4 6 4 3 6 7 3 4 8 ...
table(b$x2)[LETTERS[1:10]]
 
 A  B  C  D  E  F  G  H  I  J 
11 12 23 34 23 36 12 32  3 14

There is one (continuous) dependent variable y, one continuous covariate x_1 and one categorical variable x_2, with ten levels here. We can plot the data using

plot(b$x1,y,col="white",xlim=c(0,1.1))
text(b$x1,y,as.character(b$x2),cex=.5)

The output of a linear regression yields the following predictions

for(i in 1:10){
p=function(x) predict(lm(y~x1+x2,data=b),newdata=data.frame(x1=x,x2=LETTERS[i]))
u=seq(-1,1.065,by=.01)
v=Vectorize(p)(u)
lines(u,v)}

The slope for x_1 is the same for all levels; we simply add a different constant for each level. As we can see, some levels are very close to each other, so it seems legitimate to combine them into a single category. Here is the output of the linear regression,

summary(lm(y~x1+x2,data=b))
Coefficients:
             Estimate Std. Error t value Pr(>|t|)    
(Intercept)  0.843802   0.119655   7.052 3.23e-11 ***
x1           1.992878   0.053838  37.016  < 2e-16 ***
x2A          0.055500   0.131173   0.423   0.6727    
x2H          0.009293   0.121626   0.076   0.9392    
x2F         -0.177002   0.121020  -1.463   0.1452    
x2B         -0.218152   0.130192  -1.676   0.0955 .  
x2D         -0.206970   0.121294  -1.706   0.0896 .  
x2G         -0.407417   0.129999  -3.134   0.0020 ** 
x2C         -0.526708   0.123690  -4.258 3.24e-05 ***
x2J         -0.664281   0.128126  -5.185 5.54e-07 ***
x2E         -0.816454   0.123625  -6.604 3.94e-10 ***
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
 
Residual standard error: 0.2014 on 189 degrees of freedom
Multiple R-squared:  0.8995,	Adjusted R-squared:  0.8942 
F-statistic: 169.1 on 10 and 189 DF,  p-value: < 2.2e-16
AIC(lm(y~x1+x2,data=b))
[1] -60.74443
BIC(lm(y~x1+x2,data=b))
[1] -21.16463

Here the reference category is "I". And it looks like we could actually combine that category with several others. One strategy here would be to select all categories that do not seem to be significantly different, and to run a (multiple) test

library(car)
linearHypothesis(lm(y~x1+x2,data=b), c("x2A = 0", "x2H = 0", "x2F = 0"))
 
Hypothesis:
x2A = 0
x2H = 0
x2F = 0
 
Model 1: restricted model
Model 2: y ~ x1 + x2
 
  Res.Df    RSS Df Sum of Sq      F  Pr(>F)    
1    192 8.4651                               
2    189 7.6654  3   0.79971 6.5726  3e-04 ***
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Individually, each of those three levels looks close to the reference; note however that the joint test is rejected here (p-value of 3e-04), so merging all four categories at once is not as well supported as the individual p-values suggest.

Here, we can see what's going on when we change the reference category (looping, actually, over all categories)

P=matrix(NA,nlevels(b$x2),nlevels(b$x2))
colnames(P)=rownames(P)=LETTERS[1:10]
plot(1:nlevels(b$x2),1:nlevels(b$x2),col="white",xlab="",ylab="",axes=F,xlim=c(0,10.5),
     ylim=c(0,10.5))
text(1:10,0,LETTERS[1:10])
text(0,1:10,LETTERS[1:10])
for(i in 1:nlevels(b$x2)){
#levels(b$x2)=LETTERS[1:10]
b$x2=relevel(b$x2,LETTERS[i])
p=summary(lm(y~x1+x2,data=b))$coefficients[-(1:2),4]
names(p)=substr(names(p),3,3)
P[LETTERS[i],names(p)]=p
p=P[LETTERS[i],]
idx=which(p>.05)
points(((1:10))[idx],rep(i,length(idx)),pch=1,cex=2)
idx=which(p>.1)
points(((1:10))[idx],rep(i,length(idx)),pch=19,cex=2)}

We are glad to see that it is symmetric: if "H" should be combined with "I", then "I" should also be combined with "H".

Here black points indicate p-values above 10%, and white circles p-values above 5%. This graph is actually hard to read… And actually, it reminds us of Bertin (1967).

Here, we can manually predefine some ordering (we will see below how it might be automated)

LETTERSord=c("I","A","H","F","B","D","G","C","J","E")
P=matrix(NA,nlevels(b$x2),nlevels(b$x2))
colnames(P)=rownames(P)=LETTERSord
plot(1:nlevels(b$x2),1:nlevels(b$x2),col="white",xlab="",ylab="",axes=F,xlim=c(0,10.5),
     ylim=c(0,10.5))
ct=c(3,3,2,1,1)
abline(v=.5+c(0,cumsum(ct)),lty=2)
abline(h=.5+c(0,cumsum(ct)),lty=2)
text(1:10,0,LETTERSord)
text(0,1:10,LETTERSord)
for(i in 1:nlevels(b$x2)){
  #levels(b$x2)=LETTERS[1:10]
  b$x2=relevel(b$x2,LETTERSord[i])
  p=summary(lm(y~x1+x2,data=b))$coefficients[-(1:2),4]
  names(p)=substr(names(p),3,3)
  P[LETTERSord[i],names(p)]=p
  p=P[LETTERSord[i],]
  idx=which(p>.05)
  points(((1:10))[idx],rep(i,length(idx)),pch=1,cex=2)
  idx=which(p>.1)
  points(((1:10))[idx],rep(i,length(idx)),pch=19,cex=2)
}

Here we get the following

It looks like we have our combined categories…

Actually, it is possible to use another strategy. We start from some level, say “A”. Then, we merge it with all non-significantly different levels. If “B” is not one of them, we use it as the new reference. Etc.

for(i in 1:nlevels(b$x2)){
  if(LETTERS[i]%in%levels(b$x2)){
  b$x2=relevel(b$x2,LETTERS[i])
  p=summary(lm(y~x1+x2,data=b))$coefficients[-(1:2),4]
  names(p)=substr(names(p),3,nchar(p))
  idx=which(p>.05)
  mix=c(LETTERS[i],names(p)[idx])
  b$x2=recode(b$x2, paste("c('",paste(mix,collapse = "','"),"')='",paste(mix,collapse = "+"),"'",sep=""))
}}

The final categories are

table(b$x2)
 
A+I+H B+D+F   C+G     E     J 
   46    82    35    23    14

with the following regression output

summary(lm(y~x1+x2,data=b))
 
Coefficients:
            Estimate Std. Error t value Pr(>|t|)    
(Intercept)  0.86407    0.03950  21.877  < 2e-16 ***
x1           1.99180    0.05323  37.417  < 2e-16 ***
x2B+D+F     -0.21517    0.03699  -5.817 2.44e-08 ***
x2C+G       -0.50545    0.04528 -11.164  < 2e-16 ***
x2E         -0.83617    0.05128 -16.305  < 2e-16 ***
x2J         -0.68398    0.06131 -11.156  < 2e-16 ***
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
 
Residual standard error: 0.2008 on 194 degrees of freedom
Multiple R-squared:  0.8975,	Adjusted R-squared:  0.8948 
F-statistic: 339.6 on 5 and 194 DF,  p-value: < 2.2e-16
AIC(lm(y~x1+x2,data=b))
[1] -66.76939
BIC(lm(y~x1+x2,data=b))
[1] -43.68117

This is consistent with the groups we got before. But actually, if we change the order, we can get different combinations. For instance, if we go from "J" to "A", instead of "A" to "J", we obtain

for(i in nlevels(b$x2):1){
  #levels(b$x2)=LETTERS[1:10]
  if(LETTERS[i]%in%levels(b$x2)){
  b$x2=relevel(b$x2,LETTERS[i])
  p=summary(lm(y~x1+x2,data=b))$coefficients[-(1:2),4]
  names(p)=substr(names(p),3,nchar(p))
  idx=which(p>.05)
  mix=c(LETTERS[i],names(p)[idx])
  b$x2=recode(b$x2, paste("c('",paste(mix,collapse = "','"),"')='",paste(mix,collapse = "+"),"'",sep=""))
}}
table(b$x2)
 
          E         G+C I+A+B+D+F+H           J 
         23          35         128          14

with different information criteria here

AIC(lm(y~x1+x2,data=b))
[1] -36.61665
BIC(lm(y~x1+x2,data=b))
[1] -16.82675

I guess it would be necessary to randomize the order in which we go through the levels. Last, but not least, one can use regression trees (even if this is not, per se, in the syllabus of the course). The problem is that there is another explanatory variable that might interfere. So I would suggest (1) fitting a linear model y=\beta_0+\beta_1x_1+u_i and computing the residuals \widehat{u}_i, then (2) running a regression tree to explain \widehat{u}_i with the categorical variable x_2 (I explained how trees are built when the explanatory variable is categorical in a previous post)

library(rpart)
library(rpart.plot)
b$e=residuals(lm(y~x1,data=b))
arbre=rpart(e~x2,data=b)
prp(arbre,type=2,extra=1)

Observe that the leaves contain the same groups as the ones we got before.

arbre
n= 200 
 
node), split, n, deviance, yval
      * denotes terminal node
 
1) root 200 22.563500  7.771561e-18  
  2) x2=G,C,J,E 72  4.441495 -3.232525e-01  
    4) x2=J,E 37  1.553520 -4.578492e-01 *
    5) x2=G,C 35  1.509068 -1.809646e-01 *
  3) x2=I,A,H,F,B,D 128  6.366628  1.818295e-01  
    6) x2=F,B,D 82  2.983381  1.048246e-01 *
    7) x2=I,A,H 46  2.030229  3.190993e-01 *

I guess that it should be possible to put all that in an R function, to suggest combinations of levels that might improve the regression. A minimal sketch is given below.
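Here is my quick attempt (a sketch, not a tested implementation): it applies the same merging rule as above to a random ordering of the levels, repeats the search a few times, and keeps the merge with the lowest AIC. It assumes the data frame b (with y, x1 and x2) used above, and that recode() comes from the car package.

library(car)
merge_levels_random = function(b, threshold=.05){
  b$x2 = factor(as.character(b$x2))
  # go through the initial levels in a random order
  for(l in sample(levels(b$x2))){
    if(l %in% levels(b$x2)){
      b$x2 = relevel(b$x2, l)
      co = summary(lm(y~x1+x2,data=b))$coefficients
      p  = co[-(1:2),4]
      names(p) = substr(rownames(co)[-(1:2)],3,nchar(rownames(co)[-(1:2)]))
      # merge the reference level with all levels not significantly different from it
      mix = c(l, names(p)[p > threshold])
      if(length(mix) > 1) b$x2 = recode(b$x2,
        paste("c('",paste(mix,collapse="','"),"')='",paste(mix,collapse="+"),"'",sep=""))
    }
  }
  list(groups=table(b$x2), aic=AIC(lm(y~x1+x2,data=b)))
}
best = NULL
for(s in 1:25){
  candidate = merge_levels_random(b)
  if(is.null(best) || candidate$aic < best$aic) best = candidate
}
best

Nothing guarantees that the best grouping is found, of course; it is just a randomized search around the heuristic used above.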

Handling missing values: replacing NAs with a constant?

A quick post to answer a question asked at the end of this morning's class (STT5100) by Jean-Pierre Liégeois, a young reader from the Var (to preserve some anonymity)

During my internship, when we had missing values, I was told to replace them with -1, and then to add an indicator for whether the variable equals -1. That way, we do not have to drop either a variable or any observations. Can we do that?

Let me formalize this a little. We will simulate some data, say x_1 and x_2, and then generate a response from a model of the form y=\beta_0+\beta_1x_1+\beta_2x_2+\varepsilon. A proportion \alpha of the x_1 values will be turned into NAs. What Jean-Pierre suggested is to replace the missing values with -1, and then to fit the model y=\beta_0+\beta_1x_1+\beta_{-1}\mathbf{1}(x_1=-1)+\beta_2x_2+\varepsilon. On the code side, it is fairly simple. By default, R's strategy is to drop the rows with missing values. If 50% of the x_1 values are missing, half of the rows are dropped,

n=1000
x1=runif(n)
x2=runif(n)
e=rnorm(n,.2)
y=1+2*x1-x2+e
alpha=.05
indice=sample(1:n,size=round(n*alpha))
base=data.frame(y=y,x1=x1,x2=x2)
base$x1[indice]=NA
reg=lm(y~x1+x2,data=base)

Instead of generating a single sample, let us simulate 10,000 of them, and look at the distribution of \widehat{\beta}_1,

m=10000
B=rep(NA,m)
for(s in 1:m){
  x1=runif(n)
  x2=runif(n)
  e=rnorm(n,.2)
  y=1+2*x1-x2+e
  alpha=.5
  indice=sample(1:n,size=round(n*alpha))
  base=data.frame(y=y,x1=x1,x2=x2)
  base$x1[indice]=NA
  reg=lm(y~x1+x2,data=base)
  B[s]=coefficients(reg)[2]
}
hist(B,probability=TRUE,col=rgb(0,0,1,.4),border="white",xlab="missing values = 50%")
lines(density(B),lwd=2,col="blue")
abline(v=2,lty=2,col="red")

Of course, with a lower proportion of missing values (say \alpha=5\%) we lose fewer observations, and the estimator therefore has a smaller variance.

Let us now try the strategy that consists in replacing the missing values with a fixed numerical value, and adding an indicator,

B=rep(NA,m)
for(s in 1:m){
  x1=runif(n)
  x2=runif(n)
  e=rnorm(n,.2)
  y=1+2*x1-x2+e
  alpha=.5
  indice=sample(1:n,size=round(n*alpha))
  base=data.frame(y=y,x1=x1,x2=x2)
  base$x1[indice]=-1
  reg=lm(y~x1+x2+I(x1==(-1)),data=base)
  B[s]=coefficients(reg)[2]
}
hist(B,probability=TRUE,col=rgb(0,0,1,.4),border="white")
lines(density(B),lwd=2,col="blue")
abline(v=2,lty=2,col="red")

Which, you will agree, does not change much... including when the proportion of missing values drops to 5%.

One may wonder what happens if the shift is no longer 1 but 10 (the choice is arbitrary anyway, since x_1 can be more or less dispersed: using -1 for a variable ranging between 0 and 1, or between 0 and 1000, is not quite the same thing). But no, for instance with still 5% of missing values, the picture is essentially the same.
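For completeness, here is the loop I have in mind for that experiment (the same simulation as above, with -1 simply replaced by -10, and 5% of missing values),

B=rep(NA,m)
for(s in 1:m){
  x1=runif(n)
  x2=runif(n)
  e=rnorm(n,.2)
  y=1+2*x1-x2+e
  alpha=.05
  indice=sample(1:n,size=round(n*alpha))
  base=data.frame(y=y,x1=x1,x2=x2)
  base$x1[indice]=-10
  reg=lm(y~x1+x2+I(x1==(-10)),data=base)
  B[s]=coefficients(reg)[2]
}
hist(B,probability=TRUE,col=rgb(0,0,1,.4),border="white")
lines(density(B),lwd=2,col="blue")
abline(v=2,lty=2,col="red")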

If we look at our sample, and in particular at the scatter plot of (x_1,y), we can see that

here, the missing values are chosen at random, completely independently of everything else,

x1=runif(n)
x2=runif(n)
e=rnorm(n,.2)
y=1+2*x1-x2+e
alpha=.3333333
indice=sample(1:n,size=round(n*alpha))
clr=rep("black",n)
clr[indice]="red"
plot(x1,y,col=clr)

(here with one third of the values missing, in red). But we could instead assume that the missing values tend to be the largest values of x_1, for instance,

x1=runif(n)
x2=runif(n)
e=rnorm(n,.2)
y=1+2*x1-x2+e
alpha=.3333333
indice=sample(1:n,size=round(n*alpha),prob = x1^3)
clr=rep("black",n)
clr[indice]="red"
plot(x1,y,col=clr)

We may wonder what this does to the estimator \widehat{\beta}_1.
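(A sketch of the simulation I have in mind here, presumably the same strategy as above, replacing the missing values with -1 and adding an indicator, except that the indices of the missing values are now drawn with probabilities proportional to x_1^3, with the same missing rate as in the scatter plot.)

B=rep(NA,m)
for(s in 1:m){
  x1=runif(n)
  x2=runif(n)
  e=rnorm(n,.2)
  y=1+2*x1-x2+e
  alpha=.3333333
  indice=sample(1:n,size=round(n*alpha),prob=x1^3)
  base=data.frame(y=y,x1=x1,x2=x2)
  base$x1[indice]=-1
  reg=lm(y~x1+x2+I(x1==(-1)),data=base)
  B[s]=coefficients(reg)[2]
}
hist(B,probability=TRUE,col=rgb(0,0,1,.4),border="white")
lines(density(B),lwd=2,col="blue")
abline(v=2,lty=2,col="red")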

It does not change much, but if we look closely, there is more variance. Last try: what happens if the variables x_1 and x_2 are now correlated,

B=rep(NA,m)
library(mnormt)
r=.8
S = matrix(c(1,r,r,1),2,2)
for(s in 1:m){
  x=rmnorm(n,varcov = S)
  x1=pnorm(x[,1])
  x2=pnorm(x[,2])
  e=rnorm(n,.2)
  y=1+2*x1-x2+e
  alpha=.2
  indice=sample(1:n,size=round(n*alpha),prob = x1^3)
  base=data.frame(y=y,x1=x1,x2=x2)
  base$x1[indice]=-1
  reg=lm(y~x1+x2+I(x1==(-1)),data=base)
  B[s]=coefficients(reg)[2]
}
hist(B,probability=TRUE,col=rgb(0,0,1,.4),border="white")
lines(density(B),lwd=2,col="blue")
abline(v=2,lty=2,col="red")

This time, the estimator is biased (by about 10% in this numerical example). Clearly, this technique is not very convincing...

I should add that this method is not equivalent to the first one (dropping the rows with missing values), even though the distributions of the estimators are close,

set.seed(1)
x=rmnorm(n,varcov = S)
x1=pnorm(x[,1])
x2=pnorm(x[,2])
e=rnorm(n,.2)
y=1+2*x1-x2+e
alpha=.2
indice=sample(1:n,size=round(n*alpha),prob = x1^3)
base=data.frame(y=y,x1=x1,x2=x2)
base$x1[indice]=-1
reg1=lm(y~x1+x2+I(x1==(-1)),data=base)
coefficients(reg1)
      (Intercept)                x1                x2 I(x1 == (-1))TRUE 
        1.0988005         1.7454385        -0.5149477         3.1000668 
base$x1[indice]=NA
reg2=lm(y~x1+x2,data=base)
coefficients(reg2)
(Intercept)          x1          x2 
  1.1123953   1.8612882  -0.6548206

As I said (during the discussion that followed the class), a more promising approach is imputation. The idea is to predict a value for the missing x_1's. One could be tempted to use \overline{x}_1, the average of the observed x_1's. But we know that the missing values are precisely the large values of x_1 here, so we should be able to do better! We also know that x_1 and x_2 are correlated here. Positively correlated, even. In other words, if x_2 is large, we know that the (unobserved) x_1 must have been large. The simplest option is to fit a linear model, x_1=\alpha_0+\alpha_2x_2+\eta, calibrated on the non-missing values, and then to use \widehat{x}_1=\widehat{\alpha}_0+\widehat{\alpha}_2x_2 for the missing values. It is simplistic, but why not? We then estimate the model on this new data set.

for(s in 1:m){
  x=rmnorm(n,varcov = S)
  x1=pnorm(x[,1])
  x2=pnorm(x[,2])
  e=rnorm(n,.2)
  y=1+2*x1-x2+e
  alpha=.2
  indice=sample(1:n,size=round(n*alpha),prob = x1^3)
  base=data.frame(y=y,x1=x1,x2=x2)
    base$x1[indice]=NA
    reg0=lm(x1~x2,data=base[-indice,])
    base$x1[indice]=predict(reg0,newdata=base[indice,])
  reg=lm(y~x1+x2,data=base)
  B[s]=coefficients(reg)[2]
}
hist(B,probability=TRUE,col=rgb(0,0,1,.4),border="white")
lines(density(B),lwd=2,col="blue")
abline(v=2,lty=2,col="red")

On the numerical example, we obtain

base$x1[indice]=NA
reg0=lm(x1~x2,data=base[-indice,])
base$x1[indice]=predict(reg0,newdata=base[indice,])
reg3=lm(y~x1+x2,data=base)
coefficients(reg3)
(Intercept)          x1          x2 
  1.1593298   1.8612882  -0.6320339

At least this method managed to correct the bias...

And if we look carefully, the coefficient of x_1 is exactly the same as with the first method, the one that simply drops the rows with missing values! This actually makes sense: the imputed values of x_1 are an exact linear combination of the constant and x_2, so (by a Frisch-Waugh argument) the imputed rows bring no additional information about \beta_1; only the intercept, the coefficient of x_2 and the standard errors change.

summary(reg3)
 
Coefficients:
            Estimate Std. Error t value Pr(>|t|)    
(Intercept)  1.15933    0.06649  17.435  < 2e-16 ***
x1           1.86129    0.21967   8.473  < 2e-16 ***
x2          -0.63203    0.20148  -3.137  0.00176 ** 
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
 
Residual standard error: 1.051 on 997 degrees of freedom
Multiple R-squared:  0.1094,	Adjusted R-squared:  0.1076 
F-statistic: 61.23 on 2 and 997 DF,  p-value: < 2.2e-16 
 
summary(reg2) 
 
Coefficients:
            Estimate Std. Error t value Pr(>|t|)    
(Intercept)  1.11240    0.06878  16.173  < 2e-16 ***
x1           1.86129    0.21666   8.591  < 2e-16 ***
x2          -0.65482    0.20820  -3.145  0.00172 ** 
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
 
Residual standard error: 1.037 on 797 degrees of freedom
  (200 observations deleted due to missingness)
Multiple R-squared:  0.1223,	Adjusted R-squared:   0.12 
F-statistic:  55.5 on 2 and 797 DF,  p-value: < 2.2e-16

Instead of a linear regression, we can use another imputation method, for instance taking the average of the (observed) x_1 values of the k individuals whose x_2 is closest to the x_2 of the individual with a missing x_1:

kpp=function(i,basena,k=5){
  x2=basena$x2[i]
  sb=basena[!is.na(basena$x1),]
  idx=rank(abs(sb$x2-x2))
  mean(sb[which(idx<=k),"x1"])
}

On our simulated data set, we obtain

base$x1[indice]=NA
base0=base
for(j in indice) base0$x1[j]=kpp(j,base0,k=5)
reg4=lm(y~x1+x2,data=base0)
coefficients(reg4)
(Intercept)          x1          x2 
   1.197944    1.804220   -0.806766

If we look at what this gives over our 10,000 simulations, we get the following (it is a bit slow, since I coded it very quickly, and not in an optimal way at all)

for(s in 1:m){
  x=rmnorm(n,varcov = S)
  x1=pnorm(x[,1])
  x2=pnorm(x[,2])
  e=rnorm(n,.2)
  y=1+2*x1-x2+e
  alpha=.2
  indice=sample(1:n,size=round(n*alpha),prob = x1^3)
  base=data.frame(y=y,x1=x1,x2=x2)
  base$x1[indice]=NA
  base0=base
    for(j in indice) base0$x1[j]=kpp(j,base0,k=5)
  reg=lm(y~x1+x2,data=base0)
  B[s]=coefficients(reg)[2]
}
hist(B,probability=TRUE,col=rgb(0,0,1,.4),border="white")
lines(density(B),lwd=2,col="blue")
abline(v=2,lty=2,col="red")

The bias seems smaller here than without imputation... in other words, imputation methods look more robust to me than the strategy that consists in replacing the NAs with an arbitrary value and adding an indicator in the regression.

« In any statistic, the inaccuracy of the number is compensated by the precision of the decimals »

The statistician and economist Alfred Sauvy is remembered for having coined, in 1952, the term “third world” (tiers-monde). But he has also been credited with the following sentence: « dans toute statistique, l'inexactitude du nombre est compensée par la précision des décimales », that is, in any statistic, the inaccuracy of the number is compensated by the precision of the decimals. I must have heard it when I was a student, and it has followed me everywhere since.

I was thinking about it again the other day, when Mathieu Gallard mentioned on Twitter the following graph (showing the popularity of the French president, in France, during the 18 months following the election).

The tweet said (among other things) “… is, at this stage of his five-year term, slightly less popular than …”. As a reminder, these “popularity” curves are built from polls, each based on roughly 1000 respondents (“for the sample size, we are always between 950 and 970 interviews”, Mathieu told me). Anyone with some memories of a statistics course will remember that with 1000 respondents, a 2-point difference is rarely significant. But more generally, given the margin of error, I wondered why the curves were not smoothed. That would avoid sterile discussions about a 2-point variation, for instance.
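To fix ideas, the order of magnitude of the margin of error with 1000 respondents is easy to compute (the usual plus or minus two standard errors around a proportion close to 50%),

n=1000
p=.5
2*sqrt(p*(1-p)/n)
[1] 0.03162278

that is, roughly plus or minus 3.2 points; a 2-point gap between two such estimates is therefore well within the noise.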

If we go back to the raw data (thanks Mathieu), we have

rate=read.csv2("http://freakonometrics.free.fr/satisfaction.csv")
plot(rate[,2],type="b",ylim=c(.2,.75),xlim=c(0,20),pch=19)
lines(rate[,3],type="b",col="red")
lines(rate[,4],type="b",col="blue")
lines(rate[,5],type="b",col="dark green")
text(18.15,rate[17,2],"EM")
text(18.15,rate[17,3],"FH",col="red")
text(18.15,rate[17,4],"NS",col="blue")
text(18.15,rate[17,5],"JC",col="dark green")

which gives the same graph as in the tweet (even if I do not dare to interpolate the missing values linearly; there are two of them in my file).
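(If one did want to interpolate, R's approx() function from the stats package would do it; a quick sketch, for the EM series, not used in what follows,

p = rate$EM
idx = which(!is.na(p))
p_interp = approx(x=idx, y=p[idx], xout=1:length(p))$y

where p_interp is the series with the internal gaps filled linearly.)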

Let us take the most recent curve, Emmanuel Macron's (with, on top of that, the missing values, to spice things up a bit), and add pointwise confidence intervals.

plot(rate$EM,type="b",ylim=c(.2,.5))
p=rate$EM
n=length(p)
arrows(1:n,p-2/sqrt(1000)*sqrt(p*(1-p)),1:n,p+2/sqrt(1000)*sqrt(p*(1-p)),code=3,angle=90,length=.1,col="blue")

Personally, I would have liked (1) to smooth all this, and (2) to add something that looks like confidence bands. But with measurement errors (which is how we can interpret the fact that the points come from a poll), I am not quite sure what to do. I tried the following approach: I draw points at random within the confidence intervals, then I smooth this new set of points. And I repeat that a thousand times,

library(mgcv)
Y=matrix(NA,73,1000)
for(s in 1:1000){
  x=(1:n)[!is.na(p)]
  pna=p[!is.na(p)]
  ps=rnorm(length(x),pna,1/sqrt(1000)*sqrt(pna*(1-pna)))
  b=data.frame(x=x,y=ps)
  reg=gam(y~s(x),data=b)
  yp=predict(reg,newdata=data.frame(x=seq(0,18,by=.25)))
  Y[,s]=yp
  if(s<100) lines(seq(0,18,by=.25),yp,col="light blue")
} 
lines(seq(0,18,by=.25),apply(Y,1,mean),col="red",lwd=2)
lines(seq(0,18,by=.25),apply(Y,1,function(x) quantile(x,.95)),col="red",lty=2)
lines(seq(0,18,by=.25),apply(Y,1,function(x) quantile(x,.05)),col="red",lty=2)

We can see that our smoothed curve is realistic, as are the pseudo confidence bands around it. To obtain these three curves, we can use the following function

courbe=function(j=1){
p=rate[,1+j]
n=length(p)
Y=matrix(NA,73,1000)
for(s in 1:1000){
  x=(1:n)[!is.na(p)]
  pna=p[!is.na(p)]
  ps=rnorm(length(x),pna,1/sqrt(1000)*sqrt(pna*(1-pna)))
  b=data.frame(x=x,y=ps)
  reg=gam(y~s(x),data=b)
  yp=predict(reg,newdata=data.frame(x=seq(0,18,by=.25)))
  Y[,s]=yp
} 
data.frame(
x=seq(0,18,by=.25),
pred=apply(Y,1,mean),
upr=apply(Y,1,function(x) quantile(x,.975)),
lwr=apply(Y,1,function(x) quantile(x,.025)))
}

On the four columns of our table, this gives

plot(rate[,2],type="b",ylim=c(0,.8),xlim=c(0,20),col="white")
Y=courbe(4)
polygon(c(Y$x,rev(Y$x)),c(Y$upr,rev(Y$lwr)),col=rgb(0,1,0,.4),border=NA)
lines(Y$x,Y$pred,col="dark green",lwd=2)
text(18.65,Y$pred[73],"JC",col="dark green")
Y=courbe(3)
polygon(c(Y$x,rev(Y$x)),c(Y$upr,rev(Y$lwr)),col=rgb(0,0,1,.4),border=NA)
lines(Y$x,Y$pred,col="blue",lwd=2)
text(18.65,Y$pred[73],"NS",col="blue")
Y=courbe(2)
polygon(c(Y$x,rev(Y$x)),c(Y$upr,rev(Y$lwr)),col=rgb(1,0,0,.4),border=NA)
lines(Y$x,Y$pred,col="red",lwd=2)
text(18.65,Y$pred[73],"FH",col="red")
Y=courbe(1)
polygon(c(Y$x,rev(Y$x)),c(Y$upr,rev(Y$lwr)),col="grey",border=NA)
lines(Y$x,Y$pred,col="black",lwd=2)
text(18.65,Y$pred[73],"EM",col="black")

Why don't the polling institutes, which produce the popularity curves, show this kind of curve? They are, in my opinion, just as accurate as the ones they publish to the nearest hundredth, playing on a precision that the underlying uncertainty should not allow...

Regression on a categorical variable and ANOVA

This morning, in the STT5100 course, we discussed regression on a categorical variable. In particular, we started by looking at what the regression without the intercept would give, and how to interpret it. We used the data set with the students' weights and heights, and the gender variable.

Davis=read.table(
  "http://socserv.socsci.mcmaster.ca/jfox/Books/Applied-Regression-2E/datasets/Davis.txt")
Davis[12,c(2,3)]=Davis[12,c(3,2)]
Davis=data.frame(Y=Davis$weight * 2.204622,
                 X1=Davis$sex)

We wanted to estimate the model y_i=\beta_F\boldsymbol{1}_F(x_i)+\beta_M\boldsymbol{1}_M(x_i)+\varepsilon_i. We had seen that we could use the matrix form

 X=cbind(Davis$X1=='F',Davis$X1=='M') 
 Y=Davis$Y

since the matrix \mathbf{X}^T\mathbf{X} is invertible (once the constant has been removed)

 solve(t(X)%*%X)
            [,1]       [,2]
[1,] 0.008928571 0.00000000
[2,] 0.000000000 0.01136364

and the least squares estimator is therefore (classically) \widehat{\mathbf{\beta}} = (\mathbf{X}^T\mathbf{X})^{-1}\mathbf{X}^T\mathbf{y}

 solve(t(X)%*%X) %*% (t(X)%*%Y)
         [,1]
[1,] 125.4272
[2,] 167.3258

which indeed matches R's output,

 reg=lm(Y~0+X1,data=Davis)
 summary(reg)
 
Coefficients:
    Estimate Std. Error t value Pr(>|t|)    
X1F  125.427      1.960   64.00   <2e-16 ***
X1M  167.326      2.211   75.68   <2e-16 ***

Let us now consider the two subpopulations, the women's weights and the men's weights,

x=Y[X[,1]==1]
y=Y[X[,2]==1]
nx=length(x)
ny=length(y)

We had seen in class that the \widehat{\mathbf{\beta}}'s have a very simple interpretation, since \widehat{{\beta}}_M = \frac{1}{n_M}\sum_{i:x_i=M} y_i; in other words, \widehat{{\beta}}_M is the average weight of the men. And indeed

 mean(y)
[1] 167.3258

In the end, this is very natural, or intuitive.

We can now ask about the standard error of \widehat{{\beta}}_M. Intuitively, we would expect the variance of the estimator of a mean, that is, here,

 sqrt(var(y)/ny)
[1] 2.794391
 sqrt(1/(ny-1)*sum( (y-mean(y))^2 )/ny)
[1] 2.794391

since, as a reminder, \text{Var}[\overline{y}]=\frac{\text{Var}(y)}{n}. As we saw in the multiple regression model, the variance of the estimator of \mathbf{\beta} is proportional to \sigma^2, the overall variance of the residuals (this is the homoscedasticity assumption! the two groups must have the same variance). So let us compute the natural estimator of \sigma^2

 s2=1/(nx+ny-2)*(sum( (x-mean(x))^2 )+sum( (y-mean(y))^2))
 sqrt(s2/ny)
[1] 2.210863

and indeed, we recover the value given in the regression table,

 sqrt(s2/nx)
[1] 1.959721

(and the same for the other coefficient).

We then looked at the regression as it is classically done in R: we keep the intercept, and drop one of the indicator variables (which then becomes the “reference level”).

 X=cbind(1,Davis$X1=='M')

Here again, the model becomes identifiable, and we obtain

 solve(t(X)%*%X) %*% (t(X)%*%Y)
          [,1]
[1,] 125.42724
[2,]  41.89855

We had noted that this second value has an interpretation, as a differential with respect to the reference level,

mean(y)-mean(x)
[1] 41.89855

The regression output now becomes

 reg2=lm(Y~X1,data=Davis)
 summary(reg2)
 
Coefficients:
            Estimate Std. Error t value Pr(>|t|)    
(Intercept)  125.427      1.960   64.00   <2e-16 ***
X1M           41.899      2.954   14.18   <2e-16 ***

And as I said, the Student t-test here corresponds to a test of equality between the average weight of the men and that of the women. And indeed, if we run the test, we see that the difference is significant, as expected (for the same reason as above, we assume the same variance in the two groups),

 t.test(Y[X[,1]==1],Y[X[,2]==1],var.equal=TRUE)
 
	Two Sample t-test
 
data:  Y[X[, 1] == 1] and Y[X[, 2] == 1]
t = -6.4475, df = 286, p-value = 4.826e-10
alternative hypothesis: true difference in means is not equal to 0
95 percent confidence interval:
 -30.62603 -16.30035
sample estimates:
mean of x mean of y 
 143.8626  167.3258

I was, at first, a bit puzzled that the p-values were different. Looking more closely, I suspect this comes from the redefinition of X above: with X=cbind(1,Davis$X1=='M'), the first column is the column of ones, so Y[X[,1]==1] selects all 200 students and not only the women (hence the mean of 143.86, which is the overall average, and the 286 degrees of freedom); with the two-indicator matrix X=cbind(Davis$X1=='F',Davis$X1=='M'), the two-sample t-test and the regression t-test coincide. In any case, the p-values are extremely small here, so it matters little. And indeed, if we make the two variables independent (for instance by shuffling the variable \mathbf{y}), everything matches. Let us set

 Davis$Y=sample(Davis$Y)

which amounts to permuting all the observations of the dependent variable (but not the others!). The regression now gives

 reg2=lm(Y~X1,data=Davis)
 summary(reg2)
 
Call:
lm(formula = Y ~ X1, data = Davis)
 
Residuals:
    Min      1Q  Median      3Q     Max 
-57.458 -22.184  -5.512  17.809 118.912 
 
Coefficients:
            Estimate Std. Error t value Pr(>|t|)    
(Intercept) 143.4382     2.7820   51.56   <2e-16 ***
X1M           0.9645     4.1940    0.23    0.818    
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
 
Residual standard error: 29.44 on 198 degrees of freedom
Multiple R-squared:  0.000267,	Adjusted R-squared:  -0.004782 
F-statistic: 0.05289 on 1 and 198 DF,  p-value: 0.8183

in other words, gender is no longer significant, with a p-value of 81.8%, well above 5%. If we now run the comparison-of-means test on the two subgroups, we obtain

 Y=Davis$Y
 t.test(Y[X[,1]==1],Y[X[,2]==1],var.equal=TRUE)
 
	Two Sample t-test
 
data:  Y[X[, 1] == 1] and Y[X[, 2] == 1]
t = -0.22998, df = 198, p-value = 0.8183
alternative hypothesis: true difference in means is not equal to 0
95 percent confidence interval:
 -9.235209  7.306165
sample estimates:
mean of x mean of y 
 143.4382  144.4027

and this test also has a p-value of 81.8%. The two tests are therefore rigorously equivalent.

Public transport in Paris

To continue the series of posts on visualization and the manipulation of open data, I will reuse some of Tony's code, from the Data Science pour l'Actuariat program, to visualize transport in Paris (and the Paris region). If I have time in the coming days, I will do an analysis of the metro network, compared with other large European cities. To start, we retrieve the data, provided by the open data site of the STIF, the Île-de-France transport authority (https://opendata.stif.info). The data are split by half-year, which makes the code a bit heavy... but it is not really any more complicated.

library(dplyr)
library(stringr)
library(ggplot2)
library(xlsx)
library(ggmap)

We start by reading all the files online,

nbvalid = list()
download.file("https://opendata.stif.info/explore/dataset/emplacement-des-gares-idf-data-generalisee/download/?format=csv&timezone=Europe/Berlin&use_labels_for_header=true","Gares.csv")
gares = read.csv("Gares.csv", sep=";", header = TRUE)
distr_pers = list()
download.file("https://opendata.stif.info/explore/dataset/validations-sur-le-reseau-ferre-profils-horaires-par-jour-type-1er-sem/download/?format=csv&timezone=Europe/Berlin&use_labels_for_header=true","horaires1.csv")
distr_pers$S1 = read.csv("horaires1.csv", sep=";", header = TRUE)
download.file("https://opendata.stif.info/explore/dataset/validations-sur-le-reseau-ferre-profils-horaires-par-jour-type-2e-sem/download/?format=csv&timezone=Europe/Berlin&use_labels_for_header=true","horaires2.csv")
distr_pers$S2 = read.csv("horaires2.csv", sep=";", header = TRUE)
download.file("https://opendata.stif.info/explore/dataset/validations-sur-le-reseau-ferre-nombre-de-validations-par-jour-1er-sem/download/?format=csv&timezone=Europe/Berlin&use_labels_for_header=true","validations1.csv")
nbvalid$S1 = read.csv("validations1.csv", sep=";", header = TRUE)
download.file("https://opendata.stif.info/explore/dataset/validations-sur-le-reseau-ferre-nombre-de-validations-par-jour-2e-sem/download/?format=csv&timezone=Europe/Berlin&use_labels_for_header=true","validations2.csv")
nbvalid$S2 = read.csv("validations2.csv", sep=";", header = TRUE)
download.file("https://freakonometrics.free.fr/Correspondance_NOM.csv","Correspondance_NOM.csv")
Cooresp = read.csv("Correspondance_NOM.csv", sep=";", header = TRUE)

We then need to define the school holiday dates, for 2017,

Vacances = list()
Vacances$Noel = append(seq(from = as.Date("01/01/2017", format="%d/%m/%Y"), to=as.Date("02/01/2017", format="%d/%m/%Y"), by=1),seq(from = as.Date("24/12/2017", format="%d/%m/%Y"), to=as.Date("31/12/2017", format="%d/%m/%Y"), by=1))
Vacances$Ski = seq(from = as.Date("04/02/2017", format="%d/%m/%Y"), to=as.Date("19/02/2017", format="%d/%m/%Y"), by=1)
Vacances$Printemps = seq(from = as.Date("02/04/2017", format="%d/%m/%Y"), to=as.Date("17/04/2017", format="%d/%m/%Y"), by=1)
Vacances$Ete = seq(from = as.Date("08/07/2017", format="%d/%m/%Y"), to=as.Date("03/09/2017", format="%d/%m/%Y"), by=1)
Vacances$Toussaint = seq(from = as.Date("21/10/2017", format="%d/%m/%Y"), to=as.Date("05/11/2017", format="%d/%m/%Y"), by=1)
Vacances$All=Reduce(append,Vacances)

Then a bit of cleaning is needed, to deal with duplicated stations (for instance when both the RER and the metro stop there), and to recover their spatial location (latitude and longitude),

gares$NOM_LONG = as.character(gares$NOM_LONG)
DD = (gares$NOM_LONG[duplicated(gares$NOM_LONG)])
i = (gares$NOM_LONG %in% DD) & gares$MODE_=="Metro"
gares$NOM_LONG[i] = paste(gares$NOM_LONG[i],"M", sep="-")
i = (gares$NOM_LONG %in% DD) & gares$MODE_=="RER"
gares$NOM_LONG[i] = paste(gares$NOM_LONG[i],"R", sep="-")
gares$NOM_LONG=factor(gares$NOM_LONG)
 
a=as.character(gares$Geo.Point)
gares$Y=as.numeric(str_extract_all(a,"^[0-9]+.[0-9]+"))
gares$X=as.numeric(str_extract_all(a,"[0-9]+.[0-9]+$"))

We then count the number of ticket validations, per station,

Manip_nbvalid = function(Data,DD,gares) {
  i=grep("^[a-zA-Z]+",as.character(Data$NB_VALD))
  Data$NB_VALD[i]=as.integer(5)
  i=is.na(Data$NB_VALD)
  Data$NB_VALD[i]=as.integer(5)
  Data$LIBELLE_ARRET=as.character(Data$LIBELLE_ARRET)
  i=(Data$LIBELLE_ARRET %in% DD) & Data$CODE_STIF_TRNS=="100"
  Data$LIBELLE_ARRET[i]=paste(Data$LIBELLE_ARRET[i],"M", sep="-")
  i=(Data$LIBELLE_ARRET %in% DD) & Data$CODE_STIF_TRNS=="800"
  Data$LIBELLE_ARRET[i]=paste(Data$LIBELLE_ARRET[i],"R", sep="-")
 
  for (i in seq(1,nrow(Cooresp))) { Data$LIBELLE_ARRET=gsub(as.character(Cooresp$nbval[i]),as.character(Cooresp$gares[i]),Data$LIBELLE_ARRET)
  }
gares$NOM_LONG=as.character(gares$NOM_LONG)
Data=dplyr::left_join(Data,gares[,c("NOM_LONG","X","Y")],by=c("LIBELLE_ARRET"="NOM_LONG"))
  Data=Data[is.na(Data$CODE_STIF_ARRET)==FALSE,]
  Data=Data[Data$CODE_STIF_ARRET!="ND",]
  Data$NB_VALD=as.integer(as.character(Data$NB_VALD))
  Data$JOUR=as.Date(Data$JOUR)
  Data$CODE_STIF_TRNS=factor(Data$CODE_STIF_TRNS)
  Data$CODE_STIF_RES=factor(Data$CODE_STIF_RES)
  Data$CODE_STIF_ARRET=factor(Data$CODE_STIF_ARRET)
  Data$LIBELLE_ARRET=factor(Data$LIBELLE_ARRET)
  Data$ID_REFA_LDA=factor(Data$ID_REFA_LDA)
  Data$CATEGORIE_TITRE=factor(Data$CATEGORIE_TITRE)
  Data$JOURSEM=weekdays(Data$JOUR)  
  return(Data)
}
nbvalid=lapply(nbvalid, Manip_nbvalid,DD=DD,gares=gares)

We thus have all the counts, for all the stations. We then split them by hourly time slot,

Manip_dist_pers = function(DataFrame) {
  DataFrame=DataFrame[(DataFrame$TRNC_HORR_60)!="ND",]
  DataFrame$TRNC_HORR_60=factor(DataFrame$TRNC_HORR_60, levels = c("0H-1H", "1H-2H", "2H-3H", "3H-4H", "4H-5H", "5H-6H", "6H-7H", "7H-8H", "8H-9H", "9H-10H", "10H-11H", "11H-12H", "12H-13H", "13H-14H", "14H-15H", "15H-16H", "16H-17H", "17H-18H", "18H-19H", "19H-20H", "20H-21H", "21H-22H", "22H-23H", "23H-0H")) 
  DataFrame=DataFrame[(DataFrame$CODE_STIF_ARRET)!="ND",]
  DataFrame$CODE_STIF_ARRET=factor(DataFrame$CODE_STIF_ARRET)
DataFrame$TRANCHE=str_extract(as.character(DataFrame$TRNC_HORR_60),"^[0-9]{1,2}")
  return(DataFrame)
}
distr_pers=lapply(distr_pers, Manip_dist_pers)

We can then retrieve the distribution of validations, per day,

distr_JOURV=list()
distr_JOURV$S1 = nbvalid$S1 %>% group_by(JOUR, JOURSEM,CATEGORIE_TITRE) %>% summarise(NB_VALD=sum(NB_VALD))
distr_JOURV$S2 = nbvalid$S2 %>% group_by(JOUR, JOURSEM,CATEGORIE_TITRE) %>% summarise(NB_VALD=sum(NB_VALD))
distr_JOURV$Y=rbind(distr_JOURV$S1,distr_JOURV$S2)
distr_JOUR=list()
distr_JOUR$S1 = nbvalid$S1 %>% group_by(JOUR, JOURSEM) %>% summarise(NB_VALD=sum(NB_VALD))
distr_JOUR$S2 = nbvalid$S2 %>% group_by(JOUR, JOURSEM) %>% summarise(NB_VALD=sum(NB_VALD))
distr_JOUR$Y=rbind(distr_JOUR$S1,distr_JOUR$S2)
distr_JOUR_Station=list()
distr_JOUR_Station$S1 = nbvalid$S1 %>% group_by(JOUR, JOURSEM,CODE_STIF_ARRET,LIBELLE_ARRET) %>% summarise(NB_VALD=sum(NB_VALD), X=max(X), Y=max(Y))
distr_JOUR_Station$S2 = nbvalid$S2 %>% group_by(JOUR, JOURSEM,CODE_STIF_ARRET,LIBELLE_ARRET) %>% summarise(NB_VALD=sum(NB_VALD), X=max(X), Y=max(Y))
Manip_dist_Jour = function(DataFrame) {
  DataFrame$JOURSEM=factor(DataFrame$JOURSEM,levels = c("lundi","mardi","mercredi","jeudi","vendredi","samedi","dimanche"))
  DataFrame$TypeJ=character(nrow(DataFrame))
  DataFrame$TypeJ[DataFrame$JOUR %in% Vacances$Ete]="Ete"
  DataFrame$TypeJ[DataFrame$JOUR %in% Vacances$Noel]="Noel"
  DataFrame$TypeJ[DataFrame$JOUR %in% Vacances$Ski]="Ski"
  DataFrame$TypeJ[DataFrame$JOUR %in% Vacances$Printemps]="Printemps"
  DataFrame$TypeJ[DataFrame$JOUR %in% Vacances$Toussaint]="Toussaint"
  DataFrame$TypeJ[DataFrame$JOUR %in% Vacances$All == FALSE]="HorsVacances"
  DataFrame$CAT_JOUR=character(nrow(DataFrame))
  DFr=list()
  ii=(DataFrame$JOURSEM!="samedi" & DataFrame$JOURSEM!="dimanche") & DataFrame$TypeJ!="HorsVacances"
  DataFrame$CAT_JOUR[ii]="JOVS"
  DFr$JOVS$Data = DataFrame[ii,]
  DFr$JOVS$Nom="Jours ouvrés Vacances Scolaires"
  ii=(DataFrame$JOURSEM!="samedi" & DataFrame$JOURSEM!="dimanche") & DataFrame$TypeJ=="HorsVacances"
  DataFrame$CAT_JOUR[ii]="JOHV"
  DFr$JOHV$Data = DataFrame[ii,]
  DFr$JOHV$Nom="Jours ouvrés Hors Vacances Scolaires"
  ii=DataFrame$JOURSEM=="samedi" & DataFrame$TypeJ!="HorsVacances"
  DataFrame$CAT_JOUR[ii]="SAVS"
  DFr$SAVS$Data = DataFrame[ii,]
  DFr$SAVS$Nom="Samedi VS"
  ii=DataFrame$JOURSEM=="samedi" & DataFrame$TypeJ=="HorsVacances"
  DataFrame$CAT_JOUR[ii]="SAHV"
  DFr$SAHV$Data = DataFrame[ii,]
  DFr$SAHV$Nom="Samedi HV"
  ii=DataFrame$JOURSEM=="dimanche"
  DataFrame$CAT_JOUR[ii]="DIJFP"
  DFr$DIJFP$Data = DataFrame[ii,]
  DFr$DIJFP$Nom="Dimanche"
  return(list("TypeJ"=DFr,"Distr"=DataFrame))
}
res=Manip_dist_Jour(distr_JOUR$Y)
distr_TypeJ=res$TypeJ
distr_JOUR$Y=res$Distr
res=Manip_dist_Jour(distr_JOURV$Y)
distr_TypeJV=res$TypeJ
distr_TypeJ_Station=list()
res=Manip_dist_Jour(distr_JOUR_Station$S1)
distr_TypeJ_Station$S1=res$TypeJ
distr_JOUR_Station$S1=res$Distr
res=Manip_dist_Jour(distr_JOUR_Station$S2)
distr_TypeJ_Station$S2=res$TypeJ
distr_JOUR_Station$S2=res$Distr
rm(res)

We can then draw all sorts of graphs, for instance the number of validations per day, between January 1st and December 31st, as a function of the day of the week,

g0 = ggplot(distr_JOUR$Y, aes(x=JOUR, y=NB_VALD, color = JOURSEM)) + geom_point()
g0 = g0 + labs(title="Nombres de validations chaque jours de 2017", x="Date", y="Nombre de validations")
g0

We can see the very sharp drop on weekdays during the summer holidays. Instead of looking at the whole year, we can look within the day,

Fct_FqH = function(DataFrame,distr_pers) {
DataFrame=dplyr::full_join(DataFrame,distr_pers[,c("CAT_JOUR","CODE_STIF_ARRET","pourc_validations","TRANCHE","TRNC_HORR_60")],by=c("CODE_STIF_ARRET"="CODE_STIF_ARRET","CAT_JOUR"="CAT_JOUR"))
  DataFrame$NB_VALD=DataFrame$NB_VALD*DataFrame$pourc_validations
  return(DataFrame)
}
distr_JOUR_Station$S1=Fct_FqH(distr_JOUR_Station$S1, distr_pers$S1)
distr_JOUR_Station$S2=Fct_FqH(distr_JOUR_Station$S2, distr_pers$S2)
distr_JOUR_Station$Y=rbind(distr_JOUR_Station$S1,distr_JOUR_Station$S2)
distr_JOUR_Station$Y=distr_JOUR_Station$Y[is.na(distr_JOUR_Station$Y$NB_VALD)==FALSE,]

We can then draw a graph by hourly time slot, for given types of days, for instance weekdays outside the school holidays (per hour, we have a boxplot here),

Graphique_HOR = function(DataFrame,TypeJ,NomJ) {
  # Graphique de la distribution de l'affluence par tranche horaire et type de jours
  g1 = ggplot(DataFrame[DataFrame$CAT_JOUR==TypeJ,], aes(x=TRNC_HORR_60, y=pourc_validations, color = TRNC_HORR_60,las=2)) + geom_boxplot() + ylim(c(0,100))
  g1 = g1 + labs(title=paste(c("Distribution des validations par tranche horaire ",NomJ), sep="", collapse = ""), x="Jours", y="Nombre de validations") +
  theme(axis.text.x= element_text(size = 8, angle = 45))
  g1
}
Graphique_HOR(distr_JOUR_Station$Y,"JOHV","Jours ouvrés Hors Vacances Scolaires")

or on Saturdays,

Graphique_HOR(distr_JOUR_Station$Y,"SAHV","Samedi Hors Vacances Scolaires")

We can try a bit of cartography. As with many metro/bus systems around the world, we often only have access to the entry nodes of the network (and not the exit nodes). But it is still interesting, and very informative,

get_Paris1 = get_map(c(2.3448688,48.8613029), zoom = 11)
Paris1 = ggmap(get_Paris1)

Per station and per hour, we can look at the number of ticket validations,

Median_Valid = distr_JOUR_Station$Y %>% group_by(CAT_JOUR, LIBELLE_ARRET, X, Y) %>% summarise(NB_VALD=median(NB_VALD))
Median_Valid_Station = distr_JOUR_Station$Y %>% group_by(CAT_JOUR, TRNC_HORR_60,LIBELLE_ARRET, X, Y) %>% summarise(NB_VALD=median(NB_VALD))
 
Carte_Densite = function(Nom,Carte,TypeJ,HOR,DataFrame) {
if (HOR=="") {
    ii=DataFrame$CAT_JOUR==TypeJ
    NomSave=paste("Densité des validations",Nom,TypeJ)
  }
  else {
    ii=DataFrame$CAT_JOUR==TypeJ & DataFrame$TRNC_HORR_60==HOR
    NomSave=paste("Densité des validations",Nom,TypeJ,HOR)
  }
  U=DataFrame[ii,]
  n=round(log10(median(U$NB_VALD)))-1
  n=max(1,10^n)
  Nb_Repete_Stations=ceiling(U$NB_VALD/n)
  U$Size_Stations=U$NB_VALD/max(U$NB_VALD)
  Z=U[rep(1:nrow(U),Nb_Repete_Stations),]
  Carte_A= Carte + geom_point(aes(x=X,y=Y),data=Z,col="coral", size=10*Z$Size_Stations) +
    geom_density2d(data = Z, aes(x=X,y=Y), size = 0.5) + 
    stat_density2d(data = Z, aes(x=X,y=Y,fill = ..level.., alpha = ..level..),size = 0.01, bins = 16, geom = "polygon") +
    scale_fill_gradient(low = "chartreuse", high = "red",guide = FALSE) + 
    scale_alpha(range = c(0, 0.3), guide = FALSE) + ggtitle(NomSave) +
    theme(axis.title.x = element_blank(), axis.title.y = element_blank(), axis.text.x= element_blank(), axis.text.y = element_blank())
 
  suppressWarnings(print(Carte_A))
}

For instance, if we look at the ticket validation points between 5 and 6 in the morning, we get

L=levels(Median_Valid_Station$TRNC_HORR_60)
Carte_Densite("dans la petite ceinture",Paris1,"JOHV",L[6],Median_Valid_Station)

with a lot of the activity in the nearby suburbs. Later in the day, between 11 a.m. and noon, the validations are more concentrated in the heart of Paris, with La Défense on the left and Saint-Denis to the north,

Carte_Densite("dans la petite ceinture",Paris1,"JOHV",L[12],Median_Valid_Station)

At the end of the day, it is Paris, and especially La Défense, that stand out,

Carte_Densite("dans la petite ceinture",Paris1,"JOHV",L[19],Median_Valid_Station)

Fun, isn't it?