Some sort of Otto Neurath (isotype picture) map

Yesterday evening, I was walking in Budapest, and I saw a nice map in some sort of Otto Neurath (isotype) style. It was hand-made, but I thought it should be possible to produce it in R, automatically.

A few years ago, Baptiste Coulmont published a nice blog post on the osmar package, which can be used to import OpenStreetMap objects (polygons, lines, etc.) into R. We can start from there. More precisely, consider the city of Douai, in France.

The code to read information from OpenStreetMap is the following

library(osmar)
library(sp)
src <- osmsource_api()
# a 1000m x 1000m bounding box, centred on Douai
bb <- center_bbox(3.07758808135, 50.37404355, 1000, 1000)
ua <- get_osm(bb, source = src)

We can extract a lot of things: buildings, parks, churches, roads, etc. Since OpenStreetMap tags come as key-value pairs, we will use two functions, one matching on tag keys, and one on tag values

# extract, as sp objects, the ways whose tag *key* is in vc
listek = function(vc, type = "polygons"){
  nat_ids <- find(ua, way(tags(k %in% vc)))
  nat_ids <- find_down(ua, way(nat_ids))
  nat <- subset(ua, ids = nat_ids)
  as_sp(nat, type)
}
# same thing, but matching on the tag *value*
listev = function(vc, type = "polygons"){
  nat_ids <- find(ua, way(tags(v %in% vc)))
  nat_ids <- find_down(ua, way(nat_ids))
  nat <- subset(ua, ids = nat_ids)
  as_sp(nat, type)
}

For instance, to get rivers, use
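something like the following sketch (the tag values are an assumption, e.g. waterway=riverbank; W is the name used below for the water layer)

W <- listev(c("river", "riverbank"))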


and to get buildings
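a plausible call, matching on the building tag key (M is the name used below for the houses),

M <- listek("building")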


We can also get churches
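with, for instance (the tag values here are assumptions; C is the name used below),

C <- listev(c("church", "place_of_worship"))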


but also train stations, airports, universities, hospitals, etc. It is also possible to get streets, or roads
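For instance, plausible calls for the other layers used below (all tag names are assumptions), with the roads extracted as lines rather than polygons:

P <- listev("park")        # leisure=park (assumed)
U <- listev("university")  # amenity=university (assumed)
T <- listev("station")     # railway=station (assumed); note that T masks the TRUE shorthand
H <- listek("highway", type = "lines")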


but it will be more difficult to use afterwards, so let’s forget about those.

We can check that we have everything we need

plot(M, col = "grey")  # buildings in the background
if(!is.null(B)) plot(B, add = TRUE, col = "red")     # B: yet another layer, extracted as above
if(!is.null(C)) plot(C, add = TRUE, col = "purple")  # churches
if(!is.null(T)) plot(T, add = TRUE, col = "red")     # train station

Now, let us consider a rectangular grid. If there is a river in a cell, I want a river; if there is a church, I want a church; etc. Since there will be one (and only one) picture per cell, there will be priorities between the categories. But first, we have to check the intersections between the cells of our grid and the OpenStreetMap polygons.
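A minimal sketch of such a grid (the number of cells is arbitrary, and the extent is only chosen to roughly match the bounding box above); it defines the vx, vy, nx, ny and h used below:

nx <- 26; ny <- 26                       # number of grid nodes (arbitrary)
vx <- seq(3.0706, 3.0846, length = nx)   # longitudes
vy <- seq(50.3695, 50.3785, length = ny) # latitudes
h  <- diff(vx)[1] / 2                    # half-width of a cell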

identification = function(xy, h, PLG){
  pb1 = Polygon(cbind(xy[1] + c(-h, -h, h, h, -h), xy[2] + c(-h, h, h, -h, -h)))  # the square cell
  Pb1 = list(Polygons(list(pb1), ID = 1))
  SPb1 = SpatialPolygons(Pb1, proj4string = CRS("+proj=longlat +ellps=WGS84 +datum=WGS84 +no_defs +towgs84=0,0,0"))
  rgeos::gIntersection(SPb1, PLG)  # NULL when the cell does not intersect the layer
}

and then, we identify the content of each cell, as follows

whichidtf = function(xy, h){
  label = "EMPTY"
  # later tests override earlier ones, so the church has the highest priority
  if(!is.null(identification(xy, h, M))) label = "HOUSE"
  if(!is.null(identification(xy, h, P))) label = "PARK"
  if(!is.null(identification(xy, h, W))) label = "WATER"
  if(!is.null(identification(xy, h, U))) label = "UNIVERSITY"
  if(!is.null(identification(xy, h, C))) label = "CHURCH"
  return(label)
}
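For instance, for the cell centred on the point used in center_bbox above (the output depends on what the extraction returned):

whichidtf(c(3.07758808135, 50.37404355), h)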

Let us use colored rectangles to make sure it works

plot(range(vx), range(vy), type = "n", xlab = "", ylab = "")
for(i in 1:(nx-1)){
  for(j in 1:(ny-1)){
    lb = whichidtf(c(vx[i], vy[j]), h)
    if(lb=="HOUSE")  rect(vx[i]-h, vy[j]-h, vx[i]+h, vy[j]+h, col="grey")
    if(lb=="PARK")   rect(vx[i]-h, vy[j]-h, vx[i]+h, vy[j]+h, col="green")
    if(lb=="WATER")  rect(vx[i]-h, vy[j]-h, vx[i]+h, vy[j]+h, col="blue")
    if(lb=="CHURCH") rect(vx[i]-h, vy[j]-h, vx[i]+h, vy[j]+h, col="purple")
  }}

As a first draft, let us agree that it works. For the pictures, I borrowed icons from https://fontawesome.com/. For instance, we can have a tree

library(png)
tree <- readPNG("tree.png")

Unfortunately, the color is not good (it is black), but that is easy to fix using the RGB decomposition from that package
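A minimal sketch of the fix, assuming the icon is a four-channel RGBA array (the rev_* names are those used below, and the same recipe, with other colors, gives rev_home, rev_water and rev_church):

rev_tree <- tree
rev_tree[,,1] <- 0    # red channel
rev_tree[,,2] <- 0.6  # green channel: the tree becomes green
rev_tree[,,3] <- 0    # blue channel (the alpha channel, [,,4], keeps the shape)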


We can do the same for houses, churches, and water

water  <- readPNG("angle-double-up.png")
home   <- readPNG("home.png")
church <- readPNG("church.png")

and that’s almost it. We can then add them on the map

for(i in 1:(nx-1)){
  for(j in 1:(ny-1)){
    lb = whichidtf(c(vx[i], vy[j]), h)
    if(lb=="HOUSE")  rasterImage(rev_home,   vx[i]-h*.8, vy[j]-h*.8, vx[i]+h*.8, vy[j]+h*.8)
    if(lb=="PARK")   rasterImage(rev_tree,   vx[i]-h*.9, vy[j]-h*.8, vx[i]+h*.9, vy[j]+h*.8)
    if(lb=="WATER")  rasterImage(rev_water,  vx[i]-h*.8, vy[j]-h*.8, vx[i]+h*.8, vy[j]+h*.8)
    if(lb=="CHURCH") rasterImage(rev_church, vx[i]-h*.8, vy[j]-h*.8, vx[i]+h*.8, vy[j]+h*.8)
  }}

Nice, isn’t it? (at least as a first draft, done during the lunch break of the R conference in Budapest, today)


Reading text automatically

It is now very easy to (automatically) read text from a pdf file. For instance, consider the program of the conference we had yesterday, and today, in Rennes

> library(pdftools)
> scan_pdf <- pdf_text("http://crem.univ-rennes1.fr/Documents/Docs_sem_divers/2017_03_10-11_JJD/JDD_prog.pdf")
> cat(scan_pdf)
Journées Jeunes Docteurs
Programme du jeudi 9 mars 2017
Faculty of Economics - Rennes - Amphi Henri Krier
9h- 9h30 - Accueil
9h30-10h15 :      Présentation du CREM, de la faculté et des activités de recherche liées du ou laboratoire
10h15-10h50 :     Emmanuel LORENZON (Université de Bordeaux, GREThA)
Collusion with a rent seeking agency in sponsored search auctions
10h50-11h25 :     Julien BERTHOUMIEU (Université de Bordeaux, GREThA)
The Impact of “At-the-Border” and “Behind-the-Border” Policies on Cost-Reducing Research
and Development
Co-écrit avec Antoine Bouët

(etc.) As you can see, it works well, even in French, with its accented letters. It works here because the pdf actually contains text: it was generated properly, by OpenOffice, rather than scanned.

But sometimes, we only have a scanned version of a letter,

or just a picture with some typed text. I will not mention handwriting, because it is much more complex.

The other day, my friend Fleur showed me a picture, and some very simple lines of code,

> library('tesseract')
> pic1="https://f-origin.hypotheses.org/wp-content/blogs.dir/253/files/2017/03/pic1.png"
> text_fr <- ocr(pic1, engine = tesseract("fra"))
> cat(text_fr)
Près de 14.400 décès

Si [épidémie de grippe est un phénomène récurrent. celle de
2016—2017 présente plusieurs spécificités. outre sa virulence :
une survenue plus précoce que d‘habitude. une activité
modérée en médecine ambulatoire. mais un impact fort en
milieu hospitalier.

It looks like we have been able to extract typed text from a picture! I wanted to check. I have to admit, first of all, that the installation on a Linux machine is tricky: one has to install leptonica first, and then follow some guidelines to install tesseract (see also Artem’s advice). It took me some time, but I have been able to install the package.

The first important step is to download the trained data for French (because the text in my picture is in French)

> library('tesseract')
> tesseract_download("fra")

Then, I tried with the picture that Fleur sent me (the picture was embedded in the body of the message)

> pic2="https://f-origin.hypotheses.org/wp-content/blogs.dir/253/files/2017/03/pic2.png"
> text_fr <- ocr(pic2, engine = tesseract("fra"))
> cat(text_fr)
Près de 14400 decès

s. mm….agw«… ………«…m ……
a……u…u Dhs—ur; ;pmum…. ;: u……
… W»: »… w…q…na… … ……
…… ………u…_ …… mm……
…… nwm/u

… … mm…—mg…»— sa…… su a.…….…
: :mmræwesdæ ; ; m…decnflwtülflws WWW…
un»… M on m…… . … … .. m...… wma:
.,… … V, …… … …;………yg…gn…
…… pe- le…samemeuuœwpwv m…

Clearly, something went wrong here. When I got that output, I thought that I had not set up the French data properly. But that was not the issue: as described in that post (in French), a clean picture is necessary to read it properly.

And indeed, if we zoom in on our picture (the first one, used by Fleur to show me that package), we have

while for the second one – with a lower resolution – we have

A scan of typed text must have a high enough resolution… And you have to admit that it is awesome.
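When the resolution is borderline, a possible workaround (just a sketch, assuming the magick package is installed; the file name is hypothetical) is to upscale and clean the image before running the OCR:

library(magick)
library(tesseract)
img <- image_read("scan.png")                    # a hypothetical low-resolution scan
img <- image_resize(img, "300%")                 # upscale
img <- image_quantize(img, colorspace = "gray")  # drop the colors
tmp <- tempfile(fileext = ".png")
image_write(img, tmp)
cat(ocr(tmp, engine = tesseract("fra")))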

The good thing is that I have to work with a judge, in France, to assess the quality of experts. And since most of the reports are typed, and then scanned, I am glad to have such a function. I just have to make sure that the resolution is high enough…