Tag Archives: snow

Snow in Montréal (Canada)

Winter started a bit more than one month ago… but we have already experienced many snow storms… and there is still a lot of snow in the gardens and in the streets.

I was wondering if it was that unusual, but apparently not. Compared with last year, it is +50% (for the first months of winter, until the end of January), but it is comparable with previous years.

Yes, with a simple loop, we can easily extract the data from the official website https://climat.meteo.gc.ca/ (but not too far back: even 2015 contains a lot of missing observations). For this month, we use

url = "https://climat.meteo.gc.ca/climate_data/daily_data_f.html?StationID=51157&timeframe=2&StartYear=1840&EndYear=2023&Day=30&Year=2023&Month=1#"
library(XML)
library(stringr)
download.file(url,destfile = "M.html")
tables=readHTMLTable("M.html")
k = which(tables[[1]]$`JOUR `=="Somme")
neige = tables[[1]]$`Neige tot. Definitioncm `[k]
x = as.numeric(sub(",", ".", strsplit(neige, "LegendCarer")[[1]][1], fixed = TRUE))

and then we loop, and store the number we are looking for in a data frame (yes, we have to convert “50,8LegendCarer^” into the appropriate numerical value, which would here be 50.8),

## one row per month, going back from January 2023 to October 2014
## (the last three rows are the last three months of 2014)
D = data.frame(annee = c(2023,rep(2022:2015,each=12),rep(2014,3)),
               mois = c(1,rep(12:1,8),12,11,10),
               lab = neige, snow = x)
for(i in 2:nrow(D)){
  y = D$annee[i]
  m = D$mois[i]
  url = paste("https://climat.meteo.gc.ca/climate_data/daily_data_f.html?StationID=51157&timeframe=2&StartYear=1840&EndYear=2023&Day=30&Year=",y,"&Month=",m,"#",sep="")
  download.file(url,destfile = "M.html")
  tables=readHTMLTable("M.html")
  k = which(tables[[1]]$`JOUR `=="Somme")
  neige = tables[[1]]$`Neige tot. Definitioncm `[k]
  x = as.numeric(sub(",", ".", strsplit(neige, "LegendCarer")[[1]][1], fixed = TRUE))
  D[i,3] = neige
  D[i,4] = x
}

Here are the most recent months

> head(D)
  annee mois              lab snow
1  2023    1 50,8LegendCarer^ 50.8
2  2022   12             63,0 63.0
3  2022   11             14,6 14.6
4  2022   10              0,0  0.0
5  2022    9              0,0  0.0
6  2022    8              0,0  0.0
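
To check the comparison mentioned at the beginning (roughly +50% compared with last winter, over the first months of the winter), here is a small sketch of the kind of aggregation we can run on D (the hiver variable is mine, it is not in the original code),

## winter starting in year y: from October of year y to January of year y+1
D$hiver = ifelse(D$mois >= 7, D$annee, D$annee - 1)
## total snowfall from October to January, for each winter
aggregate(snow ~ hiver, data = D[D$mois %in% c(10, 11, 12, 1), ], FUN = sum)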

Of course, we would need some code to plot it; but here, I mainly wanted to keep track of the code used to extract the meteorological data…
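
For completeness, here is a possible sketch for such a plot (my code, not the one used for the original figures): the cumulative snowfall over each winter, month by month, with the current winter (2022-2023) in red,

D$snow[is.na(D$snow)] = 0
D$hiver = ifelse(D$mois >= 7, D$annee, D$annee - 1)
## index the months within a winter, so that July = 1, ..., June = 12
D$mh = ifelse(D$mois >= 7, D$mois - 6, D$mois + 6)
plot(NULL, xlim = c(1, 12), ylim = c(0, max(tapply(D$snow, D$hiver, sum))),
     xlab = "month of the winter (July = 1)", ylab = "cumulative snowfall (cm)")
for(h in unique(D$hiver)){
  d = D[D$hiver == h, ]
  d = d[order(d$mh), ]
  lines(d$mh, cumsum(d$snow), col = ifelse(h == 2022, "red", "grey"),
        lwd = ifelse(h == 2022, 2, 1))
}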

 

Interactive Maps for John Snow’s Cholera Data

This week, in Istanbul, for the second training on data science, we’ve been discussing classification and regression models, but also visualisation, including maps. And we did have a brief introduction to the leaflet package,

devtools::install_github("rstudio/leaflet")
require(leaflet)

To see what can be done with that package, we will use, one more time, John Snow’s cholera dataset, discussed in previous posts (one with a visualisation on a Google Maps background, and a second one on an OpenStreetMap background),

library(sp)
library(rgdal)
library(maptools)
setwd("/cholera/")
## read the shapefile with the location of the deaths
deaths <- readShapePoints("Cholera_Deaths")
df_deaths <- data.frame(deaths@coords)
coordinates(df_deaths)=~coords.x1+coords.x2
## the coordinates are in the British National Grid (EPSG:27700):
## declare that projection, then convert to longitude/latitude (WGS84)
proj4string(df_deaths)=CRS("+init=epsg:27700") 
df_deaths = spTransform(df_deaths,CRS("+proj=longlat +datum=WGS84"))
df=data.frame(df_deaths@coords)
lng=df$coords.x1
lat=df$coords.x2
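
As a side note, rgdal and maptools have since been retired from CRAN, so the import part could now be done with the sf package instead. A rough equivalent (just a sketch, assuming the same Cholera_Deaths shapefile) would be

library(sf)
## read the shapefile; if it comes without a .prj file, set the British
## National Grid CRS first, e.g. with st_set_crs(deaths_sf, 27700)
deaths_sf = st_read("Cholera_Deaths.shp")
## reproject to longitude/latitude (WGS84), and extract the coordinates
deaths_sf = st_transform(deaths_sf, crs = 4326)
xy = st_coordinates(deaths_sf)
lng = xy[, "X"]
lat = xy[, "Y"]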

Once the leaflet package is installed, we can use it in the RStudio console (which is what we will do here), within R Markdown documents, or within Shiny applications. But because of restrictions on this blog (rules of hypotheses.org), there will only be copies of my screen here. If you run the code in RStudio, you will get interactive maps in the viewer window.

First step: to load a map, initially centered on London, use

m = leaflet()%>% addTiles() 
m %>% fitBounds(-.141,  51.511, -.133, 51.516)

In the viewer window of RStudio, it is just like on OpenStreetMap: we can zoom in or out (with the standard + and – buttons in the top left corner).

And we can add additional material, such as the location of the deaths from cholera (since we now use the same coordinate reference system)

rd=.5
op=.8
clr="blue"
m = leaflet() %>% addTiles()
m %>% addCircles(lng,lat, radius = rd,opacity=op,col=clr)

We can also add a heatmap.

library(KernSmooth)
X=cbind(lng,lat)
## bivariate kernel density estimate, with cross-validated bandwidths
kde2d <- bkde2D(X, bandwidth=c(bw.ucv(X[,1]),bw.ucv(X[,2])))

But there is no heatmap function (so far) so we have to do it manually,

x=kde2d$x1
y=kde2d$x2
z=kde2d$fhat
CL=contourLines(x , y , z)

We now have a list that contains lists of polygons, corresponding to isodensity curves. To visualise one of them, use

m = leaflet() %>% addTiles() 
m %>% addPolygons(CL[[5]]$x,CL[[5]]$y,fillColor = "red", stroke = FALSE)

Of course, we can display the points and the polygon at the same time,

m = leaflet() %>% addTiles() 
m %>% addCircles(lng,lat, radius = rd,opacity=op,col=clr) %>%
  addPolygons(CL[[5]]$x,CL[[5]]$y,fillColor = "red", stroke = FALSE)
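
And we could overlay all the isodensity curves at once, with a loop over the list (a sketch of mine, with an arbitrary fill opacity),

m = leaflet() %>% addTiles()
## add every isodensity polygon, with a low opacity so that they stack up
for(i in seq_along(CL)){
  m = m %>% addPolygons(CL[[i]]$x, CL[[i]]$y,
                        fillColor = "red", fillOpacity = .2, stroke = FALSE)
}
m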


Winter came

Generated with pictures from http://nohrsc.noaa.gov/nh_snowcover/

The data can be downloaded from ftp://sidads.colorado.edu/DATASETS/NOAA/G02158

See also the Global Sea Ice Area time series,

> d=read.table("http://arctic.atmos.uiuc.edu/cryosphere/timeseries.global.anom.1979-2008")
> tail(d)
            V1        V2       V3       V4
12778 2013.984 0.7913005 18.42697 17.63567
12779 2013.986 0.8523080 18.39049 17.53818
12780 2013.989 0.8819072 18.30466 17.42275
12781 2013.992 1.0200537 18.33854 17.31848
12782 2013.995 1.0612829 18.27418 17.21289
12783 2013.997 1.0163171 18.14861 17.13230
> plot(d$V1,d$V3,type="l",ylab="Global Sea Ice Area, million km2")
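
In that file, the second column appears to be the anomaly (in the rows above, V2 matches V3 - V4), so we can also plot it, for instance with a crude one-year moving average on top (my sketch; the series looks daily, so a window of 365 observations is used),

> plot(d$V1,d$V2,type="l",col="grey",xlab="",ylab="Global Sea Ice Area anomaly, million km2")
> lines(d$V1,stats::filter(d$V2,rep(1/365,365)),col="red",lwd=2)
> abline(h=0,lty=2)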

Are parallel computations worth it ?

Yesterday, Daniel Marcelino published an interesting post on his blog, entitled Parallel Processing: When does it worth ? I was asking myself the same question for a chapter I am currently writing. And I did like his approach, so I tried to do the same on my computer. I used three packages to run parallel R code,

> library(multicore)
> library(snow)
> library(snowfall)

and one to quantify time to run the code

> library(microbenchmark)

I ran the code on my mac, at the office,

> all=detectCores(all.tests=TRUE)
> all
[1] 4

which is a standard computer, with four cores. To run some code, I had to generate datasets. Here, I consider a data frame with n rows and 100 columns, where values are generated from a Gaussian distribution,

> gen=function(n) data.frame(matrix(rnorm(n*100),n,100))

The goal, here, will be to compute quantiles (or, to be more specific, quartiles) per column, and to replicate that 100 times. The standard technique is to use lapply, but (at least) two parallel versions of that function can be found. So, let us use them,

> base=gen(n=100)
> microbenchmark(
+ mlapp=data.frame(lapply(base, quantile, probs = 1:3/4 )),
+ mclapp=data.frame(mclapply(base, quantile, probs = 1:3/4 , mc.cores = all)),
+ sflapp=data.frame(sfLapply(base, quantile, probs = 1:3/4 )),
+ times=100) -> m
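
A side note on snowfall: as far as I know, sfLapply needs a cluster to be initialised first (and stopped at the end), otherwise it falls back to a sequential mode, which might explain why its timings below are so close to the plain lapply ones. The setup would look like this (a sketch, not what I actually ran),

> sfInit(parallel=TRUE, cpus=all)
> res = data.frame(sfLapply(base, quantile, probs = 1:3/4))
> sfStop()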

For instance, with 100 rows, we have

> m
Unit: milliseconds
    expr      min       lq   median       uq       max
1 mclapp 50.19290 55.90364 57.99185 64.10619 266.88692
2  mlapp 26.94146 29.49396 31.20571 49.54824  75.60251
3 sflapp 27.54857 30.10224 31.41864 47.10688  59.28925

And with 500,000 rows, we have

> m
Unit: seconds
    expr       min         lq     median        uq      max
1 mclapp 42.999504 103.873919 161.989876 258.66887 660.2953
2  mlapp  3.720542   3.770319   4.070116  11.90181 166.9461
3 sflapp  3.587703   3.770399   4.027876  10.62654 181.0093

So apparently, using parallel code is not necessarily worth it, especially with very large datasets (I could not even run it with 1 million rows), where the parallel versions are clearly slower here. If we consider a loop, to see the evolution of the median time for each of those three functions, we can plot the time it took as a function of the number of rows,

> ## db is assumed to store, for each sample size 10^vk and each of the three
> ## functions, the three quartiles (25%, 50%, 75%) of the computation times,
> ## with three columns per size (one per function)
> i=1;vk=seq(1,6,by=.2)
> col=seq(i,3*length(vk),by=3)
> plot(10^vk,db[2,col],ylim=range(db),col="white",log="x",
+     xlab="Number of rows",ylab="Time")
> polygon(c(10^vk,rev(10^vk)),c(db[1,col],rev(db[3,col])),col="light blue",border=NA)
> lines(10^vk,db[2,col],col="blue",lwd=2)

Here, we have the following, with the standard lapply on the left (the line is the median time, with the 25% and 75% quartiles), the multicore function in the middle, and the snowfall function on the right,

If we zoom in, for small datasets (less than 10,000 rows and 100 columns), we do observe a gain, since the code ran two times faster

So it could indeed be interesting to write code that can be distributed over several cores. But here, I used a simple function (I compute quantiles on the columns of a dataset). I should try with a more complex function…
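
For instance, a heavier (artificial) per-column task, where the cost of dispatching the data to the workers is easier to amortise, could be a small bootstrap of the median, keeping its quartiles (my example, not from the original post),

> heavy = function(v) quantile(replicate(200, median(sample(v, replace=TRUE))), probs=1:3/4)
> base = gen(n=1000)
> microbenchmark(
+   seq = data.frame(lapply(base, heavy)),
+   par = data.frame(mclapply(base, heavy, mc.cores = all)),
+   times = 10)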

On the other hand, I should mention that, usually, while I have one (or two) codes running, I can do something else: looking for recent papers for ongoing research projects, answering emails that I should have answered a few weeks ago, checking for typos in the book and updating the tex file, or typing parts of a future post for my blog, etc. The problem I got yesterday afternoon, when I ran the code, was that suddenly, all the cores on my computer were dedicated to that R code. I could not even finish an email I had started before running the code… So finally I left earlier, decided to pick up the kids after school, and went to the park, to enjoy the sunny day we had! So I have to admit that running parallel code can have advantages you could not think of!