Tag Archives: Google

Big data, the tech giants, and insurance

A few months ago, I published a short article, Big data, the tech giants, and insurance, in the Annales des Mines. The original article was in French, but the editors shared an English version:

Technology and insurance companies seem like polar opposites in every possible way. The tech giants, agile and fast-acting, are obsessed with the future, whereas insurers, conservative and reflexive, are fascinated with the data that the tech giants collect. However, these two sectors are now eyeing each other and have started forming partnerships as they come to understand that, in both cases, their core business is data.

to be continued…

Big Data, GAFA et Assurance

Technology companies and the insurance world would seem to be complete opposites. Agility, speed and an obsession with the future on one side; conservatism, reflexivity and a fascination with past data on the other. And yet the two are watching each other, and have started forming partnerships, having understood that data is their core business.


Converting a weekly series into a monthly one

For the time series assignment, the data provided by Google Trends are weekly, which can make the modelling complicated. As mentioned in class, it can be simpler to work with monthly data. The small function below transforms the data into a monthly series (using monthly averages, so that a 28-day month and a 31-day month are comparable). To illustrate, let us pick a keyword at random, say… épilation (hair removal). We start by saving the csv file, and we read it into R,

EPILATION=read.table("/home/charpentier/report-epilation.csv", skip=4,header=TRUE,sep=",",nrows=426)

The following small function will help us convert the data set into a monthly time series,

H2M=function(BASE){
X=BASE[,2]
date1=substr(as.character(BASE$Semaine),1,10)
date2=substr(as.character(BASE$Semaine),14,23)
D1=as.Date(date1,"%Y-%m-%d")
D2=as.Date(date2,"%Y-%m-%d")
vm=vy=N=NA
for(t in 1:length(D1)){
mois=seq(D1[t],D2[t],length=7)
vm=c(vm,as.POSIXlt(mois)$mon+1)
vy=c(vy,as.POSIXlt(mois)$year+1900)
N=c(N,rep(X[t],7))}
N=N[-1]
vm=vm[-1]
vy=vy[-1]
YM=vy*100+vm
Z=tapply(N,as.factor(YM),mean)
Zts=ts(Z,start=c(2004,1),frequency=12)
return(Zts)}

If we work with the weekly series, we get the following series

hebdo=ts(EPILATION$épilation,start=2004, frequency=52)

The autocorrelation function is then

acf(hebdo,160)

Now, we can work with the monthly data. We then use

mensuel=H2M(EPILATION)

The plot is then the following

The autocorrelation function is now the following

acf(mensuel,40)

We recover the cyclical behaviour, with an annual seasonality.
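To look at that annual seasonality a little more closely, here is a quick sketch with base R functions (my own addition, using the mensuel series built above),

dec=decompose(mensuel)   # trend + seasonal + remainder decomposition
plot(dec)
monthplot(mensuel)       # one sub-series per calendar month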

Google, and forecasting

For the second assignment of the ACT6420 course (forecasting models), the goal is to forecast Google searches, via https://www.google.com/trends/. Either you have a keyword that interests you, or you pick one of the series available in /ACT6420-TS/. Among the data sets that were put online, there is the keyword gym, for example.

> report=read.table(
+ "http://freakonometrics.free.fr/ACT6420-TS/report-GYM.csv",
+ skip=4,header=TRUE,sep=",",nrows=548)

Let us clean the data set a little (in particular, the missing values at the end)

> tail(report)
                    Semaine gym
543 2014-05-25 - 2014-05-31  80
544 2014-06-01 - 2014-06-07  80
545 2014-06-08 - 2014-06-14  78
546 2014-06-15 - 2014-06-21  NA
547 2014-06-22 - 2014-06-28  NA
548 2014-06-29 - 2014-07-05  NA
> report=report[!is.na(report[,2]),]
> tail(report)
                    Semaine gym
540 2014-05-04 - 2014-05-10  79
541 2014-05-11 - 2014-05-17  80
542 2014-05-18 - 2014-05-24  79
543 2014-05-25 - 2014-05-31  80
544 2014-06-01 - 2014-06-07  80
545 2014-06-08 - 2014-06-14  78

The data here are weekly, as can be seen graphically

> hebdo=ts(report$gym,start=2004,frequency=52)
> hebdo
Time Series:
Start = c(2004, 1) 
End = c(2014, 25) 
Frequency = 52 
  [1]  68  60  60  53  55  50  49  49  51  48  48  45  45
 [14]  47  42  48  46  47  46  47  47  46  47  46  45  46
 [27]  46  50  48  48  52  51  57  55  53  56  55  50  48
 [40]  50  46  49  46  48  49  48  46  50  47  46  43  54
 [53]  69  64  63  60  57  57  53  54  55  54  50  53  54
 [66]  46  50  50  49  49  49  47  49  48  49  50  49  49
 [79]  47  47  50  49  52  51  55  55  55  54  54  52  53
 [92]  54  53  52  51  51  50  51  48  52  50  47  45  56
[105]  76  72  66  64  63  59  53  56  57  58  54  55  54
[118]  53  52  52  50  53  50  51  49  51  51  50  50  48
[131]  48  53  52  56  58  57  60  62  62  63  59  58  58
[144]  56  54  54  53  53  54  53  50  55  54  53  51  56
[157]  77  73  68  68  67  66  64  67  64  63  63  63  62
[170]  62  61  62  61  63  62  63  63  63  63  59  58  59
[183]  61  60  60  58  61  61  62  62  64  68  66  61  58
[196]  58  55  54  51  55  54  55  53  55  53  52  50  55
[209]  76  77  68  67  64  64  58  61  59  59  57  55  57
[222]  59  57  54  56  55  54  52  52  53  54  53  55  55
[235]  55  57  54  56  55  58  65  63  64  67  66  63  62
[248]  60  60  57  55  56  56  58  58  53  56  55  54  52
[261]  69  77  71  68  66  63  60  60  62  59  59  57  57
[274]  60  58  58  56  54  58  57  56  57  57  57  57  54
[287]  54  55  57  57  56  64  60  59  62  62  64  59  58
[300]  57  57  54  53  52  53  53  55  53  55  53  50  49
[313]  63  76  73  68  65  66  60  61  61  60  58  58  61
[326]  61  62  57  57  58  55  58  57  58  59  57  55  55
[339]  57  57  58  59  59  60  60  63  63  63  66  65  62
[352]  60  59  58  57  56  58  59  56  54  57  54  54  53
[365]  66  87  77  74  72  69  68  64  65  65  68  63  65
[378]  65  65  62  61  62  62  63  63  61  65  63  64  63
[391]  61  62  62  61  62  65  63  67  67  71  74  71  70
[404]  68  68  65  66  64  65  68  64  64  65  62  61  60
[417]  69  91  88  83  81  78  75  71  73  74  73  70  69
[430]  66  68  69  66  68  68  65  69  66  69  70  69  70
[443]  69  72  72  71  69  76  74  72  77  77  82  78  72
[456]  72  69  68  67  67  64  66  66  63  65  64  62  61
[469]  67  88  90  83  81  83  77  76  76  74  75  74  74
[482]  77  77  77  73  72  76  72  71  72  74  72  74  73
[495]  73  76  73  73  71  76  76  79  79  83  83  81  78
[508]  78  76  78  80  74  73  75  74  75  72  71  70  69
[521]  73  92 100  94  93  91  86  84  84  85  85  83  82
[534]  83  83  78  79  80  80  79  80  79  80  80  78

> plot(hebdo)

To get simpler models, we can aggregate the data to make them monthly (by averaging the weekly values within each month). The function to use here is

H2M=function(BASE){
  X=BASE[,2]
  Y=BASE[,1]
  date1=substr(as.character(Y),1,10)
  date2=substr(as.character(Y),14,23)
  D1=as.Date(date1,"%Y-%m-%d")
  D2=as.Date(date2,"%Y-%m-%d")
  vm=vy=N=NA
  for(t in 1:length(D1)){
    mois=seq(D1[t],D2[t],length=7)
    vm=c(vm,as.POSIXlt(mois)$mon+1)
    vy=c(vy,as.POSIXlt(mois)$year+1900)
    N=c(N,rep(X[t],7))}
  N=N[-1]; vm=vm[-1]; vy=vy[-1]
  YM=vy*100+vm
  Z=tapply(N,as.factor(YM),mean)
  Zts=ts(as.numeric(Z),start=c(2004,1),frequency=12)
  return(Zts)}

If we apply this function to our data, we get

> mensuel=H2M(report)
> mensuel
          Jan      Feb      Mar      Apr      May      Jun
2004 60.25000 50.75862 47.51613 45.66667 46.67742 46.00000
2005 63.22581 55.10714 53.03226 49.10000 48.45161 49.10000
2006 68.87097 57.10714 55.51613 51.86667 50.70968 49.93333
2007 70.74194 65.57143 62.87097 61.60000 62.77419 59.96667
2008 70.45161 60.79310 57.19355 56.13333 52.96774 54.30000
2009 70.35484 61.25000 58.19355 57.13333 56.80645 56.00000
2010 69.87097 61.78571 59.45161 58.76667 57.16129 56.40000
2011 76.58065 66.21429 65.22581 62.66667 62.51613 63.16667
2012 85.00000 73.82759 69.93548 67.76667 67.25806 69.46667
2013 84.93548 76.39286 75.00000 74.80000 72.67742 73.13333
2014 94.29032 84.96429 83.29032 79.80000 79.54839 79.00000
          Jul      Aug      Sep      Oct      Nov      Dec
2004 47.80645 53.67742 52.63333 47.77419 47.96667 47.61290
2005 48.41935 53.51613 53.43333 52.41935 50.20000 49.74194
2006 52.48387 59.77419 59.66667 54.12903 52.86667 54.35484
2007 59.87097 62.03226 63.10000 54.45161 54.30000 54.09677
2008 55.45161 62.16129 63.96667 57.41935 56.23333 56.09677
2009 55.96774 61.12903 60.16667 54.29032 53.60000 53.35484
2010 58.12903 61.64516 63.43333 57.67742 56.73333 56.48387
2011 61.80645 66.22581 70.86667 65.77419 65.23333 63.19355
2012 71.48387 75.06452 75.80000 67.22581 64.90000 65.12903
2013 73.51613 78.93548 79.73333 76.41935 73.93333 72.80645
2014                                                      
> ts.plot(mensuel)

This data set can now be used for the assignment. The goal here is to produce a forecast for the next 2 years, with a confidence interval. But I will come back to this by email.
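Just to give an idea of what such a forecast can look like (this is only a sketch, not the official solution of the assignment), one can use Holt-Winters smoothing on the monthly series, with a 24-month horizon and prediction intervals,

hw=HoltWinters(mensuel)                  # level + trend + additive seasonality
p=predict(hw,n.ahead=24,                 # 2 years of monthly forecasts
          prediction.interval=TRUE,level=.95)
plot(hw,p)                               # fitted values and forecast band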

Visualizing densities of spatial processes

We recently uploaded on http://hal.archives-ouvertes.fr/hal-00725090 a revised version of our work, with Ewen Gallic (a.k.a. @3wen), on Visualizing spatial processes using Ripley's correction: an application to bodily-injury car accident location.

In this paper, we investigate (and extend) Ripley's circumference method to correct the bias of density estimation near the edges (or frontiers) of a region. The idea of the method was theoretical and difficult to implement. We provide a simple technique – based on properties of Gaussian kernels – to compute efficiently the weights that correct the border bias on the frontiers of the region of interest, with an automatic selection of an optimal radius for the method. An illustration on the location of bodily-injury car accidents (and hot spots) in the western part of France is discussed, where many accidents occur close to large cities, next to the sea.
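To give the intuition of the correction, here is a one-dimensional toy version (an illustrative sketch only, not the code of the paper, which works with bivariate Gaussian kernels): each kernel is divided by the mass it keeps inside the observation window, so that points close to the frontier are not artificially under-weighted,

kde_corrected=function(x,obs,a,b,h){
  sapply(x,function(u){
    k=dnorm(u,mean=obs,sd=h)                           # one Gaussian kernel per observation
    w=pnorm(b,mean=obs,sd=h)-pnorm(a,mean=obs,sd=h)    # mass of each kernel inside [a,b]
    mean(k/w)})}                                       # reweighted average
obs=runif(200)
x=seq(0,1,by=.01)
plot(x,kde_corrected(x,obs,0,1,h=.05),type="l")        # roughly flat, even near the borders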

Sketches of the R code can be found in the paper, to produce maps, and to describe the impact of our boundary correction. For instance, in Finistère, the distribution of car accidents is the following (with a standard kernel on the left, and with the correction on the right), with 186 claims (involving bodily injury)

and in Morbihan, with 180 claims, observed in a specific year (2008, as far as I remember),

The code is the same as the one mentioned last year, except perhaps the plotting functions. First, one needs to define a color scale and the associated breaks

breaks <- seq(min(result$ZNA, na.rm=TRUE)*0.95, max(result$ZNA, na.rm=TRUE)*1.05, length=21)
col <- rev(heat.colors(20))

to finally plot the estimation

image.plot(result$X, result$Y, result$ZNA, xlim=range(pol[,1]), ylim=range(pol[,2]),
breaks=breaks, col=col, xlab="", ylab="", xaxt="n", yaxt="n", bty="n",
zlim=range(breaks), horizontal=TRUE)

It is possible to add a contour, the observations, and the border of the polygon

contour(result$X, result$Y, result$ZNA, add=TRUE, col="grey")
points(X[,1], X[,2], pch=19, cex=0.2, col="dodgerblue")
polygon(pol, lwd=2)

Now, if one wants to improve the aesthetics of the map by adding a Google Maps base map, the first thing to do – after loading the ggmap package – is to get the base map

theMap <- get_map(location=c(left=min(pol[,1]), bottom=min(pol[,2]), right=max(pol[,1]),
top=max(pol[,2])), source="google", messaging=FALSE, color="bw")

Of course, data need to be put in the right format

getMelt <- function(smoothed){
  res <- melt(smoothed$ZNA)
  res[,1] <- smoothed$X[res[,1]]
  res[,2] <- smoothed$Y[res[,2]]
  names(res) <- list("X","Y","ZNA")
  return(res)
}
smCont <- getMelt(result)

Breaks and labels should be prepared

theLabels <- round(breaks, 2)
indLabels <- floor(seq(1, length(theLabels), length.out=5))
indLabels[length(indLabels)] <- length(theLabels)
theLabels <- as.character(theLabels[indLabels])
theLabels[theLabels=="0"] <- "0.00"

Now, the map can be built

P <- ggmap(theMap)
P <- P + geom_point(aes(x=X, y=Y, col=ZNA), alpha=.3,
  data=smCont[!is.na(smCont$ZNA),], na.rm=TRUE)

It is possible to add a contour

P <- P + geom_contour(data=smCont[!is.na(smCont$ZNA),],
  aes(x=X, y=Y, z=ZNA), alpha=0.5, colour="white")

and colors need to be updated

P <- P + scale_colour_gradient(name="", low="yellow", high="red",
  breaks=breaks[indLabels], limits=range(breaks), labels=theLabels)

To remove the axis legends and labels, the theme should be updated

P <- P + theme(panel.grid.major=element_line(colour=NA),
  panel.grid.minor=element_line(colour=NA),
  panel.background=element_rect(fill=NA, colour=NA),
  axis.text.x=element_blank(), axis.text.y=element_blank(),
  axis.ticks.x=element_blank(), axis.ticks.y=element_blank(),
  axis.title=element_blank(), rect=element_blank())

The final step, in order to draw the border of the polygon,

polDF <- data.frame(pol)
colnames(polDF) <- list("lon","lat")
(P <- P + geom_polygon(data=polDF, mapping=aes(x=lon, y=lat), colour="black", fill=NA))

Then, we applied that methodology to estimate the road network density in those two regions, in order to understand whether a high intensity means that an area is dangerous, or whether it is simply because there is a lot of traffic (more traffic, more accidents),

We have been using the dataset obtained from the Geofabrik website, which provides OpenStreetMap data. Each observation is a section of a road, and contains a few points, identified by their geographical coordinates, that allow us to draw lines. We have used those points to estimate a proxy of road intensity, with weights going from 10 (highways) to 1 (service roads).

splitroad <- function(listroad, h=0.0025){
  pts = NULL
  weights <- types.weights[match(unique(listroad$type), types.weights$type), "weight"]
  for(i in 1:(length(listroad)-1)){
    d = diag(as.matrix(dist(listroad[[i]]))[, 2:nrow(listroad[[i]])])
    # (the rest of the loop, which splits each road section into weighted points,
    #  is abridged here; see Ewen's blog for the full version)
  }
  return(pts)
}

See Ewen's blog for more details on the code, http://editerna.free.fr/blog/…. Note that Ewen published a poster of the paper (in French) for the http://r2013-lyon.sciencesconf.org/ conference, which will be held in Lyon on June 27th-28th.

All comments are welcome…

Rationality, and MS Excel (and other calculators)

This morning, Mathieu had a nice experience in his course on computational methods in actuarial science. But let us start with some formal mathematical definitions.

First, recall that $y^x$ is – somehow – a standard expression. No one should be surprised to see such an expression. Generally (as explained in http://en.wikipedia.org/… ), this function is defined only when $y\in\mathbb{R}_+$. The idea is that the definition of $y^x$ is that

$$y^x = \exp\left(x\log[y]\right)$$

And it is a definition. Such a function exists only if $y\in\mathbb{R}_+$ (maybe excluding $0$). This would be a standard definition in real analysis.

Now, this 'power' function also appears in complex analysis, when dealing with unit roots. For instance, if $z=y^{\frac{1}{k}}e^{i\frac{2n\pi}{k}}$, where $y\in\mathbb{R}_+$ and $k\in\mathbb{N}_\star$, for some $n\in\mathbb{N}$, then $z^k=y$. Thus, in complex analysis it might be more complicated to define $y^x$ properly, since it might not be unique. But we can relate it (sometimes, when $x$ is the inverse of an integer, or maybe a rational number?) to roots of polynomial functions. So far, nothing new…
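As a quick illustration (my own, not in the original post): the three complex cube roots of $-8$ can be written down explicitly, since $|-8|^{1/3}=2$ and $\arg(-8)=\pi$; in R,

n=0:2
z=2*exp(1i*(pi+2*n*pi)/3)   # 2*exp(i(pi+2n*pi)/3), n=0,1,2
z                           # 1+1.732051i  -2+0i  1-1.732051i
z^3                         # all three equal -8 (up to floating-point rounding)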

Let us get back to Mathieu's problem. Actually, in his course, he wanted to compute $(-8)^{\frac{1}{3}}$. With a French version of Excel, entering

you do get $-2$. If you look at the 'help' window, you have some more details

It looks like this hat function can be used to define objects such as $y^x$. But with

you get

(meaning that this is a problem…). It is also possible to use the power function (puissance, in the French version) of Excel,

Here, you also get

The weird part here is that, in the 'help' window, you can read that this power function can be used with any number in $\mathbb{R}$.

Another point… what about $(-8)^{\frac{2}{3}}$? Somehow, it is just the square of the previous one (with the fraction)… Here, typing

you get

(and similarly with the power function). So clearly, it is not that simple to use this power function. Now, if you use Google (which is now my new online calculator when I am in class, when I cannot use R), if the power is a fraction (or, to be more specific, the inverse of an integer), then it works like Excel

 

you get

But if you type the following (which should be close, by a continuity property of the power function)

 

you get

and similarly

On Wolfram Mathworld, enter

Mathematica does recognize that we are trying to deal with unit roots: the result is here

with – as expected – a numerical approximation

With Matlab, Mathieu obtained the same result as with Mathematica (its decimal approximation). And to conclude, with R, Mathieu obtained

> (-8)^(1/3)
[1] NaN
> (-8)^(.333333333333333)
[1] NaN

So with R, you cannot use this hat function on negative numbers.

Now, how can we interpret those outputs?

1) My understanding is that, clearly, with MS Excel, $x^{ab}\neq\left(x^a\right)^b$ since

$$(-8)^{\frac{2}{3}}\neq\left((-8)^{\frac{1}{3}}\right)^2$$

which is problematic. For instance, in insurance, with monthly discounts, we do have functions like $u^{\frac{k}{12}}$. What if

$$u^{\frac{k}{12}}\neq\left(u^{\frac{1}{12}}\right)^k\,?$$
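A quick numerical check (my own addition, with a hypothetical discount factor): as long as the base is positive, the two expressions do agree, in R at least,

u=1/1.05                            # hypothetical annual discount factor
k=7
all.equal(u^(k/12),(u^(1/12))^k)    # TRUE: no ambiguity when u > 0

so the issue really only arises with negative bases.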

2) The problem comes – probably (MS Excel is not open software, so it might be hard to check) – from the fact that $y^{\frac{1}{n}}$ is interpreted as the inverse of a (possibly) bijective function. To be more specific, $x=y^{\frac{1}{n}}$ means that $x^n=y$. When $n$ is an odd integer, then (in real analysis) there is a unique inverse, and thus $y^{\frac{1}{n}}$ is uniquely defined, since $x\mapsto x^n$ is a bijective function from $\mathbb{R}$ to $\mathbb{R}$. This is what MS Excel (and Google) is doing: $x\mapsto x^3$ is a bijective function from $\mathbb{R}$ to $\mathbb{R}$, so $(-8)^{\frac{1}{3}}$ means that we need to find the unique (real) value $x$ such that $x^3=-8$. Thus, somehow, it makes sense to return $-2$.
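If one really wants that real root of a negative number in R (the $-2$ that Excel returns), a standard workaround (my suggestion, not in the original post) is to take the power of the absolute value and put the sign back,

cuberoot=function(y) sign(y)*abs(y)^(1/3)   # real cube root, defined for all y
cuberoot(-8)     # -2
cuberoot(-8)^2   #  4, i.e. (-8)^(2/3)=4 with this convention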

3) There is still a problem with Google and Mathematica. It is fine to return unit roots in $\mathbb{C}$. But how come there is only one value? I mean, yes, $1+\sqrt{3}\,i$ is a possible answer, since

$$(1+\sqrt{3}\,i)^3=-8$$

but one can also observe that $-2$ is a possible answer, since $(-2)^3=-8$, and similarly

$$(1-\sqrt{3}\,i)^3=-8$$

One can check those values with R: since we do not deal with the power function here, but with roots, if we want to find all the $x$ such that $x^3=-8$, the function to use is

> polyroot(c(8,0,0,1))
[1]  1+1.732051i -2+0.000000i  1-1.732051i

Which is different… Weird, isn't it?

From weekly to monthly data

For the time series assignment, the data provided by Google Insight are weekly, which can make the modelling complicated. As mentioned at the end of the class, it can be simpler to work with monthly data. The small function below transforms the data into a monthly series (using monthly averages, so that a 28-day month and a 31-day month are comparable). To illustrate, let us pick a keyword at random, say… épilation (hair removal). We start by saving the csv file, and we read it into R,

EPILATION=read.table("/home/charpentier/report-epilation.csv", skip=4,header=TRUE,sep=",",nrows=426)

The following small function will help us convert the data set into a monthly time series,

H2M=function(BASE){
X=BASE[,2]
date1=substr(as.character(BASE$Semaine),1,10)
date2=substr(as.character(BASE$Semaine),14,23)
D1=as.Date(date1,"%Y-%m-%d")
D2=as.Date(date2,"%Y-%m-%d")
vm=vy=N=NA
for(t in 1:length(D1)){
mois=seq(D1[t],D2[t],length=7)
vm=c(vm,as.POSIXlt(mois)$mon+1)
vy=c(vy,as.POSIXlt(mois)$year+1900)
N=c(N,rep(X[t],7))}
N=N[-1]
vm=vm[-1]
vy=vy[-1]
YM=vy*100+vm
Z=tapply(N,as.factor(YM),mean)
Zts=ts(Z,start=c(2004,1),frequency=12)
return(Zts)}

If we work with the weekly series, we get the following series

hebdo=ts(EPILATION$épilation,start=2004, frequency=52)

The autocorrelation function is then

acf(hebdo,160)

Now, we can work with the monthly data. We then use

mensuel=H2M(EPILATION)

The plot is then the following

The autocorrelation function is now the following

acf(mensuel,40)

We recover the cyclical behaviour, with an annual seasonality. But with 12 lags, we should get simpler models than with 52 lags. In short, it can be easier to work with monthly data than with weekly data. But everyone is free to choose the series that he or she will analyze…

Gold price and fear

Via @theEconomist, I understood that there might be connections between the price of gold (which is said to be extremely high nowadays) and the VIX SP500 index (the option volatility index, i.e. the so-called "fear index", as discussed – in French – a few months ago). This has also been discussed on several blogs, e.g. http://etfdailynews.com/ or http://blogs.marketwatch.com/. Via Yahoo quotes, it is also possible to easily get the SP500 VIX index.

> library(tseries)
> X=get.hist.quote("^VIX")
> T=time(X)
> Y=as.POSIXlt(T)$year+1900
> X2011=X[Y==2011,]
> VIX=X2011[,4]
> VIX100=as.numeric(VIX)/VIX[1]*100
> T2011=T[Y==2011]
> plot(T2011,VIX100,lwd=2,col="red",type="l",
+ xlab="",ylab="",ylim=c(60,290))

And a huge xls file can give us the price of gold (on a daily basis). But we extract only one series (the price in USD, which is the series of interest here)

> goldprice=read.table(
+ "http://freakonometrics.blog.free.fr/public/data/goldpriceUSD.csv",
+ header=TRUE,sep=";",dec=",")
> T=as.Date(goldprice$Name,"%d/%m/%y")
> GP=goldprice$USdollar
> Y=as.POSIXlt(T)$year+1896
> GP2011=GP[Y==2011]
> GP100=GP2011/GP2011[1]*100
> T2011=T[Y==2011]
> lines(T2011-4*365.25,GP100,lwd=2,col="blue")

We can see that the scales are quite different for those two series (both starting at 100 at the beginning of January 2011),

An alternative might be to consider not the price of gold, but something more psychological, like Internet searches. It is possible to download the csv file of queries on gold price on Google, via Google Insights.

 
> google=read.table(
+ "http://freakonometrics.blog.free.fr/public/data/google.csv",
+ skip=4,header=TRUE,sep=",",nrows=51)
> W=as.Date(substr(as.character(google$Semaine),1,10))
> G=google$gold.price
> G100=G/G[1]*100
> lines(W,G100,lwd=2,col="blue")

which gives the following graph (again, starting at 100 at the beginning of January 2011),

Here, we can clearly observe that the two series are related, maybe cointegrated. Nice, isn't it?
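To go a little beyond the visual impression, a rough Engle-Granger type check could be run with the tseries package loaded above. This is only a sketch: x and y are assumed to be the two rescaled indices aligned on the same (say weekly) dates, an alignment which is not done in the code above,

reg=lm(y~x)                 # first step: long-run relationship between the two indices
adf.test(residuals(reg))    # second step: stationary residuals would support cointegration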

Ramadan, between sex and hunger…

In my previous post (here), I borrowed a cartoon from Martin Vidberg about Google and statistics… Google Trends is a magical tool for spotting trends and producing simple, amusing statistics. For instance, since Ramadan is coming to an end, we can look at what motivates Algerians during Ramadan. Clearly, sex is not a priority, whether in English,

or in French,

More generally, the most-typed keyword on Google tends to fade away during the Ramadan period, namely porno,

In fact, people search a little less for women on the web,

On the other hand, people are clearly hungry, and want to eat well: cuisine explodes during Ramadan,

We can also look at other countries with large Muslim populations, and observe similar trends, such as Morocco,

the United Arab Emirates,

or Pakistan,

(in this last case, we observe a strong comeback of the keyword sex once Ramadan is over… a kind of virtual withdrawal). In short, with Google, one can really observe interesting things… don't you think?