Tag Archives: PCA

Principal Component Analysis: A Generalized Gini Approach

Our paper with Stéphane Mussard and Téa Ouraga, entitled Principal Component Analysis: A Generalized Gini Approach, is finally out in the European Journal of Operational Research.

A principal component analysis based on the generalized Gini correlation index is proposed (Gini PCA). The Gini PCA generalizes the standard PCA based on the variance. It is shown, in the Gaussian case, that the standard PCA is equivalent to the Gini PCA. It is also proven that the dimensionality reduction based on the generalized Gini correlation matrix, that relies on city-block distances, is robust to outliers. Monte Carlo simulations and an application on cars data (with outliers) show the robustness of the Gini PCA and provide different interpretations of the results compared with the variance PCA.
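
For readers curious about the mechanics, here is a rough, illustrative sketch in R of the flavour of the approach, assuming the usual rank-based definition of the generalized Gini correlation. This is not the authors' implementation (their own code accompanies the paper); the function gini_cor_matrix and the use of mtcars are purely illustrative, and the matrix is symmetrized here only for convenience.

# rough sketch only (not the authors' implementation): build a generalized
# Gini correlation matrix from rank-based co-moments, symmetrize it for
# convenience, and diagonalize it as in a standard PCA
gini_cor_matrix = function(X, nu = 2){
  n = nrow(X); p = ncol(X)
  Fx = apply(X, 2, function(x) rank(x)/n)   # empirical cdf of each column
  G = matrix(NA, p, p)
  for(i in 1:p) for(j in 1:p)
    G[i,j] = cov(X[,i], (1-Fx[,j])^(nu-1)) / cov(X[,i], (1-Fx[,i])^(nu-1))
  (G + t(G))/2                              # symmetrized, for simplicity
}
X = as.matrix(mtcars)                       # any numerical dataset (e.g. cars data)
E = eigen(gini_cor_matrix(X))
scores = scale(X) %*% E$vectors[,1:2]       # first two 'Gini' components

For nu = 2 this is the standard Gini correlation; replacing the rank-based co-moments with covariances gives back, in spirit, the variance-based PCA.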

ACT6100, unsupervised learning

The ACT6100 course on data analysis in actuarial science is moving along steadily. The course material is online at https://github.com/freakonometrics/ACT6100. Since this term (and the winter one as well) is taught remotely, the course is asynchronous, and I regularly post video capsules. The capsules presenting the main unsupervised learning methods are now online

  1. PCA (1) video pdf (48:42)
  2. PCA (2) video pdf (31:10)
  3. PCA (3) video pdf (47:51)
  4. PCA (4) video pdf (39:59)
  5. PCA (5) video pdf (28:53)
  6. CA (1) video pdf (38:33)
  7. CA (2) video pdf (46:51)
  8. MCA (1) video pdf (28:03)
  9. Clusters (k-means) video pdf (48:20)
  10. Clusters (hierarchical) video pdf (37:38)
  11. Missing values & imputation (k-nn) video pdf (17:47)
  12. Missing values & imputation (PCA) video pdf (15:28)

If the video links do not work, I refer you to the full set of course capsules, here.

INF7100, statistics

The second part of my lectures on data science, as part of the INF7100 course, will deal with statistics, both univariate and multivariate. The outline is the following

  • 201: From Statistics to Data Science pdf video (14:24)
  • 211: Common Statistical Functions (distribution function, density, histogram) pdf video (28:37)
  • 221: Statistical Indicators: Central Tendency (mean) pdf video (32:56)
  • 222: Statistical Indicators: Dispersion (variance, inequality) pdf video (22:21)
  • 223: Statistical Indicators: Approximations (normal approximation) pdf video (18:42)
  • 224: Statistical Indicators: Quantiles pdf video (24:54)
  • 231: Inference (Bayesian statistics) pdf video (39:33)
  • 241: Statistical Tests (1) (tests, significance, p-value) pdf video (43:41)
  • 242: Statistical Tests (2) (errors) pdf video (16:51)
  • 261: Bivariate Statistics pdf video (25:16)
  • 271: Multivariate Statistics: Projections pdf video (29:06)
  • 272: Multivariate Statistics: Clusters pdf video (32:21)
  • 281: Networks and Graphs pdf video (32:40)
  • 291: Time Series pdf video (29:01)

 

Principal Component Analysis: A Generalized Gini Approach

With Stéphane Mussard and Téa Ouraga, we recently uploaded to arXiv a paper, Principal Component Analysis: A Generalized Gini Approach,

A principal component analysis based on the generalized Gini correlation index is provided. It is proven that the dimensionality reduction based on the generalized Gini correlation index, which relies on city-block distances, is robust to outliers.

Some code is also available in a dedicated GitHub repo.

Correspondence Analysis and PCA

Consider the following data on occupations within couples. We have the following contingency table

> base=read.table(
"http://freakonometrics.free.fr/epoux.csv",
sep=";",header=TRUE)
> rownames(base)=base$Nom

Classically, with this kind of data, one would use the chi-square distance, and the contributions to the chi-square statistic, to identify which categories are strongly associated

> M=base[1:9,2:10]
> CT=chisq.test(M)
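
The contributions can also be inspected numerically, since chisq.test() returns the Pearson residuals (whose squares sum to the chi-square statistic),

> round(CT$residuals,2)   # Pearson residuals, (observed-expected)/sqrt(expected)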

This can be visualised as follows,

> mosaicplot(t(M),main="",las=2,shade=TRUE)

with husbands in rows and wives in columns. Large contributions are in dark blue or dark red, the colours corresponding respectively to a 'positive' association (joint probability larger than under independence) and a 'negative' association (joint probability smaller than under independence)

or the other way around

> mosaicplot(M,main="",las=2,shade=TRUE)

but with the same conclusion as before: there are strong blue values on the diagonal.

In other words, couples are relatively homogeneous in terms of occupation.

In correspondence analysis, we look at our contingency table by rows, or by columns. For instance, we can define the row profiles, which are the probability vectors \boldsymbol{L}_i=\left(\frac{p_{i1}}{p_{i\cdot}},\cdots,\frac{p_{iJ}}{p_{i\cdot}}\right)\in\mathbb{R}^J

> N=M 
> L=N/apply(N,1,sum)

Writing \boldsymbol{D}_I=\text{diag}(p_{1\cdot},\cdots,p_{I\cdot}), we have \boldsymbol{L}=\boldsymbol{D}_I^{-1}\boldsymbol{N}. The barycentre of our row vectors is here \overline{\boldsymbol{L}}=\left(p_{\cdot 1},\cdots,p_{\cdot J}\right)\in\mathbb{R}^J

Again, writing \boldsymbol{D}_J=\text{diag}(p_{\cdot 1},\cdots,p_{\cdot J}), this barycentre can be written in matrix form, \overline{\boldsymbol{L}}=\mathbb{I}_{IJ}\boldsymbol{D}_J. The centred row profiles are then \boldsymbol{L}_{0,i}=\boldsymbol{L}_i-\overline{\boldsymbol{L}}

> Lbar=apply(N,2,sum)/sum(N)
> matL0=t(t(L)-Lbar)

Each point is given a weight equal to its (relative) frequency, p_{i\cdot}, which corresponds to using the matrix \boldsymbol{D}_I. To measure the distance between two points, we weight the Euclidean distance by the inverse of the probabilities p_{\cdot j}, which amounts to using the matrix \boldsymbol{D}_J^{-1}. The distance between two rows is d(\boldsymbol{L}_{i_1},\boldsymbol{L}_{i_2})=\sum_{j=1}^J\frac{1}{p_{\cdot j}}\left(\frac{p_{i_1j}}{p_{i_1\cdot}}-\frac{p_{i_2j}}{p_{i_2\cdot}}\right)^2
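
As a quick sanity check, this weighted distance can be computed directly from the row profiles defined above (a minimal sketch, where d2 is just an illustrative helper),

> Lm=as.matrix(L)
> d2=function(i1,i2) sum((Lm[i1,]-Lm[i2,])^2/Lbar)
> d2(1,2)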

We then run a principal component analysis with these weights. From a matrix point of view, this amounts to the spectral analysis of \boldsymbol{H}=\boldsymbol{D}_J^{-1/2}\boldsymbol{L}_0^{\text{T}}\boldsymbol{D}_I\boldsymbol{L}_0\boldsymbol{D}_J^{-1/2}

In particular, denoting \boldsymbol{U}_1,\cdots,\boldsymbol{U}_k the eigenvectors, we define the principal components \boldsymbol{C}=\boldsymbol{L}_0\boldsymbol{D}_J^{-1/2}\boldsymbol{U}
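
To make the matrix formulation concrete, here is a minimal sketch of this spectral analysis using the objects already defined (N and matL0); the non-trivial eigenvalues should be the principal inertias of the correspondence analysis,

> PI=apply(N,1,sum)/sum(N)
> PJ=apply(N,2,sum)/sum(N)
> H=diag(1/sqrt(PJ))%*%t(matL0)%*%diag(PI)%*%matL0%*%diag(1/sqrt(PJ))
> eigen(H)$values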

The projection of the rows on the first two components is obtained here with

> library(FactoMineR)
> acpL=PCA(matL0,scale.unit=FALSE,
+          row.w=(apply(N,1,sum)),
+          col.w=1/(apply(N,2,sum)))

The idea is to visualise the cloud of individuals corresponding to the rows. We then do exactly the same thing, but on the columns, \boldsymbol{C}_j=\left(\frac{p_{1j}}{p_{\cdot j}},\cdots,\frac{p_{Ij}}{p_{\cdot j}}\right)\in\mathbb{R}^I

> C=t(t(N)/apply(N,2,sum))

The barycentre is here \overline{\boldsymbol{C}}=\left(p_{1\cdot},\cdots,p_{I\cdot}\right)\in\mathbb{R}^I

> Cbar=apply(N,1,sum)/sum(N)
> matC0=C-Cbar

and we can then run a principal component analysis

> acpC=PCA(t(matC0),scale.unit=FALSE,
+          row.w=(apply(N,2,sum)),
+          col.w=1/(apply(N,1,sum)))

which gives, looking at the cloud of individuals,

The magic of correspondence analysis is that we 'can' represent the two projections of the individuals on the same plane,

> matC=acpC$ind$coord
> matL=acpL$ind$coord
> plot(matC[,1:2],col="red",xlim=c(-.002,.015))
> text(matC[,1:2],rownames(matC),col="red",pos=4)
> points(matL[,1:2],col="blue")
> text(matL[,1:2],rownames(matL),col="blue")

This is exactly what the CA function does

> afc=CA(N)

Note: actually, if we look at the row coordinates obtained with the PCA, we get

acpL$ind$coord
                    Dim.1     Dim.2     Dim.3     Dim.4     Dim.5
Agriculteur (M) -0.000056 -0.000414  0.002940  0.000150  0.000011
Artisan (M)     -0.000014  0.000152 -0.000057  0.000323  0.001279
Cadres (M)      -0.000262  0.001734 -0.000015  0.000359 -0.000219
Prof_Int (M)    -0.000194  0.000450 -0.000003 -0.000504  0.000041
Employe (M)     -0.000104 -0.000317 -0.000083 -0.000188  0.000011
Ouvrier (M)     -0.000052 -0.000764 -0.000080  0.000133 -0.000141
Retraite (M)     0.007771  0.000502 -0.000001 -0.000089 -0.000065
Inactif (M)     -0.000018 -0.000150 -0.000163  0.000751 -0.000007
Inconnu(M)       0.000261 -0.000492 -0.000554  0.000398  0.000648

whereas

CA(M, graph = FALSE)$row$coord
                 Dim 1  Dim 2  Dim 3  Dim 4  Dim 5
Agriculteur (M) -0.028 -0.209  1.481  0.076  0.005
Artisan (M)     -0.007  0.077 -0.029  0.163  0.644
Cadres (M)      -0.132  0.873 -0.007  0.181 -0.110
Prof_Int (M)    -0.098  0.227 -0.001 -0.254  0.020
Employe (M)     -0.053 -0.160 -0.042 -0.095  0.006
Ouvrier (M)     -0.026 -0.385 -0.040  0.067 -0.071
Retraite (M)     3.914  0.253 -0.001 -0.045 -0.033
Inactif (M)     -0.009 -0.075 -0.082  0.378 -0.004
Inconnu(M)       0.132 -0.248 -0.279  0.201  0.327

which gives a constant ratio,

acpL$ind$coord/CA(M, graph = FALSE)$row$coord
                      Dim.1       Dim.2       Dim.3       Dim.4       Dim.5
Agriculteur (M) 0.001985182 0.001985182 0.001985182 0.001985182 0.001985182
Artisan (M)     0.001985182 0.001985182 0.001985182 0.001985182 0.001985182
Cadres (M)      0.001985182 0.001985182 0.001985182 0.001985182 0.001985182
Prof_Int (M)    0.001985182 0.001985182 0.001985182 0.001985182 0.001985182
Employe (M)     0.001985182 0.001985182 0.001985182 0.001985182 0.001985182
Ouvrier (M)     0.001985182 0.001985182 0.001985182 0.001985182 0.001985182
Retraite (M)    0.001985182 0.001985182 0.001985182 0.001985182 0.001985182
Inactif (M)     0.001985182 0.001985182 0.001985182 0.001985182 0.001985182
Inconnu(M)      0.001985182 0.001985182 0.001985182 0.001985182 0.001985182

which corresponds to 1/\sqrt{n}

1/sqrt(sum(M))
[1] 0.001985182

Visualising a Classification in High Dimension

So far, when discussing classification, we've been playing with my toy dataset (actually, I should not claim it's mine, it is inspired by the one used in the introduction of Boosting, by Robert Schapire and Yoav Freund). But in real life, there are more observations, and more explanatory variables. With more than two explanatory variables, it starts to be more complicated to visualise. For instance, consider

MYOCARDE=read.table(
"http://freakonometrics.free.fr/saporta.csv",
head=TRUE,sep=";")

where we have observations on emergency-room patients admitted for myocardial infarction, and we want to understand who survived, in order to build a predictive model. But before running any classifier, let us visualise our data. Since we have seven explanatory variables plus our class (survival or death), we can go for a PCA.

library(FactoMineR) # PCA (on the continuous variables)
X=MYOCARDE[,1:7]
acp=PCA(X)

To add the death/survival variable, we treat it as a numerical 0/1 variable (at least to get a direction)

MYOCARDE2=MYOCARDE
MYOCARDE2$PRONO=(MYOCARDE2$PRONO=="SURVIE")*1
acp=PCA(MYOCARDE2,quanti.sup=8,graph=TRUE)

The nice thing is that we can see which variables are collinear with that one. It is also possible to visualise individuals, and classes, too

acp=PCA(MYOCARDE,quali.sup=8,graph=TRUE)
plot(acp, habillage = 8,col.hab=c("red","blue"))


Statistical Interests in Large Cities

I always thought that there were some kinds of schools in statistics, areas (not to say universities or laboratories) where people have common interests in terms of statistical methodology, like people with a strong interest in extreme values, or in Lévy processes. I wanted to check this point, so I extracted information about articles published in about 35 journals in statistics, probability and econometrics. I got all the information from files extracted from http://scopus.com/

> setwd("/home/arthur/Documents/scopus/")
> L=list.files()
> z=NULL
> for(i in 1:length(L)){
+ B=read.csv(L[i])
+ z=c(z,as.character(B$Source.title))
+ }

Here is the list of the publications I have used

> Z=sort(table(z),decreasing=TRUE)
> Z[1:34]
                                 Computational Statistics and Data Analysis 
                                                                       4000 
                                           Journal of Multivariate Analysis 
                                                                       4000 
                                                         Econometric Theory 
                                                                       2631 
                                              Annals of Applied Probability 
                                                                       2051 
                                                             Bioinformatics 
                                                                       2000 
                                                                 Biometrika 
                                                                       2000 
                                                    Journal of Econometrics 
                                                                       2000 
                              Journal of Statistical Planning and Inference 
                                                                       2000 
                            Journal of the American Statistical Association 
                                                                       2000 
                                                        Operations Research 
                                                                       2000 
                                                        Pattern Recognition 
                                                                       2000 
                                      Probability Theory and Related Fields 
                                                                       2000 
                                                          Signal Processing 
                                                                       2000 
                                             Journal of Applied Probability 
                                                                       1999 
                                Stochastic Processes and their Applications 
                                                                       1999 
                         Annals of the Institute of Statistical Mathematics 
                                                                       1985 
                                                       Annals of Statistics 
                                                                       1797 
                                                              Technometrics 
                                                                       1446 
                                       Journal of Machine Learning Research 
                                                                       1441 
                                                              Biostatistics 
                                                                       1120 
                                         Statistics and Probability Letters 
                                                                       1062 
                                                      Annals of Probability 
                                                                       1054 
                                                   Statistics and Computing 
                                                                        927 
                                            Advances in Applied Probability 
                                                                        895 
                                        Journal of Nonparametric Statistics 
                                                                        836 
                                                   Computational Statistics 
                                                                        813 
                                            Journal of Time Series Analysis 
                                                                        811 
                          Journal of Computational and Graphical Statistics 
                                                                        802 
     Journal of the Royal Statistical Society. Series C: Applied Statistics 
                                                                        794 
Journal of the Royal Statistical Society. Series B: Statistical Methodology 
                                                                        793 
                                                                 Biometrics 
                                                                        784 
                                                           Machine Learning 
                                                                        559 
                                                  SIAM Journal on Computing 
                                                                        433 
                                     International Journal of Biostatistics 
                                                                        368

The first problem is that it is difficult to extract universities and locations of contributors. When you look at what we have in the dataset, here it is

> B$Authors.with.affiliations[1]
[1] Mischler, S., CEREMADE, UMR CNRS 7534, Universit\303\251 Paris-Dauphine, Place du 
Mar\303\251chal de Lattre de Tassigny, Paris Cedex 16, 75775, France; Mouhot, C., DPMMS,
Centre for Mathematical Sciences, University of Cambridge, Wilberforce Road, Cambridge, 
CB3 0WA, United Kingdom; Wennberg, B., Department of Mathematical Sciences, Chalmers 
University of Technology, G\303\266teborg, Sweden, Department of Mathematical Sciences, 
University of Gothenburg, G\303\266teborg, 41296, Sweden

The first step was to split that whole string, using commas as separators

> setwd("/home/arthur/Documents/scopus/")
> L=list.files()
> v=NULL
> for(i in 1:length(L)){
+ B=read.csv(L[i])
+ A=B$Authors.with.affiliations
+ for(j in 1:length(A)){
+ x1=as.character(A[j])
+ x2=strsplit(x1,",")
+ v=c(v,x2[[1]])}
+ }

I now have a very long vector, which contains a lot of things!

> V=sort(table(v),decreasing=TRUE)
> names(V)[1:40]
 [1] " United States"                           
 [2] " Department of Statistics"                
 [3] " Department of Mathematics"               
 [4] " M."                                      
 [5] " J."                                      
 [6] " A."                                      
 [7] " S."                                      
 [8] " United Kingdom"                          
 [9] " France"                                  
[10] " D."                                      
[11] " P."                                      
[12] " Y."                                      
[13] " R."                                      
[14] " China"                                   
[15] " H."                                      
[16] " Germany"                                 
[17] " Department of Economics"                 
[18] " C."                                      
[19] " G."                                      
[20] " L."                                      
[21] " Canada"                                  
[22] " T."                                      
[23] " University of California"                
[24] " Department of Biostatistics"             
[25] " F."                                      
[26] " B."                                      
[27] " Department of Mathematics and Statistics"
[28] " E."                                      
[29] " K."                                      
[30] " N."                                      
[31] " Department of Computer Science"          
[32] " Japan"                                   
[33] " Australia"                               
[34] " X."                                      
[35] " Hong Kong"                               
[36] " Italy"                                   
[37] " W."                                      
[38] " Spain"

 

A lot of useless information, for sure, but also some more valuable information, like university names,

> names(V)[c(23,50,58,59,61,66,67,72,84,87,89)]
 [1] " University of California"     " Stanford University"         
 [3] " Chapel Hill"                  " University of Washington"    
 [5] " Stanford"                     " University of Michigan"      
 [7] " Carnegie Mellon University"   " Columbia University"         
 [9] " Cornell University"           " University of North Carolina"
[11] " Duke University"

or cities,

> names(V)[c(35,40,41,44,45,47,51,53,54,55,56,62,64,65,
+ 70,71,82,92,97)]
 [1] " Hong Kong"    " New York"     " Berkeley"     " Cambridge"   
 [5] " Boston"       " Seattle"      " London"       " Pittsburgh"  
 [9] " Los Angeles"  " Singapore"    " Beijing"      " Philadelphia"
[13] " Ann Arbor"    " Atlanta"      " Toronto"      " Baltimore"   
[17] " Chicago"      " San Diego"    " Tokyo"

I decided to focus on 90 locations. Each time I have a string which is the same as the name of one of my 90 cities, I keep it. So if there is a Prof. Ann Arbor, I will consider that person as a city. Here is the graph of all locations, with the number of “articles“, or contributors: if four people in San Francisco published an article together, the article appears four times in my dataset. I did spend some time with Cambridge, and I decided to move Cambridge, MA to Boston, MA, just for convenience.

> require("geosphere")
> require("maps")
> data(world.cities)
> data(us.cities)
> data(canada.cities)
> LOCALIZE=Vectorize(function(v){z=findLatLon(v)$latlon;if(is.na(z)){z=c(NA,NA)};return(z)})
> CITIES=names(V)[city]
> NCITIES=substr(CITIES,2,nchar(CITIES))
> NCITIES[substr(NCITIES,1,5)=="Paris"]="Paris"
> NCITIES=unique(NCITIES)
> LC=matrix(unlist(LOCALIZE(NCITIES)),nrow=2)
> BASELOC=data.frame(CITY=NCITIES,LAT=LC[2,],LON=LC[1,])

I did spend some time on some cities, such as Paris, or London, where the zip code was sometimes attached to the city name. I also had to fix some problems… But after a few minutes, I was able to locate those cities.

Then, I wanted to extract information about all publications. Keywords are interesting, but over 266,567 “publications“, they are hard to use (sometimes the field is not filled in, sometimes it is extremely general, or extremely specialised). So I decided to extract words from the title of the contribution.

> VCITY=NULL
> VKW=NULL
> VY=NULL
> VJ=NULL
> VA=NULL
> VW=NULL
> art=0
> for(i in 1:length(L)){
+ B=read.csv(L[i])
+ A=B$Authors.with.affiliations
+ for(j in 1:length(A)){
+ art=art+1
+ x1=as.character(A[j])
+ x2=strsplit(x1,",")
+ listu=which(x2[[1]]%in%CITIES)
+ if(length(listu)>0){
+ C=tolower(paste(" ",as.character(B[j,"Title"]),sep=""))
+ x3=strsplit(C," ")[[1]]
+ kx3=which(!x3%in%c("a","the","of","an","in","",
+ "for","and","with","on","to","using","from","under"))
+ x3=x3[kx3]
+ J=as.character(B[j,"Source.title"])
+ Y=B[j,"Year"]
+ n1=length(listu)
+ n2=length(x3)
+ VCITY=c(VCITY,rep(x2[[1]][listu],each=n2))
+ VKW=c(VKW,rep(x3,n1))
+ VY=c(VY,rep(Y,n1*n2))
+ VJ=c(VJ,rep(J,n1*n2))
+ VA=c(VA,rep(art,n1*n2))
+ VW=c(VW,rep(1/n2,n1*n2))
+ }}}
> BASEUNIV=data.frame(CITY=VCITY,KEYW=VKW,YEAR=VY,JOURNAL=VJ,INDICE=VA,W=VW)

Here, I got a huge dataset: one line is one city and one "word". Now, let us select one word, and plot how important that word is in each city,

> Figure=function(keyword="bayesian"){
+ SBASEUNIV=BASEUNIV[BASEUNIV$KEYW==keyword,]
+ SB2=tapply(SBASEUNIV$W,SBASEUNIV$CITY,sum)
+ SB=tapply(BASEUNIV$W,BASEUNIV$CITY,sum)  # total weight per city, all keywords
+ D=data.frame(CITY=names(SB2),CT=as.vector(SB2))
+ BASE=merge(BASELOC,D)
+ library(maps)
+ library(RColorBrewer)
+ CL=brewer.pal(6, "RdBu")
+ Y=SB2/SB*sum(SB,na.rm=TRUE)/sum(SB2,na.rm=TRUE)
+ X=cut(Y,breaks=c(0,.5,.75,1,1.333,2,10000))
+ levels(X)=1:6
+ map("world")
+ points(BASE$LON,BASE$LAT,pch=1,col=CL[as.numeric(X)],
+ cex=sqrt(Y*20),lwd=4)
+ }

In the code above, we compare with the independent case (if cities and keywords were independent), since we normalize using

SB2/SB*sum(SB,na.rm=TRUE)/sum(SB2,na.rm=TRUE)

For bayesian statistics (publication with the word bayesian in the title)

For nonparametric statistics (publication with the word nonparametric in the title)

For stochastic processes (publication with the word processes in the title)

(the problem here is that we cannot visualise the red circles: if, in a city, no one published on a given topic, the circle would be strong red, but tiny, or even null… so we won't see it). I decided to keep the top 250 words that appeared in titles, and I removed standard common words, such as it, the, of, etc.

> listewords=names(sort(table(BASEUNIV$KEYW),decreasing=TRUE)[1:250])
> listewords=listewords[-c(1,2,3,4,7,15,24,42,129)]
> idx=which(BASEUNIV$KEYW%in%listewords)
> T=table(as.character(BASEUNIV$KEYW[idx]),BASEUNIV$CITY[idx])
> MATRICE=as.matrix(T)

I had a nice contingency table, with 90 cities, versus 200 words.

> library("FactoMineR")
> res.pca = PCA(t(MATRICE), scale.unit=TRUE, ncp=5, 
+ graph=FALSE)
> plot.PCA(res.pca, axes=c(1, 2), choix="ind")

Principal component analysis was disappointing,

So I decided to extract, per city, the largest contributions to the chi-square distance

> K2=chisq.test(MATRICE)
> M2=K2$expected

On the graph below, the green level is the theoretical count of each word, under an independence assumption, and the dark line is the observed count. For instance, in San Francisco, on top, we have words that were not used much (e.g. processes: given the total number of publications, one would expect 6 or 7 publications with the word processes, but there were actually 0), and below, words that were used intensively compared with the other cities (such as method and structure; the latter was expected two or three times, but appeared in 25 publications),
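
For the record, here is a hypothetical sketch of how such a city-level comparison can be drawn. This is not the exact code behind the figures; the helper CityWords, and the assumption that the column names of MATRICE match the city strings used above, are mine.

> CityWords=function(city,k=15){
+ obs=MATRICE[,city]
+ expt=M2[,city]
+ contrib=(obs-expt)^2/expt            # contributions to the chi-square
+ idx=order(contrib,decreasing=TRUE)[1:k]
+ bp=barplot(expt[idx],horiz=TRUE,col="green",
+ names.arg=rownames(MATRICE)[idx],las=1)
+ segments(obs[idx],bp-.4,obs[idx],bp+.4,lwd=3)   # observed counts
+ }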

In Boston, MA,  we got

In New York City, NY

In Paris (France),

But to be honest, I was disappointed. I mean, yes, I can see on the previous graphs, for instance, that there are a lot of people working on stochastic processes, with the words Brownian and Markov. But in most cases, I can hardly get an interpretation…

I tried a final graph, on the interconnections between authors. The first point is that it is common to have joint publications with colleagues in the same city. The larger the point, the more joint papers,

But we can also add cross publications: the thinner the line, the fewer joint publications between two places,
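
Again, a hypothetical sketch of the construction (not the exact code behind the figure): count, for each pair of cities, the number of articles with contributors in both places, and draw great-circle lines whose width increases with that count. The city strings are trimmed here so that they match BASELOC$CITY, which is an assumption on my side.

> PAIRS=NULL
> for(a in unique(BASEUNIV$INDICE)){
+ cities=unique(trimws(as.character(BASEUNIV$CITY[BASEUNIV$INDICE==a])))
+ if(length(cities)>1) PAIRS=rbind(PAIRS,t(apply(combn(cities,2),2,sort)))
+ }
> CNT=table(paste(PAIRS[,1],PAIRS[,2],sep=" / "))
> map("world")
> for(k in 1:length(CNT)){
+ cc=strsplit(names(CNT)[k]," / ")[[1]]
+ p1=BASELOC[BASELOC$CITY==cc[1],c("LON","LAT")]
+ p2=BASELOC[BASELOC$CITY==cc[2],c("LON","LAT")]
+ if(nrow(p1)==1 && nrow(p2)==1 && !any(is.na(rbind(p1,p2))))
+ lines(gcIntermediate(p1,p2),lwd=.5+log(CNT[k]))
+ }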

We can see that I missed, in the first part, the Cambridge-Boston distinction, since Cambridge should now stand for Cambridge, UK. But the line is clearly too thick to be explained only by collaborations between Cambridge, UK, and Boston, MA. Still, a lot of them can be explained, such as Hong Kong and Shanghai, or Mexico and Guanajuato.

If someone has better ideas on how to properly import the locations (or the affiliations; it might be fun to focus on universities) and perhaps the abstracts (rather than just the titles), I'd be glad to try the same study on economics journals…

Think academic journals look the same? Well, some do…

We saw yesterday that finding an optimal strategy to publish is not that simple. And actually, it can be even more difficult when the journal rejects the paper (not because it is not correct, but because “it does not fit” with the standards, the quality of the journal, the audience, the editor's mood, or whatever). The author basically has two choices,

  • forget about the article and move to something else (e.g. start a blog where he/she will be the author and the editor)
  • pretend that the article is worth publishing and then try to find another journal with similar interests


But this last choice is not that easy, since sometimes the author thinks that this journal was indeed the one that should have published it (e.g. all the articles on the subject have been published in that journal).
So I was wondering if there were clusters of journals, i.e. journals that publish almost the same kind of articles (so that next time one of my papers is rejected by the editor, I can just go for some journal in the same cluster).
So what I did is extremely simple: I looked at article titles and looked for correlations between word frequencies (I could have done that on keywords, but I am not a big fan of those keywords). I looked at 35 journals (that are somehow related to my areas of interest) and at the titles of all articles published over the last 20 years. Then I kept the top 1000 words, and I removed standard short words (“a“, “the“, “is“, etc). Actually, my top words look like

"models" "model" "data" "estimation" "analysis" "time" 
"processes" "risk" "random" "stochastic" "regression" 
"market" "approach" "optimal" "based" "information" 
"evidence" "linear" "games" "bayesian" "theory" "effects"
"distribution" "multivariate" "tests" "markets" "markov"
"equilibrium" "dynamic" "process" "distributions" 
"application" "stock" "likelihood"

Then, I ran a principal component analysis on my dataset (containing 960 variables – here words – and 35 observations – here journal names).

library("FactoMineR")
res.pca = PCA(MATRICE, scale.unit=TRUE, ncp=5, 
graph=FALSE)
plot.PCA(res.pca, axes=c(1, 2), choix="ind")

The projection of the journals on the first two axes looks like this

Here, we can clearly observe some clusters: on the upper left, Journal of Finance and Journal of Banking and Finance (say, financial journals); on the upper right, Biometrika, Biometrics, Computational Statistics and Data Analysis and Journal of Econometrics (JASA is not far away), i.e. applied statistics journals; and below, on the right, Stochastic Processes and their Applications, Annals of Applied Probability, Journal of Applied Probability, Annals of Probability, Proceedings of the AMS and Topology and its Applications (i.e. more theoretical journals).
Note that the projection is rather robust: if I consider my first 200 words, the graph is the same
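
As a quick check, a sketch (assuming MATRICE is the journal-by-word count matrix above, with the top 200 words taken by frequency),

top200 = names(sort(colSums(MATRICE), decreasing=TRUE))[1:200]
res.pca.200 = PCA(MATRICE[, top200], scale.unit=TRUE, ncp=5, graph=FALSE)
plot.PCA(res.pca.200, axes=c(1, 2), choix="ind")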

In order to go further in the interpretation, we can also plot variables, i.e. words from titles,

where we cannot distinguish anything. So if I just look at my top 30, here they are,

On the top left we see market(s), risk and information; on the top right, analysis, effects, models and tests; while below we see Markov and process(es). And we can observe interesting facts: in finance and in statistics, we talk about dynamics, while in theoretical (mathematical) journals it is about processes.
But the goal was to find clusters, i.e. classes of journals that publish papers with similar titles.

DISTANCE = dist(MATRICE)
cah = hclust(DISTANCE) 
plot(cah)

Here we have

While some classes are rather natural (Journal of Applied Probability and Advances in Applied Probability, or Economic Theory, Journal of Economic Theory and Journal of Mathematical Economics), some strong associations are not easy to understand (e.g. Insurance: Mathematics and Economics and Management Science, or Annals of Statistics and the Journal of Multivariate Analysis).
Again, it might be possible to spend hours on the graphs, but if I want – someday – to submit something to one of those journals, I guess I have to stop here, and move to something else…