Yesterday evening, I wanted to play with Twitter and see which websites I reference most often in my tweets, to get a Top 4 list.
The first problem I ran into is that installing twitteR on Ubuntu is not that simple! You have to install RCurl properly, and before installing the package in R, it is necessary to run the following line in a terminal
$ sudo apt-get install libcurl4-gnutls-dev
then, launch R
$ R
and then you can run the standard
> install.packages("RCurl")
and finally install the package of interest,
> install.packages("twitteR")
Then, the second problem I had was that twitteR was updated recently because of Twitter’s new API. Now, you have to register on Twitter’s developers webpage, get a consumer key and secret, and use them in the following function (I did change both of them below, so if you try to run the following code, you will probably get an error message),
> library(twitteR)
> cred <- getTwitterOAuth("ikzCtYif9Rwoood45w","rsCCifp99kw5sJfKfOUhhwyVmPl9A")
> registerTwitterOAuth(cred)
[1] TRUE
During that step, you are also asked to visit a webpage and enter the PIN displayed there.
To enable the connection, please direct your web browser to:
http://api.twitter.com/oauth/authorize?oauth_token=cQaDmxGe...
When complete, record the PIN given to you and provide it here:
It is a pain in the ass, trust me. Anyway, I have been able to run it, and I can now get the list of all my (recent) tweets
> T <- userTimeline('freakonometrics',n=5000)
Now, my (third) problem was to extract the URLs of the references from my tweets. The second tweet of the list was
- [textmining] “How a Computer Program Helped Reveal J. K. Rowling as Author of A Cuckoo’s Calling” scientificamerican.com/article.cfm?id… by @garethideas
But when you look at the text, you see
> T[[2]]
[1] "freakonometrics: [textmining] \"How a Computer Program Helped Reveal J. K. Rowling as Author of A Cuckoos Calling\" https://t.co/wdmBGL8cmj by @garethideas"
So what I get is not the URL used in my tweet, but a shortened URL from https://t.co/. Fortunately, @3wen (as always) was able to help me with the following functions,
> extraire <- function(entree,motif){
+   # return the first group captured by the pattern 'motif', or NA
+   res <- regexec(motif,entree)
+   if(length(res[[1]])==2){
+     debut <- (res[[1]])[2]
+     fin <- debut+(attr(res[[1]],"match.length"))[2]-1
+     return(substr(entree,debut,fin))
+   }else return(NA)}
> unshorten <- function(url){
+   # send a HEAD request without following the redirection,
+   # then read the target from the 'location' header
+   uri <- getURL(url, header=TRUE, nobody=TRUE, followlocation=FALSE,
+     cainfo = system.file("CurlSSL", "cacert.pem", package = "RCurl"))
+   res <- try(extraire(uri,"\r\nlocation: (.*?)\r\nserver"))
+   return(res)}
Now, if we use those functions, we can get the true URL,
> url <- "https://t.co/wdmBGL8cmj"
> unshorten(url)
[1] http://www.scientificamerican.com/article.cfm?id=how-a-computer-program-helped-show..
Now I can play with my list, to extract the URLs and then the websites’ addresses,
> exturl <- function(i){
+   text_tw <- T_text[i]
+   locunshort2 <- NULL
+   # locate the words starting with "http" in the tweet
+   indtext <- which(substr(unlist(strsplit(text_tw, " ")),1,4)=="http")
+   if(length(indtext)>0){
+     loc <- unlist(strsplit(text_tw, " "))[indtext]
+     locunshort <- unshorten(loc)
+     if(is.na(locunshort)==FALSE){
+       # keep only the website's address, e.g. www.scientificamerican.com
+       locunshort2 <- unlist(strsplit(locunshort, "/"))[3]}}
+   return(locunshort2)}
Using apply with this function on my list, and counting with a simple table() call (a sketch of that step is given below), I can see that my top four reference websites (over more than 900 tweets) are the following:
       www.nytimes.com     www.guardian.co.uk 
                    19                     21 
www.washingtonpost.com         www.lemonde.fr 
                    21                     22 
Nice, isn’t it?
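For completeness, here is a minimal sketch of that counting step (the call to twListToDF() and the exact sorting are assumptions on my part, not necessarily the code I actually ran):

> T_text <- twListToDF(T)$text                 # assumption: flatten the statuses to get their text
> sites <- sapply(seq_along(T_text), exturl)   # exturl() indexes into the global T_text
> sort(table(unlist(sites)), decreasing=TRUE)[1:4]   # count the websites, keep the top four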
This is exactly what I’ve been looking for, but I am going to use a CSV file, not a Twitter list. Any suggestions?
This might be a basic question, but I started learning R recently…
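A minimal sketch of how that could work (the file name and the column named "text" are hypothetical), reusing exturl() and unshorten() from the post:

> tweets <- read.csv("tweets.csv", stringsAsFactors=FALSE)   # hypothetical file
> T_text <- tweets$text                       # exturl() reads the global T_text
> sites <- sapply(seq_along(T_text), exturl)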
Hello,
A few small typos have crept into the code:
# the changes to make are:
}else {return(NA)}}
T$text[i]
In the end, when I use the command lapply(T$text,exturl), everything is returned as NULL. 🙁 (I am new to R, I code almost exclusively in SAS.) Could you enlighten me?
Best regards.
Are you properly logged in to your twitter account? That’s surprising… I will have a look when my schedule clears up a bit.
Hello,
Thank you for your reply. Yes, I am properly logged in to my twitter account, and the command T$text does return all the tweets. When I type exturl(35) (35 being the 35th tweet, one that contains an http://t.co link), it does return the unshortened URL. But with lapply, all the tweets are returned as NULL, even the ones containing an http://t.co link.
Thanks again.
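For readers hitting the same issue, a plausible explanation (an assumption on my part, since the thread was left unresolved): exturl() expects an index i and looks up T$text[i], so lapply(T$text,exturl) passes each tweet’s text as i; indexing a character vector with an unknown string yields NA, and the function then falls through and returns NULL. A sketch of two possible fixes:

> sites <- lapply(seq_along(T$text), exturl)   # pass indices, not the texts themselves

or a version of the function that takes the text directly,

> exturl2 <- function(text_tw){
+   locunshort2 <- NULL
+   mots <- unlist(strsplit(text_tw, " "))
+   indtext <- which(substr(mots,1,4)=="http")
+   if(length(indtext)>0){
+     locunshort <- unshorten(mots[indtext])
+     if(!is.na(locunshort)){
+       locunshort2 <- unlist(strsplit(locunshort, "/"))[3]}}
+   return(locunshort2)}
> sites <- lapply(T$text, exturl2)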
I would suggest:
(1) Rather than using
cainfo = system.file("CurlSSL", "cacert.pem", package = "RCurl")
in getURL, it might be better to set it globally (resulting in fewer extra lines of code in larger programmes, because you don’t have to explicitly use the cainfo parameter anymore):
require(RCurl)
options(RCurlOptions = list(cainfo = system.file("CurlSSL", "cacert.pem", package = "RCurl")))
# rest of your code
(2) To extract a URL from a tweet, one method might be:
gsub(".*(http://t.co/[[:alnum:]]+).*", "\\1", tweet)
# send to unshorten
Nice! Thanks.
Or you can use httr, which automatically sets the cainfo for you (as well as other common settings), and also provides the HEAD function.
Indeed! http://cran.r-project.org/web/packages/httr/httr.pdf Thanks for the tip (and for your work on that package). Sounds awesome… but please, give me a few days to play with it!
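For the record, a minimal sketch of what unshortening might look like with httr (an assumption on my part, based on a quick look at the manual, not tested code):

> library(httr)
> r <- HEAD("https://t.co/wdmBGL8cmj", config(followlocation = FALSE))  # do not follow the redirect
> headers(r)$location                                                   # read the target URL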
Hi Arthur, I was wondering if you could help me understand, on Twitter’s developers website, in this new OAuth process, what to insert into “website” (after the application name & description)? Because I’m just trying to get tweets into R, not into a website.
Thanks a lot.
Hey, couldn’t you simply use gsub or grep to remove everything that does not start with http, or basically is not a URL? Agreed, not very elegant, but I guess the number of lines of code would be smaller.
What I get from my list are URLs, but you’re right, I could have written more elegant code.
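For instance, a minimal sketch of that idea (a hypothetical base-R one-liner, not the code used in the post):

> shortened <- regmatches(T_text, gregexpr("http[^ ]+", T_text))   # pull every http… token
> sites <- sapply(unlist(shortened), unshorten)                    # then unshorten each of them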