Tag Archives: database

Names in the U.S., from James Smith to Jose Rodriguez

Two weeks ago, @mona published an interesting post on her blog about a difficult question: What’s The Most Common Name In America? There were stats about first names in the U.S., and about last names too. That information is, somehow, easy to get. But it is usually more complicated to get the first and the last name together, for confidentiality reasons! The datasets I deal with are supposed to be anonymized, so I never see first and last names. In a previous post, a few years ago, I mentioned the so-called Social Security Death Master File. In that file, we have Social Security numbers, with the date of birth, the date of death, as well as the first and the last name. So I used those files to get stats about the first and the last names of American citizens. Of course, it is very restrictive: I only have U.S. citizens who had a Social Security number (which is not compulsory in the U.S., as far as I understand) and who passed away (as mentioned in the name of the dataset: the Death Master File). Another great thing about that dataset is that I have the date of birth, so I can look at some cohort effect (see opendata.stackexchange for an interesting discussion of that dataset).
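As an illustration (this sketch is mine, not from the original post), the Death Master File is a fixed-width text file, so it can be read with read.fwf. The field widths, column names and file name below are assumptions, and should be checked against the official record layout before use:

# sketch: reading a Death Master File extract with read.fwf
# widths, column names and file name are assumptions, to be checked
# against the official record layout
widths = c(1, 9, 20, 4, 15, 15, 1, 8, 8)
cnames = c("type", "ssn", "last_name", "suffix", "first_name",
           "middle_name", "vp_code", "death_date", "birth_date")
dmf = read.fwf("dmf_extract.txt", widths = widths, col.names = cnames,
               colClasses = "character", n = 10000)
# dates are stored as MMDDYYYY, so the year of birth is in characters 5 to 8
dmf$birth_year = as.integer(substr(dmf$birth_date, 5, 8))
# most common first names among a given cohort
head(sort(table(dmf$first_name[dmf$birth_year >= 1950]), decreasing = TRUE))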

Continue reading Names in the U.S., from James Smith to Jose Rodriguez

How to import some parts of a large database

In the introduction of Computational Actuarial Science with R, there was a short paragraph on how we could import only some parts of a large database, by selecting specific variables. The trick was to use the following function:

> read.table.select.columns = function(datatablename, I, sep = ";"){
+   # read the first row only, to get the column names
+   datanc = read.table(datatablename, header = TRUE, sep = sep,
+     skip = 0, nrows = 1)
+   # skip every column ("NULL"), except those in I (NA lets R guess the type)
+   mycols = rep("NULL", ncol(datanc))
+   names(mycols) = names(datanc)
+   mycols[I] = NA
+   # read the full file again, importing only the selected columns
+   datat = read.table(datatablename, header = TRUE, sep = sep,
+     colClasses = mycols)
+   return(datat)}
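The trick is the colClasses argument of read.table: a column declared as "NULL" is skipped entirely while the file is parsed, and NA lets R guess its type, so only the selected variables are ever stored in memory.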

For instance, if we use the same dataset as in the introduction, we can import only two variables of interest,

> loc="http://myweb.fsu.edu/jelsner/extspace/extremedatasince1899.csv"
> dt1 = read.table.select.columns(loc, c("Region", "Wmax"), sep = ",")
> head(dt1,10)
    Region      Wmax
1    Basin 105.56342
2    Basin  40.00000
3    Basin  35.41822
4    Basin  51.06743
5  Florida  87.34328
6    Basin  96.64138
7     Gulf  35.41822
8       US  35.41822
9       US  87.34328
10      US 106.35318
> dim(dt1)
[1] 2100    2
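As an aside (this is not from the original post), the same column selection can be obtained in a single call with the fread function from the data.table package, assuming it is installed,

# sketch: fread imports only the columns listed in 'select'
library(data.table)
dt2 = fread(loc, select = c("Region", "Wmax"))
dim(dt2)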

Continue reading How to import some parts of a large database

Evolution of the number of natural catastrophes

Recently (here and there) I mentioned reinsurers’ comments on the increase in the number of catastrophes. But actually, there are other sources of information about catastrophes, for instance the Centre for Research on the Epidemiology of Disasters (http://www.cred.be/). As they mention, there has been an increase in the number of natural catastrophes. For a disaster to be entered into the so-called EM-DAT database, at least one of the following criteria must be fulfilled:

• Ten (10) or more people reported killed.
• Hundred (100) or more people reported affected.
• Declaration of a state of emergency.
• Call for international assistance.

Note that it is possible to download the data (here), so I guess I will work further with that database. To be continued…
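In the meantime, here is a minimal sketch of what a first look could be, assuming the data has been downloaded as a CSV file; the file name (emdat_extract.csv) and the column name (Year) are assumptions of mine, to be matched to the actual export:

# sketch: yearly counts of natural catastrophes from an EM-DAT extract
# "emdat_extract.csv" and the "Year" column are hypothetical names
emdat = read.csv("emdat_extract.csv", stringsAsFactors = FALSE)
counts = table(emdat$Year)
plot(as.integer(names(counts)), as.numeric(counts), type = "h",
     xlab = "Year", ylab = "Number of natural catastrophes")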