Tag Archives: insurance

Variance decomposition and price segmentation in Insurance

Today, I was giving a talk at the Economics department, and I got a very interesting question about some tables I keep showing to explain why insurance companies like segmentation. The tables illustrate three different cases. Here, S stands for the individual (random) loss.

  • the first one is the case where the premium asked is the same for all the insured – i.e. the pure premium \mathbb{E}[S]

As explained, the loss is here on an individual basis, so, per policy, the insurer faces the (random) loss S-\mathbb{E}[S], which is, on average, null. That's the second line. For the last line, I keep saying that we look at the overall loss of the insurer, but that's not correct. With a factor n, we would have the variance of the total loss for the insurance company; we simply removed that n factor in the table.

  • then, we have perfectly observable heterogeneity: insured have a risk factor \Omega, observable, and in that case, the 'pure' premium is \mathbb{E}[S|\Omega]

That's what we have below. Here again, on average, the insured should have a null profit. And the total variance (which was \text{Var}[S] in our previous example) is now split into two parts (that's basically Pythagoras' theorem).
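Written out, this is simply the law of total variance (the 'Pythagoras' decomposition mentioned above):

\text{Var}[S]=\mathbb{E}\big[\text{Var}[S|\Omega]\big]+\text{Var}\big[\mathbb{E}[S|\Omega]\big]

The first term is the variance of the insurer's result under perfect segmentation, and the second is the dispersion of the premiums across risk classes.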

The interpretation is the following

And then, I usually mention the third and last case, more realistic

  • the risk factor \Omega is not observable, but segmentation is still possible using some proxy of the risk factor, obtained using some covariates, and the 'pure' premium is \mathbb{E}[S|\boldsymbol{X}]

And here also, there is a nice interpretation, because of the variance decomposition: there is one part that we observed previously, with some 'perfect pricing', and an additional part (which is positive), related to the fact that the covariates are only a proxy of the risk factor…

The term on the left is then a lower bound, attained when the covariates available for pricing actually allow us to recover the risk factor.
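In formulas, and assuming the covariates carry less information than the risk factor (i.e. \sigma(\boldsymbol{X})\subset\sigma(\Omega)), the residual variance of the insurer splits as

\mathbb{E}\big[\text{Var}[S|\boldsymbol{X}]\big]=\mathbb{E}\big[\text{Var}[S|\Omega]\big]+\mathbb{E}\big[\text{Var}\big(\mathbb{E}[S|\Omega]\,\big|\,\boldsymbol{X}\big)\big]\geq\mathbb{E}\big[\text{Var}[S|\Omega]\big]

and the inequality becomes an equality when \mathbb{E}[S|\boldsymbol{X}]=\mathbb{E}[S|\Omega], i.e. when the covariates allow us to recover the risk factor.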

That was my story, but the fact that n (the portfolio size) was not mentioned in the tables was a bit confusing… So I decided to create some graphs to illustrate those three cases

  • same premium for everyone

Consider some simple simulations. On the graph on the left, we have on the x-axis the risk factor, and on the y-axis, the loss (going roughly from 0 to 20). The pure premium is the average of those losses; here, it is 10. That's the plain red line (on the left). In the middle, the y-axis is the insured's profit/loss per policy. Someone with a loss close to 0 means a gain of 10, someone with a loss close to 20 means a loss of 10. On average, there is no profit (that's the plain line). And then, on the right, we have the distribution of the profit/loss (per contract). Again, on average it is 0, with some variance.

  • premium based on covariates

Consider here a simple covariate x: assume we've been able to create a binary variable that can distinguish the low risks from the high risks. Here, there are two levels for the premium. The low premium is close to 6, and the high one is close to 14. That's again the graph on the left.

Then we have the profit/loss per policy for the insured, in the middle. Here, when the loss was close to 0, the gain is smaller: it is 6 (while it was 10 before). When the loss was close to 10, previously it meant a zero profit, but now it means either a loss of 4 or a gain of 4. The profit/loss distribution is now on the right. There is less dispersion, and less variance. That's the decrease of variance we discussed before. To summarize, segmentation does reduce the variability of the result for the insurance company. That's what we observe on the right.

  • premium based on the risk factor

Assume now that \Omega is observable. And that we use it for our pricing. The premium is now continuous, and it is the red line, on the left. The profit/loss (in the middle) is the difference between the loss, and its expected value (conditional on the risk factor). And on the right, we have the distribution.

As expected, there is much less variability in the profit/loss distribution of the insurance company in that case. And actually, that's a lower bound for the variance of the result of the insurance company… I hope that the graphs clarify what's going on here…
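For readers who want to reproduce the flavour of those three graphs, here is a minimal simulation sketch in R; the risk factor, the loss distribution and the binary proxy below are my own arbitrary choices, only meant to match the orders of magnitude mentioned above.

set.seed(1)
n     <- 1e5
omega <- runif(n)                                   # (unobservable) risk factor
S     <- rgamma(n, shape = 4, rate = 4/(20*omega))  # individual loss, E[S|omega] = 20*omega
x     <- (omega > .5)                               # binary proxy of the risk factor

pi1 <- mean(S)                # case 1: same premium for everyone
pi2 <- ave(S, x, FUN = mean)  # case 2: premium based on the binary covariate
pi3 <- 20 * omega             # case 3: premium based on the risk factor

c(var(S - pi1), var(S - pi2), var(S - pi3))         # decreasing variances of the per-policy result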

Networks to reinvent insurance?

The theory of networks, or graphs, was born in 1735, following the work of Leonhard Euler, who tried to find a walk – starting from a given point – that would bring us back to that point after passing once and only once over each of the seven bridges of the city of Königsberg. These networks can be compared to metro networks, made up of stations (nodes) that may or may not be linked by rails, or more generally to a road network, which can give rise to congestion studies, for example. But today, networks are mainly social, connecting people through friendship, professional, family, or monetary ties. Network analysis makes it possible to create relatively homogeneous communities that accept to share a risk, recreating mutualisation.

Network and credit

In genealogy, we will have hierarchical networks, a child being linked to his parents, who are themselves linked to their parents. In sociology, social networks make it possible to analyze the links between individuals (or organizations) within a group. Friendships can be studied in a schoolyard (a link that could be an invitation to a birthday party) or e-mail exchanges in a company (the Enron e-mail database has been widely used, with over 180,000 messages exchanged between 36,000 employees). Figure 1 shows two networks of 20 individuals (A, B, …, T).

 

Figure 1: Random networks, 20 nodes (Watts-Strogatz and Barabási)

In a Facebook or LinkedIn type vision, we will say that E and F are linked, in the sense of "friends", if there is a segment linking points E and F. A network can be directed, for example if we study the exchange of messages (E wrote to F) or money loans (E lent money to F). While historically only adjacency was studied (the existence, or not, of links), we can now add weights, for example the amount of a financial loan. Babutsidze (2012) thus studies the positions of French and German banks in interbank lending within the European zone (the nodes are then the banks). The study of networks within village communities in developing countries has led to a better understanding of informal finance mechanisms. Banerjee et al (2013) study the dissemination of information in a network, and more particularly microfinance loans.
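As an aside, networks like those of Figures 1 and 2 are easy to generate in R with the igraph package; the parameters below are my own choices, not necessarily the ones used for the figures.

library(igraph)
set.seed(123)
g_ws <- sample_smallworld(dim = 1, size = 20, nei = 1, p = .1)  # Watts-Strogatz (small world)
g_ba <- sample_pa(n = 20, m = 1, directed = FALSE)              # Barabasi-Albert (preferential attachment)
par(mfrow = c(1, 2))
plot(g_ws, vertex.label = LETTERS[1:20])
plot(g_ba, vertex.label = LETTERS[1:20])
as_adjacency_matrix(g_ws)[1:5, 1:5]   # adjacency: existence (or not) of a link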

While networks are useful for better organizing microcredit, CNN noted in 2015 that Facebook allowed credit organizations to use a borrower's social network to determine whether or not he or she represents a good credit risk. In particular, if friends' credit scores were too low, a person could be denied credit. This situation is dangerous because of the particular properties of networks, and more particularly the paradox of friends.

From the very small world to the paradox of friends

In 1929, Frigyes Karinthy hypothesized that any person on earth could be connected to any other person by a chain of individual relationships involving at most 6 links. "We should select anyone from the world's 1.5 billion people, anyone, anywhere. It seems that, using no more than five individuals, one of whom is a personal acquaintance, he could contact the chosen individual using nothing other than the network of personal acquaintances." This theory of six handshakes originated in a literary short story. It was not until the work of Michael Gurevich in the 1960s, and then of Stanley Milgram ten years later, that the first attempts to quantify these relationships appeared, under the name of the "Small World Problem".

While Leskovec & Horvitz (2008) confirmed this order of magnitude, by analyzing several billion messages exchanged on the Windows Live Messenger platform, more recently Bhagat et al (2016) estimated that any two people on Facebook were connected by an average of three and a half people. On the random network on the left, a person has, on average, 2 friends, while a random friend has, on average, 2.25 friends. On the right-hand network, the gap is even greater: there too a person has, on average, 2 friends, but a random friend has, on average, more than 4 friends.

 

Figure 2: Random networks, 500 nodes (Watts-Strogatz and Barabási)

This paradox, observed in 1991 by sociologist Scott Feld, is very easily demonstrated. Heuristically, we can see a link with the probabilistic property \frac{\mathbb{E}[X^2]}{\mathbb{E}[X]}=\mathbb{E}[X]+\frac{\text{Var}[X]}{\mathbb{E}[X]}>\mathbb{E}[X], where the term on the left is the average number of friends of my friends (the total number of friends of my friends, divided by my number of friends), which exceeds my average number of friends. The difference is all the greater as the dispersion of the number of friends increases. While the left-hand network is very dense, the right-hand network has a power-law property: the distribution of the number of friends follows a power law (or Zipf law, or Pareto law). Figure 3 shows the distribution of the number of friends on a network, on a double logarithmic scale: linearity indicates a power-law distribution. This type of distribution can be found in a very large number of networks, in particular Facebook, as shown by Wohlgemuth & Matache (2014).
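The paradox is easy to observe on a simulated scale-free network (a sketch, with my own parameter choices):

library(igraph)
set.seed(42)
g   <- sample_pa(500, m = 1, directed = FALSE)   # scale-free network, 500 nodes
deg <- degree(g)
mean(deg)                              # average number of friends of a random person
friends <- as.vector(as_edgelist(g))   # endpoints of all friendship links
mean(deg[friends])                     # average number of friends of a random friend (larger)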

 

Figure 3: Distribution of the number of friends on simulated random networks (Watts-Strogatz and Barabási in red)

The classic interpretation is that some people are central in the network, with a very large number of connections. This property is well known in marketing (where one speaks of a "peer effect"), but it also has impacts in risk management or public health. Christakis & Fowler (2010) have shown that influenza epidemics can be detected almost two weeks in advance by monitoring the infection in a social network. In particular, the analysis of the health of central people in a network is "an ideal way to predict outbreaks, but detailed information doesn't exist for most groups, and to produce it would be time-consuming and costly". To return to the example of the credit score, if the score happens to be correlated with the number of friends, the friends paradox makes it dangerous to use the friends' score as an indicator of an individual's risk!

The importance of homophily

Another important feature of networks is the notion of homophily, introduced in sociology in 2001 by two important articles, corresponding to the tendency to be connected to one's peers. McPherson et al (2001) assumed that similarity generates connection, and therefore that people's personal networks are homogeneous across many socio-demographic, behavioural and intrapersonal characteristics. Moody (2001) studied friendships in elementary school playgrounds in the United States, with a focus on interracial friendships. Easley & Kleinberg (2010) present a number of consequences of homophily, ranging from the seating at tables at business meals to the granting of credit in the United States. Measuring homophily amounts to asking, given pre-existing groups (according to gender, age, socio-professional category, etc.), how the links are distributed between groups, and within groups.

 

Figure 4: Low homophily (left) and high homophily (right)
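One simple way to measure it, in the spirit of Easley & Kleinberg (2010): compare the observed proportion of 'cross-group' links with the proportion 2pq expected if links were drawn at random (the network and the two groups below are simulated, just to illustrate the computation):

library(igraph)
set.seed(7)
g     <- sample_smallworld(1, 100, 2, .05)              # some network, 100 nodes
group <- sample(c("A","B"), vcount(g), replace = TRUE)  # two pre-existing groups
edges <- as_edgelist(g)
cross <- mean(group[edges[,1]] != group[edges[,2]])     # observed share of cross-group links
p <- mean(group == "A")
c(observed = cross, expected = 2*p*(1-p))               # observed well below expected suggests homophily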

In an insurance context, an actuary seeks to create tariff classes, groups that are homogeneous in terms of risk, according to explanatory variables (the so-called tariff variables). People who live in the same place, drive the same types of vehicles, and have the same characteristics, are likely to be in the same class. But if homophily exists in a population, a tariff group could perhaps be observed in a network of friends. Why not then consider creating groups within a network?

Using insurance networks

In this spirit, Friendsurance was launched in Germany in 2010 and had more than 100,000 insured in 2018. In France, a short collaborative insurance experiment was launched in 2015 with Inspeer, offering to share damage insurance deductibles (in car or home insurance) with friends. These types of collaborative insurance, sometimes called peer-to-peer insurance, are based on the formation of small groups by a broker. A portion of the insurance premiums paid goes into a group fund, the other portion to a third-party insurance company. Minor damage suffered by the policyholder is first covered by this group fund. For claims exceeding the deductible, the usual insurer steps in. A group can be formed by the insured, forming a social network a bit like Facebook. In this model, the only requirement is that all group members must have the same type of insurance (e.g. liability insurance with legal expenses insurance).
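A toy illustration of how a claim could be split in such a scheme; this is only my schematic reading of the mechanism described above, with an arbitrary deductible.

# split a claim between the group fund (up to the deductible) and the usual insurer
split_claim <- function(loss, deductible = 500) {
  c(group_fund = min(loss, deductible),
    insurer    = max(loss - deductible, 0))
}
split_claim(200)    # small claim: entirely borne by the group fund
split_claim(3000)   # large claim: the deductible by the fund, the rest by the insurer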

As Schiller (2013) noted, this type of mechanism has many virtues, the first being to reduce costs and the risk of fraud. There is no tendency to cheat on the cost of a claim when the risk is borne by family members or friends. The anonymity of mutuality that exists with the law of large numbers disappears. But aren't we reinventing a version 2.0 of tontine associations, with the strong return of risk sharing within close-knit communities?

References

Joshua Angrist. The perils of peer effects. Labour Economics, 30, 98-108, 2014.

Zakaria Babutsidze. Positions of French and German Banks in European interbank lending network. OFCE, March 2012.

Abhijit Banerjee, Arun Chandrasekhar, Esther Duflo & Matthew Jackson. Diffusion of Microfinance. Science, 341, 2013.

Smriti Bhagat, Moira Burke, Carlos Diuk, Ismail Onur Filiz & Sergey Edunov. Three and a half degrees of separation. Facebook Research, 2016.

Ananya Bhattacharya. Facebook patent: Your friends could help you get a loan – or not. CNN, August 4, 2015.

Nicholas Christakis & James Fowler. Social Network Sensors for Early Detection of Contagious Outbreaks. PLoS ONE, 5 (9): e12948, 2010. arXiv:1004.4792.

David Easley & Jon Kleinberg. Networks, Crowds, and Markets. Cambridge University Press. 2010.

Scott Feld. Why your friends have more friends than you do, American Journal of Sociology, 96 (6): 1464–1477, 1991.

Matthew Jackson. Social and Economic Networks. Princeton University Press, 2010.

Jure Leskovec & Eric Horvitz. Planetary-Scale Views on a Large Instant-Messaging Network. Microsoft Research, 2008.

Miller McPherson, Lynn Smith-Lovin & James Cook. Birds of a Feather: Homophily in Social Networks. Annual Review of Sociology. 27: 415–444, 2001.

James Moody. Race, School Integration, and Friendship Segregation in America. American Journal of Sociology, 107 (3): 679-716, 2001.

Wesley Perkins, Michael Haines & Richard Rice. Misperceiving the college drinking norm and related problems: a nationwide study of exposure to prevention information, perceived norms and student alcohol misuse. Journal of Studies on Alcohol 66 (4) : 470-478, 2005.

Ben Schiller. A Social Network For Insurance That Cuts Costs And Reduces Fraud. Fast Company, October 2013.

Brad Walker. How Peer-to-Peer Companies Are Transforming the Insurance Sector. The Street, April 2016.

Jason Wohlgemuth & Mihaela Matache. Small-World Properties of Facebook Group Networks. Complex Systems, 23, 2014.

[i] Complete data can be downloaded from https://snap.stanford.edu/data/email-Enron.html

[ii] https://www.friendsurance.com/ and https://www.inspeer.me/ respectively

Insurance, Actuarial Science, Data and Models

Our research chair ACTINFO, with our colleagues from Lyon at the DAMI, PREVENT'HORIZON and ACTUARIAT DURABLE chairs, will organize a two-day conference in Paris, on Insurance, Actuarial Science, Data & Models, in ten days.

We invited Katrien ANTONIO (KU Leuven), Alexandre BOUMEZOUED (Milliman Paris), Alfred GALICHON (New York University), Pierre-Yves GEOFFARD (Paris School of Economics), Meglena JELEVA (University of Paris Nanterre), Julie JOSSE (Ecole Polytechnique), Florence JUSOT (Paris Dauphine University), Michael LUDKOWSKI (University of California Santa Barbara), François PANNEQUIN (CREST and ENS Paris-Saclay), Florian PELGRIN (Edhec Business School), Dylan POSSAMAI (Columbia University) and Julien TRUFIN (ULB Brussels). More information (including the program) is online.

Insurance: Risk Pooling and Price Segmentation

Talk this afternoon at the seminar of Telecom ParisTech

Insurance is usually defined as "the contribution of the many to the misfortune of the few". This idea of pooling risks together using the law of large numbers legitimates the use of the expected value as the actuarial "fair" premium. In the context of heterogeneous risks, it is nevertheless possible to legitimate price segmentation based on observable characteristics. But nowadays, intensive segmentation can be observed, with a much wider range of premiums offered on a given portfolio. In this talk, we will briefly review statistical approaches to insurance pricing (classical econometric tools vs machine learning). We will then discuss recent experiments (the so-called "actuarial pricing games") organized since 2015, where real actuaries play in a competitive (artificial) market that mimics a real insurance market. We will present conclusions obtained from two editions: the first one, and the most recent one, where a dynamic version of the game was launched.

By the way, there will soon be a fourth version of the "Actuarial Pricing Game"… some information soon, on this blog…

R in Insurance, in Paris

The 5th conference on R in Insurance will be organized on Thursday 8 June 2017 at ENSAE, Paris. I will attend the conference, and the program is really nice (I was on the scientific committee – with Christophe Dutang, Markus Gesmann, Giorgio Alfredo Spedicato and Andreas Tsanakas – and I have to admit we received many interesting submissions). Furthermore, the gala dinner will take place at the restaurant of the Musée d'Orsay. I really can't miss it…

R in Insurance, 2017

Following the successful conferences in London (2013, 2014, 2016) and in Amsterdam (2015), the next edition will take place in Paris. The R in Insurance 2017 conference will take place at ENSAE on June 8.

This one-day conference will focus again on applications in insurance and actuarial science that use R, the lingua franca of statistical computation. The intended audience of the conference includes both academics and practitioners who are active or interested in the applications of R in insurance. The two invited speakers are Katrien Antonio (KU Leuven) and Julie Seguela (Covea). It will be a nice event!

Additional thoughts about ‘Lorenz curves’ to compare models

A few months ago, I mentioned a graph of some so-called Lorenz curves used to compare regression models, see e.g. Progressive's slides (thanks Guillaume for the reference)

The idea is simple. Consider some model for the pure premium (in insurance, it is the quantity that we like to model), i.e. the conditional expected value \mathbb{E}[Y|\boldsymbol{X}=\boldsymbol{x}].

On some dataset, we have our predictions \widehat{y}_i, as well as observed quantities y_i. The curves are obtained simply:

  • sort the observations so that \widehat{y}_{(1)}\geq \widehat{y}_{(2)}\geq\cdots\geq \widehat{y}_{(n)}

  • based on that ordering (from high risks to low risks, based on our predictions), we plot the Lorenz curve, i.e. the cumulated share of observed losses against the cumulated share of policies (see the sketch below)
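Here is a minimal sketch of that construction, on simulated predictions yhat and observed losses y (both made up, just to fix ideas):

lorenz_curve <- function(yhat, y) {
  o <- order(yhat, decreasing = TRUE)        # riskiest policies first, based on predictions
  cbind(policies = (1:length(y))/length(y),  # cumulated share of policies
        losses   = cumsum(y[o])/sum(y))      # cumulated share of observed losses
}
set.seed(1)
yhat <- rgamma(1000, 2, 2)
y    <- rgamma(1000, 2, 2/yhat)              # observed losses, with E[y] = yhat
L    <- lorenz_curve(yhat, y)
plot(L, type = "l", xlab = "share of policies", ylab = "share of losses")
abline(0, 1, lty = 2)                        # benchmark: a model with no segmentation power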


Pricing Game

In November, with Romuald Elie and Jérémie Jakubowicz, we will organize a session during the 100% Actuaires day, in Paris, based on a "pricing game". We provide two datasets (motor insurance, third party claims), with 2 years of experience, and 100,000 policies. Each 'team' has to submit a premium proposal for 36,000 potential insured for the third year (third party, material + bodily injury).

We will act as a 'price aggregator' for all the teams, with simple matching rules (the cheapest is chosen, or more complex rules, based on random selection among cheap insurers). The complete description is available online.

R codes to read the datasets are

> training <- read.csv2(
+ "http://freakonometrics.free.fr/training.csv")
> dim(training)
[1] 100021     20
> pricing <- read.csv2(
+ "http://freakonometrics.free.fr/pricing.csv")
> dim(pricing)
[1] 36311    15

Everyone is invited to play! The more, the merrier….

Computational Actuarial Science, with R

The book Computational Actuarial Science, with R is officially out. In the introduction of the book, and on the website of CRC, it is mentioned that the datasets can be found "in an R package on CRAN", which is unfortunately incorrect. Some datasets are too large, so the package could not be uploaded on CRAN. Fortunately, Christophe hosts the package on his website.

> install.packages("CASdatasets", repos = "http://dutangc.free.fr/pub/RRepos/")

or

> install.packages("CASdatasets", repos = "http://dutangc.free.fr/pub/RRepos/", 
type = "source")

Here are the files:

Insurance datasets

A collection of datasets, originally for the book ‘Computational Actuarial Science with R’ edited by Arthur Charpentier (CAS with R). Now, the package contains a large variety of actuarial datasets.

Version: 0.9-8
Published: 2014-05-21
Author: Christophe Dutang
Maintainer: Christophe Dutang <christophe.dutang at ensimag.fr>
License: GPL-2 | GPL-3 [expanded from: GPL (≥ 2)]
NeedsCompilation: no

Downloads:

Reference manual: CASdatasets.pdf
Package source: CASdatasets_0.9-8.tar.gz
Package installation: go to this page
Windows binaries: r-release: CASdatasets_0.9-8.zip
OS X Snow Leopard binaries: r-release: CASdatasets_0.9-8.tgz
OS X Mavericks binaries: r-release: CASdatasets_0.9-8.tgz
Old sources: CASdatasets archive

London, Bayes and the Lloyd’s

Monday, we really had a great conference in London.

It was a great pleasure, since I learned a lot of things. And also a great honor to be the last speaker. Tuesday morning, I wanted to go to Thomas Bayes' grave, which is in the graveyard next to the Cass Business School. I had a good a priori idea of where the grave should be,

but to be honest, it was not possible to get close enough to be able to read the name on it (even if I now know that it is the large one in the right lower corner of the picture)

Actually, on the internet, you can find some pictures where the stone is clean, so you can learn that the grave is the "Cotton" one – at least, you can easily read that name.

It was actually simpler to see William Blake's grave, as well as Daniel Defoe's.

Then, with Leo, we went to the Lloyd's to see some friends, as well as Richard Rogers' building.

On the 11th floor, there are a lot of meeting rooms, as well as old paintings, telling a bit more about the history of the company.

The building is just amazing. Unfortunately, to get in, there is a dress code. A rather strict one, actually. Leo works for RBC, so he casually wears a suit. But I don't. I mean, I did have a shirt, but as someone mentioned, "there is no collar!" (I don't want to put my friend into trouble for helping me get in).

So, after going through the basement, we were able to reach the elevator, and go to the top.

The building is not exactly located where Edward Lloyd had his coffee shop (even after it moved to Lombard Street at the end of 1691), but the Lloyd's is still a legend for anyone interested in the history of insurance, and more generally, the history of risk modeling (and management).

R for actuarial science

As mentioned in the Appendix of Modern Actuarial Risk Theory, "R (and S) is the 'lingua franca' of data analysis and statistical computing, used in academia, climate research, computer science, bioinformatics, pharmaceutical industry, customer analytics, data mining, finance and by some insurers. Apart from being stable, fast, always up-to-date and very versatile, the chief advantage of R is that it is available to everyone free of charge. It has extensive and powerful graphics abilities, and is developing rapidly, being the statistical tool of choice in many academic environments."

R is based on the S statistical programming language developed by John Chambers at Bell Labs in the 1980s. To be more specific, R is an open-source implementation of the S language, developed by Robert Gentleman and Ross Ihaka. It is a vector-based language, which makes it extremely interesting for actuarial computations. For instance, consider some Life Tables,

> TD[39:52,]       > TV[39:52,]
     Age    Lx         Age    Lx
  39  38 95237          38 97753
  40  39 94997          39 97648
  41  40 94746          40 97534
  42  41 94476          41 97413
  43  42 94182          42 97282
  44  43 93868          43 97138
  45  44 93515          44 96981
  46  45 93133          45 96810
  47  46 92727          46 96622
  48  47 92295          47 96424
  49  48 91833          48 96218
  50  49 91332          49 95995
  51  50 90778          50 95752
  52  51 90171          51 95488

Those (French) Life Tables can be found here

> TD <- read.table(
+ "https://perso.univ-rennes1.fr/arthur.charpentier/TD8890.csv",sep=";",header=TRUE)
> TV <- read.table(
+ "https://perso.univ-rennes1.fr/arthur.charpentier/TV8890.csv",sep=";",header=TRUE)

From those vectors, it is possible to construct the matrix of death probabilities \boldsymbol{P}=[{}_{k}p_x], using for instance

>  Lx <- TD$Lx
>  m <- length(Lx)
>  p <- matrix(0,m,m); d <- p
>  for(i in 1:(m-1)){
+  p[1:(m-i),i] <- Lx[1+(i+1):m]/Lx[i+1]
+  d[1:(m-i),i] <- (Lx[(1+i):(m)]-Lx[(1+i):(m)+1])/Lx[i+1]}
>  diag(d[(m-1):1,]) <- 0
>  diag(p[(m-1):1,]) <- 0
>  q <- 1-p

One can compute easily, e.g., the (curtate) expectation of life defined as

e_x = \mathbb{E}(K_x)=\sum_{k=1}^\infty k\cdot {}_{k|1}q_x = \sum_{k=1}^\infty {}_{k}p_x

and one can compute the vector of life expectancies, at various ages, \boldsymbol{e}=[e_x], as

> life.exp = function(x){sum(p[1:nrow(p),x])}
> e = Vectorize(life.exp)(1:m)

And actually, any kind of actuarial quantity can be derived from those matrices. The expected present value (or actuarial value) of a temporary life annuity-due is, for instance,

\ddot{a}_{x:\overline{n}|}=\sum_{k=0}^{n-1} \nu^k \cdot {}_{k}p_x = \frac{1-A_{x:\overline{n}|}}{1-\nu}

The code to compute those functions is here

> i <- 0.035                # interest rate (an arbitrary value, for illustration only)
> adots <- matrix(0,m,m)    # matrix of annuity-due values
> for(j in 1:(m-1)){ adots[,j]<-cumsum(1/(1+i)^(0:(m-1))*c(1,p[1:(m-1),j])) }

or consider the expected present value of a term insurance

A^1_{x:\overline{n}|} = \sum_{k=0}^{n-1} \nu^{k+1} \cdot {}_{k|}q_x

with the following code

> A <- matrix(0,m,m)        # matrix of term insurance values
> for(j in 1:(m-1)){ A[,j]<-cumsum(1/(1+i)^(1:m)*d[,j]) }

Some more details can be found in the first part of the notes of the crash courses of last summer, in Meielisalp. Vectors – or matrices – are extremely convenient to work with, when dealing with life contingencies. It is also possible to model prospective mortality. Here, the mortality is not only a function of the age x, but also of the time t,

> t(DTF)[1:10,1:10]
    1899  1900  1901  1902  1903  1904  1905  1906  1907  1908
0  64039 61635 56421 53321 52573 54947 50720 53734 47255 46997
1  12119 11293 10293 10616 10251 10514  9340 10262 10104  9517
2   6983  6091  5853  5734  5673  5494  5028  5232  4477  4094
3   4329  3953  3748  3654  3382  3283  3294  3262  2912  2721
4   3220  3063  2936  2710  2500  2360  2381  2505  2213  2078
5   2284  2149  2172  2020  1932  1770  1788  1782  1789  1751
6   1834  1836  1761  1651  1664  1433  1448  1517  1428  1328
7   1475  1534  1493  1420  1353  1228  1259  1250  1204  1108
8   1353  1358  1255  1229  1251  1169  1132  1134  1083   961
9   1175  1225  1154  1008  1089   981  1027  1025   957   885

Thus, we now have a force of mortality matrix \boldsymbol{\mu}=[\mu_{x,t}], or surface


It is also possible to use R packages to estimate a Lee-Carter model of the mortality rate,

\log \mu_{x,t} = \alpha_x + \beta_x \cdot \kappa_t + \varepsilon_{x,t}

> library(demography)
> MUH =matrix(DEATH$Male/EXPOSURE$Male,nL,nC)
> POPH=matrix(EXPOSURE$Male,nL,nC)
> BASEH <- demogdata(data=MUH, pop=POPH, ages=AGE, years=YEAR, type="mortality",
+ label="France", name="Hommes", lambda=1)
> LCH <- lca(BASEH)   # fit the Lee-Carter model on the male population
> RES=residuals(LCH,"pearson")

One can easily study residuals, for instance as a function of the age,


or a function of the year,


Some more details can be found in the second part of the notes of the crash courses of last summer, in Meielisalp.

R is also interesting because of its huge number of packages that can be used for predictive modeling. One can easily use smoothing functions in regression, or regression trees,

> library(tree); library(splines)   # for tree() and the spline basis bs() used below
> TREE = tree((nbr>0)~ageconducteur,data=sinistres,split="gini",mincut = 1)
> age = data.frame(ageconducteur=18:90)
> y1 = predict(TREE,age)
> reg = glm((nbr>0)~bs(ageconducteur),data=sinistres,family="binomial")
> y = predict(reg,age,type="response")


Some practitioners might be scared because legend has it that R is not as good as SAS at handling large databases. Actually, a lot of functions can be used to import datasets. The most convenient one is probably

> baseCOUT = read.table("http://freakonometrics.free.fr/baseCOUT.csv",
+  sep=";",header=TRUE,encoding="latin1")
>  tail(baseCOUT,4)
     numeropol  debut_pol    fin_pol freq_paiement langue  type_prof alimentation type_territoire
6512     87291 2002-10-16 2003-01-22       mensuel      A Professeur   Vegetarien          Urbain
6513     87301 2002-10-01 2003-09-30       mensuel      A Technicien   Vegetarien          Urbain
6514     87417 2002-10-24 2003-10-21       mensuel      F Technicien   Vegetalien     Semi-urbain
6515     88128 2003-01-17 2004-01-16       mensuel      F     Avocat   Vegetarien     Semi-urbain
             utilisation presence_alarme marque_voiture sexe exposition age duree_permis age_vehicule i   coutsin
6512 Travail-occasionnel             oui           FORD    M  0.2684932  47           29           28 1 1274.5901
6513              Loisir             oui          HONDA    M  0.9972603  44           24           25 1  278.0745
6514 Travail-occasionnel             non     VOLKSWAGEN    F  0.9917808  23            3           11 1  403.1242
6515              Loisir             non           FIAT    F  0.9972603  23            4           11 1  230.9565

But if the dataset is too large, it is also possible to specify which variables might be interesting, using

> mycols = rep("NULL", 18)
> mycols[c(1,4,5,12,13,14,18)] <- NA
> baseCOUTsubC = read.table("http://freakonometrics.free.fr/baseCOUT.csv",
+  colClasses = mycols,sep=";",header=TRUE,encoding="latin1")
> head(baseCOUTsubC,4)
  numeropol freq_paiement langue sexe exposition age    coutsin
1         6        annuel      A    M  0.9945205  42   279.5839
2        27       mensuel      F    M  0.2438356  51   814.1677
3        27       mensuel      F    M  1.0000000  53   136.8634
4        76       mensuel      F    F  1.0000000  42   608.7267

It is also possible (before running code on the entire dataset) to import only the first lines of the dataset.

> baseCOUTsubCR = read.table("http://freakonometrics.free.fr/baseCOUT.csv",
+  colClasses = mycols,sep=";",header=TRUE,encoding="latin1",nrows=100)
> tail(baseCOUTsubCR,4)
    numeropol freq_paiement langue sexe exposition age   coutsin
97       1193       mensuel      F    F  0.9972603  55  265.0621
98       1204       mensuel      F    F  0.9972603  38 9547.7267
99       1231       mensuel      F    M  1.0000000  40  442.7267
100      1245        annuel      F    F  0.6767123  48  179.1925

It is also possible to import a zipped file. The file itself has a smaller size, and it can usually be imported faster.

> import.zip = function(file){
+ temp = tempfile()
+ download.file(file,temp);
+ read.table(unz(temp, "baseFREQ.csv"),sep=";",header=TRUE,encoding="latin1")}
> system.time(import.zip("http://freakonometrics.free.fr/baseFREQ.csv.zip"))
trying URL 'http://freakonometrics.free.fr/baseFREQ.csv.zip'
Content type 'application/zip' length 692655 bytes (676 Kb)
opened URL
==================================================
downloaded 676 Kb
   user  system elapsed 
      0.762       0.029       4.578 
> system.time(read.table("http://freakonometrics.free.fr/baseFREQ.csv", 
+ sep=";",header=TRUE,encoding="latin1"))
   user  system elapsed 
      0.591       0.072       9.277

Finally, note that it is possible to import any kind of dataset, not only a text file, but even a Microsoft Excel file. On a Windows computer, one can use SQL queries

> library(RODBC)   # odbcConnectExcel() and sqlQuery() come from the RODBC package
> sheet = "c:\\Documents and Settings\\user\\excelsheet.xls"
> connection = odbcConnectExcel(sheet)
> spreadsheet = sqlTables(connection)
> query = paste("SELECT * FROM",spreadsheet$TABLE_NAME[1],sep=" ")
> result = sqlQuery(connection,query)

Then, once the dataset is imported, several functions can be used,

> cost = aggregate(coutsin~ AgeSex,mean, data=baseCOUT)
> frequency = merge(aggregate(nbsin~ AgeSex,sum, data=baseFREQ),
+ aggregate(exposition~ AgeSex,sum, data=baseFREQ))
> frequency$freq = frequency$nbsin/frequency$exposition
> base.freq.cost = merge(frequency, cost)


Finally, R is interesting for its graphical interface. "If you can picture it in your head, chances are good that you can make it work in R. R makes it easy to read data, generate lines and points, and place them where you want them. It's very flexible and super quick. When you've only got two or three hours until deadline, R can be brilliant," as Amanda Cox, a graphics editor at the New York Times, said. "R is particularly valuable in deadline situations when data is scant and time is precious."
Several cases were considered on the blog http://chartsnthings.tumblr.com/…. First, we start with a simple graph, here State Government control in the US


Then try to find a nice visual representation, e.g.


And finally, you can just print it in your favorite newspaper,


And you can get any kind of graphs,


And not only about politics,

Graphs are important. "It's not just about producing graphics for publication. It's about playing around and making a bunch of graphics that help you explore your data. This kind of graphical analysis is a really useful way to help you understand what you're dealing with, because if you can't see it, you can't really understand it. But when you start graphing it out, you can really see what you've got," as Peter Aldhous, San Francisco bureau chief of New Scientist magazine, said. Even for actuaries. "The commercial insurance underwriting process was rigorous but also quite subjective and based on intuition. R enables us to communicate our analytic results in appealing and innovative ways to non-technical audiences through rapid development lifecycles. R helps us show our clients how they can improve their processes and effectiveness by enabling our consultants to conduct analyses efficiently," as explained by John Lucker, Principal at Deloitte Consulting, who leads a team of advanced analytics professionals, in http://blog.revolutionanalytics.com/r-is-hot/. See also Andrew Gelman's view on graphs, http://www.stat.columbia.edu/…

So yes, actuaries might be interested in using R for actuarial communication, as mentioned on http://www.londonr.org/…


The Actuarial Toolkit (see http://www.actuaries.org.uk/…) stresses the interest of R: "The power of the language R lies with its functions for statistical modelling, data analysis and graphics; its ability to read and write data from various data sources; as well as the opportunity to embed R in Excel or other languages like VBA. In the way SAS is good for data manipulations, R is superior for modelling and graphical output."

Since 2011, Asia Capital Reinsurance Group (ACR) has been using R to solve big data challenges (see http://www.reuters.com/…). And Lloyd's uses motion charts created with R to provide analysis to investors (as discussed on http://blog.revolutionanalytics.com/…).

A lot of information can be found on http://jeffreybreen.wordpress.com/…


Markus Gesmann mentioned on his blog a lot of interesting graphs used for actuarial reporting, http://lamages.blogspot.ca/…


Further, R is free, which can be compared with SAS, at $6,000 per PC, or $28,000 per processor on a server (as mentioned on http://en.wikipedia.org/…).

It is also becoming more and more popular as a programming language. As mentioned in this month's Transparent Language Popularity index (see http://lang-index.sourceforge.net/), R is ranked 12th. Far behind C or Java, but before Matlab (22nd) or SAS (27th). On StackOverflow (see http://stackoverflow.com/), it is also far behind C++ (399,232 occurrences) or Java (348,418), but with 21,818 occurrences, it appears before Matlab (14,580) and SAS (899). As mentioned on http://r4stats.com/articles/popularity/, R is becoming more and more popular, based on listserv discussion traffic


It is clearly the most popular software in data analysis, as mentioned by the Rexer Analytics survey, in 2009


What about actuaries? In a survey (see http://palisade.com/…), R was not extremely popular.


If we consider only statistical software, SAS is still far ahead among UK and CAS actuaries.


But, as mentioned by Mike King, Quantitative Analyst at Bank of America, "I can't think of any programming language that has such an incredible community of users. If you have a question, you can get it answered quickly by leaders in the field. That means very little downtime." This was also mentioned by Glenn Meyers in the Actuarial Review: "The most powerful reason for using R is the community" (in http://nytimes.com/…). For instance, http://r-bloggers.com/ has contributions from more than 425 R users.

As said by Bo Cowgill, from Google, "The best thing about R is that it was developed by statisticians. The worst thing about R is that it was developed by statisticians."