Tag Archives: actuarial

Measuring and mitigating biases in motor insurance pricing

Our paper with Mulah Moriah and Franck Vermet, Measuring and mitigating biases in motor insurance pricing, has recently been published in the European Actuarial Journal.

The non-life insurance sector operates within a highly competitive and tightly regulated framework, confronting a pivotal juncture in the formulation of pricing strategies. Insurers are compelled to harness a range of statistical methodologies and available data to construct optimal pricing structures that align with the overarching corporate strategy while accommodating the dynamics of market competition. Given the fundamental societal role played by insurance, premium rates are subject to rigorous scrutiny by regulatory authorities. Consequently, the act of pricing transcends mere statistical calculations and carries the weight of strategic and societal factors. These multifaceted concerns may drive insurers to establish equitable premiums with respect to various variables. For instance, regulations mandate the provision of equitable premiums with respect to policyholder gender. Mutualist groups, in accordance with their respective corporate strategies, can also implement age-based premium fairness. In certain insurance domains, the presence of serious illnesses or disabilities is emerging as a new dimension for evaluating fairness. Regardless of the motivating factor prompting an insurer to adopt fairer pricing strategies for a specific variable, the insurer must possess the capability to define, measure, and ultimately mitigate any fairness biases inherent in its pricing practices while upholding standards of consistency and performance. This study seeks to provide a comprehensive set of tools for these endeavors and to assess their effectiveness through practical application in the context of automobile insurance. Results show that fairness biases can be found in historical data and models, and that fairer outcomes can be obtained with fairness-aware approaches.
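To give a (very) rough idea of what measuring such a bias can look like in practice, here is a minimal R sketch on a purely hypothetical dataset (the data frame, the variable names and the demographic-parity criterion below are mine, just for illustration; the paper discusses a much richer set of metrics and mitigation techniques),

> # purely hypothetical portfolio: one row per policyholder, with a predicted premium
> set.seed(1)
> portfolio <- data.frame(gender  = sample(c("F","M"), 1000, replace=TRUE),
+                         premium = rgamma(1000, shape=2, scale=300))
> # demographic parity gap: difference between average predicted premiums across groups
> (means <- tapply(portfolio$premium, portfolio$gender, mean))
> abs(diff(means))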

Computational Actuarial Science

Last week, we went through the book, completely, one last time, before sending it back to the publisher, with some comments and remarks, before publication! So, this is it, the book will finally appear soon! It was scheduled for this week actually, but… you know. It should appear sometime by the end of May, or the beginning of June. I will keep you posted on this blog.

A few months ago, we published with Christophe Dutang an ebook on the same topic, in French, online on cran.r-project.org/doc/contrib/. This contribution was based on lecture notes we had. When John Kimmel asked me to publish an English version, I was honored, but I thought it would be some kind of fraud if I wrote such a book myself. I do know a bit of actuarial science, and a bit of R, but most of the advanced computations rely on other people's packages. Because I am extremely lazy, I have not tried (so far) to release my own package. I frequently publish some lines of code on my blog, but nothing too serious.

So, for this book, I decided to ask those who actually did publish a package used in actuarial computations (or who had previously worked on package comparisons, for instance) to write a chapter of this book. I am usually not a big fan of books with twenty contributors, because there is no coherence. So here, my task was to link all those chapters together, to make sure that notations are coherent, etc. Over 700 pages, that was difficult. And I asked all of them to illustrate actuarial concepts not only with built-in functions, but also – if possible – with some self-written functions, to make the underlying algorithms explicit. The goal was to explain the core of each algorithm. Some of the code might not be efficient, but it helps to understand how those actuarial quantities could be computed.

The first chapter is an Introduction (to the R language) I wrote with Rob Kaas (everyone in the actuarial community knows Rob! not only as the Editor of Insurance: Mathematics & Economics, but also as a prolific author, including the popular textbook Modern Actuarial Risk Theory – Using R). The aim is to help those who might use another language for actuarial computations to understand the basics of the grammar, and to read and write R. I will probably publish a longer post to explain the structure of that chapter, and show some code.

  • Methodology 

The first section is a very general methodology section. It starts with Standard Statistical Inference by Christophe Dutang (Christophe is extremely active in the R community, as the maintainer of the Distributions task view page, for instance). Then, Ben Escoto (Ben works in the insurance industry, and launched the actuarial vignettes in R a few years ago) and myself wrote a chapter which can be seen as an introduction to the Bayesian Philosophy for actuaries (I will give a talk on that topic at the R in Insurance conference this summer, so additional material will be online soon). With Stéphane Tufféry, we wrote the chapter on Statistical Learning (Stéphane published Data Mining and Statistics for Decision Making a few years ago). Then, I wanted a chapter dedicated to Spatial Analysis. I asked Renato Assunção, Marcelo Azevedo Costa, Marcos Oliveira Prates, and Luís Gustavo Silva e Silva to write that chapter (I met them a few years ago while I was visiting Renato in Belo Horizonte, when we started to work on spatial aspects of actuarial science). And finally, Eric Gilleland and Mathieu Ribatet wrote the chapter on Reinsurance and Extremal Events (both of them work on climate and extreme values, and they published a very interesting software review for extreme value analysis a few years ago).

  • Life Insurance

For the section on life insurance, I asked Giorgio Spedicato to write the chapter on Life Contingencies (Giorgio is the author of the lifecontingencies package). Then Heather Booth, Rob J. Hyndman, and Leonie Tickle agreed to write the chapter on Prospective Life Tables (here we have a great match, with a demographer, an actuary, and… Rob. Everyone who has studied time series knows Rob. He is the author of the amazing forecast package, as well as the demography package, among many others. And he has a great blog too). To go further, Julien Tomas and Frédéric Planchet wrote the chapter on Prospective Mortality Tables and Portfolio Experience (both of them published the ELT – experience life tables – package a few months ago). And finally, there is a chapter on Survival Analysis by Frédéric Planchet and Pierre-E. Thérond (they published a book – in French – on survival analysis for actuarial science, with examples in R).

  • Finance

For the section on financial computations, Yohan Chalabi and Diethelm Würtz wrote two chapters, one on Stock Prices and Time Series and one on Portfolio Allocation (both of them have worked on the Rmetrics project, with the timeSeries, fArma, fGarch and fPortfolio packages). And Sergio S. Guirreri wrote a chapter on Yield Curves and Interest Rates Models (Sergio is the author of the YieldCurve package).

  • Non-Life Insurance

Last, but not least, there is a section on non-life insurance. Jean-Philippe Boucher (who published several articles on count models) and myself wrote the chapter on General Insurance Pricing. Then, I asked Katrien Antonio, Peng Shi, and Frank van Berkum to go further, with a chapter on Longitudinal Data and Experience Rating (I have known Katrien since my PhD, and she was already working on that topic by that time… she has published great surveys on it). And finally, Claims Reserving and IBNR is a chapter I wanted to write, because it's a topic I love, but I asked Markus Gesmann to write it (Markus is known not only for his googleVis package, but also for the ChainLadder package – not to mention his awesome blog).

I will try to post some additional material on this blog, with R code (of course), graphs, and slides. And probably some pdfs with answers for the exercises. And all the datasets will be available in a CASdatasets package (online soon).

Amsterdam

I will be in Amsterdam for the end of this week. I will be on the jury for the PhD defense of Julien Tomas, entitled "Quantifying Biometric Life Insurance Risks With Non-Parametric Smoothing Methods" (the thesis will probably be online soon). But before that, I will give a talk at the actuarial seminar at UvA. My visit last time was a real pleasure, and it should be the same this time too. I will give a talk this Thursday on "R for actuarial science". The slides can be downloaded from here.

R for actuarial science

As mentioned in the Appendix of Modern Actuarial Risk Theory, "R (and S) is the 'lingua franca' of data analysis and statistical computing, used in academia, climate research, computer science, bioinformatics, pharmaceutical industry, customer analytics, data mining, finance and by some insurers. Apart from being stable, fast, always up-to-date and very versatile, the chief advantage of R is that it is available to everyone free of charge. It has extensive and powerful graphics abilities, and is developing rapidly, being the statistical tool of choice in many academic environments."

R is based on the S statistical programming language developed by John Chambers at Bell Labs in the 1980s. To be more specific, R is an open-source implementation of the S language, developed by Robert Gentleman and Ross Ihaka. It is a vector-based language, which makes it extremely interesting for actuarial computations. For instance, consider some life tables,

> TD[39:52,]       > TV[39:52,]
     Age    Lx         Age    Lx
  39  38 95237          38 97753
  40  39 94997          39 97648
  41  40 94746          40 97534
  42  41 94476          41 97413
  43  42 94182          42 97282
  44  43 93868          43 97138
  45  44 93515          44 96981
  46  45 93133          45 96810
  47  46 92727          46 96622
  48  47 92295          47 96424
  49  48 91833          48 96218
  50  49 91332          49 95995
  51  50 90778          50 95752
  52  51 90171          51 95488

Those (French) Life Tables can be found here

> TD <- read.table(
+ "https://perso.univ-rennes1.fr/arthur.charpentier/TD8890.csv",sep=";",header=TRUE)
> TV <- read.table(
+ "https://perso.univ-rennes1.fr/arthur.charpentier/TV8890.csv",sep=";",header=TRUE)

From those vectors, it is possible to construct the matrix of survival probabilities $\boldsymbol{P}=[{}_{k}p_x]$ (as well as the associated death probabilities), using for instance

>  Lx <- TD$Lx        # number of survivors at each age
>  m <- length(Lx)
>  p <- matrix(0,m,m); d <- p
>  # p[k,x]: probability of surviving k more years at age x
>  # d[k,x]: probability of dying during year k, i.e. the (k-1)-year deferred death probability
>  for(i in 1:(m-1)){
+  p[1:(m-i),i] <- Lx[1+(i+1):m]/Lx[i+1]
+  d[1:(m-i),i] <- (Lx[(1+i):(m)]-Lx[(1+i):(m)+1])/Lx[i+1]}
>  diag(d[(m-1):1,]) <- 0   # set the boundary (NA) terms to zero
>  diag(p[(m-1):1,]) <- 0
>  q <- 1-p            # q[k,x]: probability of dying within k years

One can compute easily, e.g., the (curtate) expectation of life defined as

$$e_x =\mathbb{E}(K_x)=\sum_{k=1}^\infty k\cdot {}_{k|1}q_x = \sum_{k=1}^\infty {}_{k}p_x$$

and one can compute the vector of life expectancy, at various ages, $\boldsymbol{e}=[e_x]$, as

> life.exp = function(x){sum(p[1:nrow(p),x])}
> e = Vectorize(life.exp)(1:m)

And actually, any kind of actuarial quantity can be derived from those matrices. The expected present value (or actuarial value) of a temporary life annuity-due is, for instance,

$$\ddot{a}_{x:\overline{n}|}=\sum_{k=0}^{n-1} \nu^k \cdot {}_{k}p_x =\frac{1-A_{x:\overline{n}|}}{1-\nu}$$

The code to compute those functions is here

> i <- 0.035; adots <- A <- matrix(0,m,m)   # assumed interest rate, and matrices to store the values
> for(j in 1:(m-1)){ adots[,j]<-cumsum(1/(1+i)^(0:(m-1))*c(1,p[1:(m-1),j])) }

or consider the expected present value of a term insurance

$$A^1_{x:\overline{n}|} =\sum_{k=0}^{n-1} \nu^{k+1} \cdot {}_{k|}q_x$$

with the following code

> for(j in 1:(m-1)){ A[,j]<-cumsum(1/(1+i)^(1:m)*d[,j]) }   # A[n,j]: n-year term insurance

Some more details can be found in the first part of the notes of the crash courses of last summer, in Meielisalp. Vectors – or matrices – are extremely convenient to work with when dealing with life contingencies. It is also possible to model prospective mortality. Here, the mortality is not only a function of the age $x$, but also of the time $t$,

> t(DTF)[1:10,1:10]
    1899  1900  1901  1902  1903  1904  1905  1906  1907  1908
0  64039 61635 56421 53321 52573 54947 50720 53734 47255 46997
1  12119 11293 10293 10616 10251 10514  9340 10262 10104  9517
2   6983  6091  5853  5734  5673  5494  5028  5232  4477  4094
3   4329  3953  3748  3654  3382  3283  3294  3262  2912  2721
4   3220  3063  2936  2710  2500  2360  2381  2505  2213  2078
5   2284  2149  2172  2020  1932  1770  1788  1782  1789  1751
6   1834  1836  1761  1651  1664  1433  1448  1517  1428  1328
7   1475  1534  1493  1420  1353  1228  1259  1250  1204  1108
8   1353  1358  1255  1229  1251  1169  1132  1134  1083   961
9   1175  1225  1154  1008  1089   981  1027  1025   957   885

Thus, we now have a force of mortality matrix $\boldsymbol{\mu}=[\mu_{x,t}]$, or surface

http://freakonometrics.hypotheses.org/wp-content/blogs.dir/253/files/2013/01/Capture-d%E2%80%99e%CC%81cran-2013-01-10-a%CC%80-14.29.04.png
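For the record, such a surface can be drawn with a couple of lines of R. The sketch below is purely illustrative: the age and year ranges, and the death-rate matrix MU, are made up (in practice one would use a matrix of death rates, ages in rows and calendar years in columns, like the one built from deaths and exposures just below),

> # purely hypothetical death-rate surface, ages 0-100 in rows, years 1899-2005 in columns
> AGE <- 0:100; YEAR <- 1899:2005
> MU  <- outer(AGE, YEAR, function(x,t) .0001 + .00005*exp(x/10)*exp(-(t-1899)/200))
> persp(AGE, YEAR, log(MU), theta=-30, phi=20,
+       xlab="Age", ylab="Year", zlab="log death rate")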

It is also possible to use R packages to estimate a Lee-Carter model of the mortality rate,

$$\log \mu_{x,t} =\alpha_{x} +\beta_{x} \cdot \kappa_{t} +\varepsilon_{x,t}$$

> library(demography)
> # DEATH, EXPOSURE, AGE, YEAR (and the dimensions nL, nC) are assumed to be already loaded
> MUH = matrix(DEATH$Male/EXPOSURE$Male,nL,nC)
> POPH= matrix(EXPOSURE$Male,nL,nC)
> BASEH <- demogdata(data=MUH, pop=POPH, ages=AGE, years=YEAR, type="mortality",
+ label="France", name="Hommes", lambda=1)
> LCH <- lca(BASEH)    # fit the Lee-Carter model
> RES = residuals(LCH,"pearson")

One can easily study residuals, for instance as a function of the age,

http://freakonometrics.hypotheses.org/wp-content/blogs.dir/253/files/2013/01/Capture-d%E2%80%99e%CC%81cran-2013-01-10-a%CC%80-14.29.15.png

or a function of the year,

http://freakonometrics.hypotheses.org/wp-content/blogs.dir/253/files/2013/01/Capture-d%E2%80%99e%CC%81cran-2013-01-10-a%CC%80-14.29.22.png

Some more details can be found in the second part of the notes of the crash courses of last summer, in Meielisalp.

R is also interesting because of its huge number of packages that can be used for predictive modelling. One can easily use smoothing functions in regression, or regression trees,

> library(tree); library(splines)   # for tree() and bs()
> TREE = tree((nbr>0)~ageconducteur,data=sinistres,split="gini",mincut = 1)
> age = data.frame(ageconducteur=18:90)
> y1 = predict(TREE,age)
> reg = glm((nbr>0)~bs(ageconducteur),data=sinistres,family="binomial")
> y = predict(reg,age,type="response")

http://freakonometrics.hypotheses.org/files/2013/01/predictive-gam-tree.png
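For completeness, a figure like the one above can be obtained with something like the following (a sketch only: the axis labels are my own, and the objects come from the code just before),

> plot(age$ageconducteur, y, type="l", xlab="age of the driver",
+      ylab="probability of reporting at least one claim")
> lines(age$ageconducteur, y1, type="s", col="grey")   # step function fitted by the tree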

Some practitioners might be scared off by the legend claiming that R cannot handle large databases as well as SAS does. Actually, a lot of functions can be used to import datasets. The most convenient one is probably

> baseCOUT = read.table("http://freakonometrics.free.fr/baseCOUT.csv",
+  sep=";",header=TRUE,encoding="latin1")
>  tail(baseCOUT,4)
     numeropol  debut_pol    fin_pol freq_paiement langue  type_prof alimentation type_territoire
6512     87291 2002-10-16 2003-01-22       mensuel      A Professeur   Vegetarien          Urbain
6513     87301 2002-10-01 2003-09-30       mensuel      A Technicien   Vegetarien          Urbain
6514     87417 2002-10-24 2003-10-21       mensuel      F Technicien   Vegetalien     Semi-urbain
6515     88128 2003-01-17 2004-01-16       mensuel      F     Avocat   Vegetarien     Semi-urbain
             utilisation presence_alarme marque_voiture sexe exposition age duree_permis age_vehicule i   coutsin
6512 Travail-occasionnel             oui           FORD    M  0.2684932  47           29           28 1 1274.5901
6513              Loisir             oui          HONDA    M  0.9972603  44           24           25 1  278.0745
6514 Travail-occasionnel             non     VOLKSWAGEN    F  0.9917808  23            3           11 1  403.1242
6515              Loisir             non           FIAT    F  0.9972603  23            4           11 1  230.9565

But if the dataset is too large, it is also possible to specify which variables might be interesting, using

> mycols = rep("NULL", 18)
> mycols[c(1,4,5,12,13,14,18)] <- NA
> baseCOUTsubC = read.table("http://freakonometrics.free.fr/baseCOUT.csv",
+  colClasses = mycols,sep=";",header=TRUE,encoding="latin1")
> head(baseCOUTsubC,4)
  numeropol freq_paiement langue sexe exposition age    coutsin
1         6        annuel      A    M  0.9945205  42   279.5839
2        27       mensuel      F    M  0.2438356  51   814.1677
3        27       mensuel      F    M  1.0000000  53   136.8634
4        76       mensuel      F    F  1.0000000  42   608.7267

It is also possible (before running the code on the entire dataset) to import only the first lines of the dataset.

> baseCOUTsubCR = read.table("http://freakonometrics.free.fr/baseCOUT.csv",
+  colClasses = mycols,sep=";",header=TRUE,encoding="latin1",nrows=100)
> tail(baseCOUTsubCR,4)
    numeropol freq_paiement langue sexe exposition age   coutsin
97       1193       mensuel      F    F  0.9972603  55  265.0621
98       1204       mensuel      F    F  0.9972603  38 9547.7267
99       1231       mensuel      F    M  1.0000000  40  442.7267
100      1245        annuel      F    F  0.6767123  48  179.1925

It is also possible to import a zipped file. The file itself has a smaller size, and it can usually be imported faster.

> import.zip = function(file){
+ temp = tempfile()
+ download.file(file,temp);
+ read.table(unz(temp, "baseFREQ.csv"),sep=";",header=TRUE,encoding="latin1")}
> system.time(import.zip("http://freakonometrics.free.fr/baseFREQ.csv.zip"))
trying URL 'http://freakonometrics.free.fr/baseFREQ.csv.zip'
Content type 'application/zip' length 692655 bytes (676 Kb)
opened URL
==================================================
downloaded 676 Kb
   user  system elapsed 
      0.762       0.029       4.578 
> system.time(read.table("http://freakonometrics.free.fr/baseFREQ.csv", 
+ sep=";",header=TRUE,encoding="latin1"))
   user  system elapsed 
      0.591       0.072       9.277

Finally, note that it is possible to import any kind of dataset, not only text files, but even Microsoft Excel files. On a Windows computer, one can use SQL queries (via the RODBC package),

> library(RODBC)
> sheet = "c:\\Documents and Settings\\user\\excelsheet.xls"
> connection = odbcConnectExcel(sheet)
> spreadsheet = sqlTables(connection)
> query = paste("SELECT * FROM",spreadsheet$TABLE_NAME[1],sep=" ")
> result = sqlQuery(connection,query)

Then, once the dataset is imported, several functions can be used,

> cost = aggregate(coutsin~ AgeSex,mean, data=baseCOUT)
> frequency = merge(aggregate(nbsin~ AgeSex,sum, data=baseFREQ),
+ aggregate(exposition~ AgeSex,sum, data=baseFREQ))
> frequency$freq = frequency$nbsin/frequency$exposition
> base.freq.cost = merge(frequency, cost)

http://freakonometrics.hypotheses.org/wp-content/blogs.dir/253/files/2013/01/cost-freq-qc.png

Finally, R is interesting for its graphical capabilities. "If you can picture it in your head, chances are good that you can make it work in R. R makes it easy to read data, generate lines and points, and place them where you want them. It's very flexible and super quick. When you've only got two or three hours until deadline, R can be brilliant," as Amanda Cox, a graphics editor at the New York Times, put it. "R is particularly valuable in deadline situations when data is scant and time is precious."
Several cases were considered on the blog http://chartsnthings.tumblr.com/… First, we start with a simple graph, here State Government control in the US

http://freakonometrics.hypotheses.org/files/2013/01/nyt-chartsnthings-1.png

Then try to find a nice visual representation, e.g.

http://freakonometrics.hypotheses.org/wp-content/blogs.dir/253/files/2013/01/nyt-chartsnehings-2.png

And finally, you can just print it in your favorite newspaper,

http://freakonometrics.hypotheses.org/files/2013/01/nyt-chartsnthings-3.jpg

And you can get any kind of graphs,

http://freakonometrics.hypotheses.org/wp-content/blogs.dir/253/files/2013/01/nyt-6.png

And not only about politics,

http://freakonometrics.hypotheses.org/files/2013/01/nyt-7-b.jpg

Graphs are important. "It's not just about producing graphics for publication. It's about playing around and making a bunch of graphics that help you explore your data. This kind of graphical analysis is a really useful way to help you understand what you're dealing with, because if you can't see it, you can't really understand it. But when you start graphing it out, you can really see what you've got," as Peter Aldhous, San Francisco bureau chief of New Scientist magazine, put it. Even for actuaries. "The commercial insurance underwriting process was rigorous but also quite subjective and based on intuition. R enables us to communicate our analytic results in appealing and innovative ways to non-technical audiences through rapid development lifecycles. R helps us show our clients how they can improve their processes and effectiveness by enabling our consultants to conduct analyses efficiently," as explained by John Lucker, Principal at Deloitte Consulting, who leads a team of advanced analytics professionals, in http://blog.revolutionanalytics.com/r-is-hot/. See also Andrew Gelman's view on graphs, http://www.stat.columbia.edu/…

So yes, actuaries might be interested in using R for actuarial communication, as mentioned on http://www.londonr.org/…

http://freakonometrics.hypotheses.org/wp-content/blogs.dir/253/files/2013/01/mango-R-4.png

The Actuarial Toolkit (see http://www.actuaries.org.uk/…) stresses the value of R: "The power of the language R lies with its functions for statistical modelling, data analysis and graphics; its ability to read and write data from various data sources; as well as the opportunity to embed R in Excel or other languages like VBA. In the way SAS is good for data manipulations, R is superior for modelling and graphical output."

Since 2011, Asia Capital Reinsurance Group (ACR) has been using R to solve big data challenges (see http://www.reuters.com/…). And Lloyd's uses motion charts created with R to provide analysis to investors (as discussed on http://blog.revolutionanalytics.com/…)

A lot of information can be found on http://jeffreybreen.wordpress.com/…

http://freakonometrics.hypotheses.org/files/2013/01/6a010534b1db25970b01538fea1796970b-800wi.png

Markus Gesmann mentioned on his blog a lot of interesting graphs used for actuarial reporting, http://lamages.blogspot.ca/…

http://freakonometrics.hypotheses.org/wp-content/blogs.dir/253/files/2013/01/Capture-d%E2%80%99e%CC%81cran-2013-01-10-a%CC%80-15.37.33.png

Further, R is free. This can be compared with SAS: $6,000 per PC, or $28,000 per processor on a server (as mentioned on http://en.wikipedia.org/…)

R is also becoming more and more popular as a programming language. As mentioned in this month's Transparent Language Popularity index (see http://lang-index.sourceforge.net/), R is ranked 12th. Far behind C or Java, but before Matlab (22nd) or SAS (27th). On StackOverflow (see http://stackoverflow.com/), R is also far behind C++ (399,232 occurrences) or Java (348,418), but with 21,818 occurrences it appears before Matlab (14,580) and SAS (899). As mentioned on http://r4stats.com/articles/popularity/, R is becoming more and more popular, as measured by listserv discussion traffic,

http://freakonometrics.hypotheses.org/wp-content/blogs.dir/253/files/2013/01/fig_1_listserv.png

It is clearly the most popular software for data analysis, as shown in the 2009 Rexer Analytics survey,

http://freakonometrics.hypotheses.org/wp-content/blogs.dir/253/files/2013/01/fig_3_rexersurvey.png

What about actuaries? In a survey (see http://palisade.com/…), R was not extremely popular.

http://freakonometrics.hypotheses.org/wp-content/blogs.dir/253/files/2013/01/mango-R-1.png

If we consider only statistical software, SAS is still far ahead among UK and CAS actuaries,

http://freakonometrics.hypotheses.org/wp-content/blogs.dir/253/files/2013/01/mango-R-2.png

But, as mentioned by Mike King, Quantitative Analyst, Bank of America, "I can't think of any programming language that has such an incredible community of users. If you have a question, you can get it answered quickly by leaders in the field. That means very little downtime." This was also mentioned by Glenn Meyers in the Actuarial Review: "The most powerful reason for using R is the community" (in http://nytimes.com/…). For instance, http://r-bloggers.com/ has contributions from more than 425 R users.

As Bo Cowgill, from Google, said: "The best thing about R is that it was developed by statisticians. The worst thing about R is that it was developed by statisticians."

Uncertainty in claims reserving (WIM)

I will be talking on Friday, at the 1st Québec-Ontario Workshop on Insurance Mathematics (so-called WIM, already mentioned here). The program is now online here. My talk will be on Solvency II requirements in claims reserving (slides can be found here). Even if Canada will not adopt the Solvency II European capital test, it looks like Solvency II will matter to Canadian insurance companies (as mentioned here).

Finding roots of functions in actuarial science

The following simple code can be used to find roots of functions (based on the secant algorithm),

secant = function(fun, x0, x1, tolerance=1e-07, niter=500){
for ( i in 1:niter ) {
	# secant update: root of the straight line through (x0,fun(x0)) and (x1,fun(x1))
	x2 <- x1-fun(x1)*(x1-x0)/(fun(x1)-fun(x0))
	if (abs(fun(x2)) < tolerance)
		return(x2)
	x0 <- x1
	x1 <- x2
}}

It can be useful in actuarial science, e.g. to find the actuarial rate such that two present values are equal. For instance, consider the following stream of capital payments, paid only if the insured is still alive (this example was initially considered here). We would like to find the rate such that the expected present value is 600,

> Lx=read.table("https://perso.univ-rennes1.fr/arthur.charpentier/TV8890.csv",
+ header=TRUE,sep=";")
> capital=c(100,100,125,125,150,150)
> n=length(capital)
> x=0.035
> X=45
> f=function(x){
+ capital.act=capital*(1/(1+x))^(1:n)
+ PROBA=Lx[((Lx[,1]>X)*(Lx[,1]<=(X+n)))==1,2]/Lx[(Lx[,1]==X)==1,2]
+ return(sum(capital.act*PROBA))}
>
> f1=function(x){f(x)-600}
> secant(f1,0,0.1)
[1] 0.06022313
> f(0.06022313)
[1] 600
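Note that base R already ships with a root finder, so the result above can be double-checked with uniroot(), on the same function and the same bracket,

> uniroot(f1, interval=c(0, 0.1))$root   # should return essentially the same rate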


Modeling analogies in life and nonlife insurance

On Wednesday afternoon I will be giving a talk at the SCOR Reserving Seminar. The talk will be on modeling analogies in life and nonlife insurance. We will start by discussing data analogies, based on the Lexis diagram in life insurance and in nonlife (when modeling claims dynamics),

This induces similarities between the datasets used in life models and those used in nonlife reserving.

Further, in both cases, log-Poisson models are usually used, either to model the number of deaths or the amount of payments. The main difference is that in nonlife insurance, forecasting future payments is rather simple, as sketched below.
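To fix ideas, here is a minimal sketch of such a model on the nonlife side: a (quasi-)Poisson regression with a log link and origin-year and development-year factors, fitted on a small, purely illustrative incremental triangle (the numbers below are made up, and no tail factor or uncertainty measure is considered),

> # purely illustrative incremental payments, in long format (origin year, development year)
> triangle <- expand.grid(origin=2008:2012, dev=1:5)
> triangle <- triangle[triangle$origin + triangle$dev <= 2013, ]
> triangle$inc <- c(1000, 1100, 1050, 1200, 1150,
+                    500,  520,  510,  560,
+                    250,  260,  255,
+                    120,  130,
+                     60)
> reg <- glm(inc ~ factor(origin) + factor(dev),
+            family=quasipoisson(link="log"), data=triangle)
> # forecasting future payments is then a simple call to predict() on the lower triangle
> future <- expand.grid(origin=2008:2012, dev=1:5)
> future <- future[future$origin + future$dev > 2013, ]
> future$pred <- predict(reg, newdata=future, type="response")
> sum(future$pred)   # total reserve implied by the model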

But in life models, unfortunately, we need to forecast the behavior of year-based parameters.

Note that this is also the case in nonlife insurance when an inflation factor is introduced.
To go further, the slides are available here.