Tag Archives: modeling

Reverse Engineering with Correlated Features

In econometric modeling, correlated features are a recurring problem. A few weeks ago, I was discussing feature selection when features are correlated. This week, I was wondering about reverse engineering when features might be correlated (not to say strongly correlated). The way I see reverse engineering is the following:

  1. someone has some dataset, and based on that dataset, a model was fitted. But we cannot see how it works…
  2. we can generate “fake data”, feed the model with those data, and get predictions
  3. based on those predictions, we hope to fit a model that should be close to the ‘true’ model used
  4. one way to measure how good our model is, is to compare its predictions on the initial data with the original outputs (or with the ‘true’ values, if we use generated datasets), as in the sketch below.
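As a minimal sketch of that workflow (the black box, the correlation level, and the fake data below are all made up, just for illustration), consider

set.seed(123)
n <- 1000
X1 <- rnorm(n)
X2 <- .8 * X1 + sqrt(1 - .8^2) * rnorm(n)  # a feature strongly correlated with X1
black_box <- function(x1, x2) 2 + x1 + .5 * x2  # the hidden 'true' model (step 1)
Yhat <- black_box(X1, X2)                  # predictions on the fake data (step 2)
surrogate <- lm(Yhat ~ X1 + X2)            # our model, fitted on those predictions (step 3)
mean((predict(surrogate) - Yhat)^2)        # step 4: compare the two sets of predictions

Here the surrogate recovers the predictions perfectly, since it belongs to the same family as the hidden model; the interesting point, with correlated features, is that very different sets of coefficients can yield almost identical predictions.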


“Improving Segmentation” (using Lorenz curves, or sort of)

This afternoon, André sent me an interesting graph about the use of Lorenz curves in the context of insurance pricing (and modeling).

It is some sort of Lorenz curve, upside-down, with the proportion of the population on the x-axis, and the proportion of the losses on the y-axis. The important point is that the population is sorted according to their risk, i.e. their premium. The code to generate such a curve is actually quite simple,

L <- function(u, varx = "premium", vary = "losses"){
  # sort the portfolio by decreasing risk, i.e. decreasing premium
  base <- base[order(base[, varx], decreasing = TRUE), ]
  # cumulative proportion of the population
  base$cum <- (1:nrow(base)) / nrow(base)
  # share of total losses generated by the proportion u of riskiest policies
  sum(base[base$cum <= u, vary]) / sum(base[, vary])
}
 
vu <- seq(0, 1, by = .01)
vv <- Vectorize(function(u) L(u))(vu)

My concern was more about two labels on the figure: “perfect pricing” in the top-left corner, and “average pricing” on the first diagonal. What could that possibly mean? Is there even such a thing as a “perfect pricing”? In order to understand what we plot here, let us generate some dataset, and fit some models, including things that might be seen as the “perfect model”: the price based on the parameters used to generate the data, and the model used to generate the data, fitted on the data.
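For instance, with the function L defined above, and a small simulated portfolio (purely made-up Poisson losses, just to see what the two labels could correspond to),

set.seed(1)
n <- 1e4
x <- runif(n)
lambda <- exp(-1 + 2 * x)                    # true expected annual loss
base <- data.frame(premium = lambda,         # "perfect pricing": premium = true expectation
                   losses  = rpois(n, lambda))
v_perfect <- Vectorize(function(u) L(u))(vu)
base$premium <- mean(base$losses)            # "average pricing": a flat premium for everyone
v_average <- Vectorize(function(u) L(u))(vu)
plot(vu, v_perfect, type = "l")
lines(vu, v_average, lty = 2)                # (roughly) the first diagonal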


Predictive Modeling

Tomorrow, around noon, I will be giving a talk on predictive modeling for actuaries. In the introduction, I will briefly return to the idea that a prediction is usually a best estimate, in the sense of an expected value. And because

$$\mathbb{E}(X)=\underset{c\in\mathbb{R}}{\text{argmin}}\left\{\mathbb{E}\left([X-c]^2\right)\right\}=\underset{c\in\mathbb{R}}{\text{argmin}}\left\{\Vert X-c\Vert_{L_2}^2\right\}$$

it is natural to use least squares ideas. In order to illustrate all those concepts, we will use a simple dataset, with the sex, the height and the weight of a set of individuals, as well as their declared weight.

Davis=read.table(
"http://socserv.socsci.mcmaster.ca/jfox/Books/Applied-Regression-2E/datasets/Davis.txt")

Since there is a typo in this dataset (the height and the weight of observation 12 were swapped), we have to exchange the two figures,

Davis[12,c(2,3)]=Davis[12,c(3,2)]

but it’s not a big deal. The variable of interest, here, is someone’s weight

attach(Davis)
Y = weight * 2.204622  # converting kilograms to pounds

(here in pounds). We will use explanatory variables such as the sex of that person, or his/her height

X = Davis$height / 2.54  # converting centimetres to inches

(in inches). So, we will start with the (standard) linear model, just to make sure that we all talk about the same thing.

The goal will be to use (possible) explanatory variables to improve our predictions. We will start with the standard linear model, but we will see that nonlinear models can also easily be obtained,
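For instance (a minimal sketch, on the dataset above),

# the standard linear model: weight (in pounds) against height (in inches)
reg <- lm(Y ~ X)
summary(reg)
# a nonlinear model is easily obtained, e.g. with a local regression
reg_np <- loess(Y ~ X)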

Nonlinearities will be discussed. But those models are Gaussian (as mentioned above), and homoscedastic. So we will see how generalized linear models can be used to model the mean and the variance at the same time. For instance, with a Poisson regression (below), the variance will increase with the expected value.
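For instance (a sketch only: weights are not counts, so the rounding below is just there to make the Poisson likelihood valid),

# a Poisson regression with a log link: the conditional variance
# equals the conditional mean, so it increases with the expected value
reg_pois <- glm(round(Y) ~ X, family = poisson(link = "log"))
summary(reg_pois)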

After this general introduction, we will spend some time on 0-1 variables. We will see how to use a logistic regression, and also discuss more generally which kinds of models can be used for classification. ROC curves will be presented, and explained.
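For instance, predicting the sex from the height and the weight, and using the ROCR package (one option among others) to draw the ROC curve,

library(ROCR)
Z <- (Davis$sex == "M") * 1
reg_logit <- glm(Z ~ height + weight, data = Davis, family = binomial)
S <- predict(reg_logit, type = "response")  # predicted probabilities
pred <- prediction(S, Z)
plot(performance(pred, "tpr", "fpr"))       # the ROC curve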

Then, we will also see an alternative to the logistic model, namely classification trees and CART techniques.
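E.g. with the rpart package,

library(rpart)
# a classification tree on the same 0-1 problem
tree <- rpart(factor(sex) ~ height + weight, data = Davis, method = "class")
plot(tree)
text(tree)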

We will also discuss random forests, bagging and boosting techniques.
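For instance (a minimal sketch) with the randomForest package, bagging being the special case where each split considers all the covariates,

library(randomForest)
fit_rf <- randomForest(factor(sex) ~ height + weight, data = Davis, ntree = 500)
fit_rf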

A pdf version of the slides can be downloaded.

SOA Webinar on Predictive Modeling

In a few days, I will give, with Qichun Xu, a joint webinar for the Reinsurance Council and the Futurism Council of the Society of Actuaries, on Perspectives of Predictive Modeling with Case Studies. The slides of my talk are now available (I do recommend opening the pdf version of the slides with Acrobat, since the slides contain animated pictures that cannot be visualized otherwise). The Society of Actuaries asked specifically for a powerpoint document, so I will use screenshots of the slides for the webinar. I do encourage you to open and read the pdf file, for better quality… Sorry for the inconvenience. I will soon upload the lines of code needed to reproduce most of the graphs. All comments and remarks are welcome.