SCOR Foundation for Science Webinar, ML and Econometrics

This week, I will give a talk at the SCOR Foundation for Science webinar (slides are available online). I was asked to talk about econometrics vs. AI (or machine learning).

Of course, the two concepts are related, and there is a continuum between them.

As we wrote in Charpentier et al. (2017),

Econometrics and machine learning seem to have one common goal: to construct a predictive model, for a variable of interest, using explanatory variables (or features).

For the purposes of this presentation, we will begin by contrasting the two, emphasizing the differences, and then showing the connections that exist.

Long story short, in between, we have computational statistics, or statistical learning, corresponding to computational techniques with mathematical probabilistic guarantees.

In “Statistical Modeling: The Two Cultures“, Leo Breiman pointed out that there were, in fact, two cultures, which I will refer to hereafter as “econometrics” and “machine learning.”

There are two cultures in the use of statistical modeling to reach conclusions from data. One assumes that the data are generated by a given stochastic data model. The other uses algorithmic models and treats the data mechanism as unknown

In a blog post, “what will be the impact of machine learning on economics?“, Tyler Cowen, quoting Susan Athey, wrote

…in general ML prediction models are built on a premise that is fundamentally at odds with a lot of social science work on causal inference. The foundation of supervised ML methods is that model selection (cross-validation) is carried out to optimize goodness of fit on a test sample. A model is good if and only if it predicts well. Yet, a cornerstone of introductory econometrics is that prediction is not causal inference

A few decades ago, William G. Tomek, in “Structural Econometric Models: Past and Future” mentioned that

the Cowles Commission used an explicit probabilistic framework to consider the data-generating processes for the current endogenous variables.

and similarly, in “Econometrics For Decision Making: Building Foundations Sketched By Haavelmo And Wald“, Charles Manski recalled that

Haavelmo (1944) proposed a probabilistic structure for econometric modeling, aiming to make econometrics useful for decision making.

Clearly, econometrics started with a probabilistic underlying assumption. Economic data should be seen as observations of random variables. That probabilistic framework and the associated stochastic representation can be related to the idea that there is a data-generating process, with explicit stochastic components (corresponding to the “economic model”). Therefore, economists and econometricians usually focus on inference, in the sense that they estimate parameters and test hypotheses, usually with a causal framework in mind.

That was for “Econometrics”. For “Machine Learning”, I can mention “Some Studies in Machine Learning Using the Game of Checkers“, where Arthur Samuel wrote

Machine learning is the field of study that gives computers the ability to learn without explicitly being programmed

which could still be, almost 70 years later, the definition of machine learning. More precisely,

In machine learning, problems are cast in the form of loss functions, which are optimal when they are minimised. The exact form of the loss function depends on the nature of the problem to be solved, the data available and the type of machine learning algorithm being optimised. Finding appropriate loss functions is, therefore, one of the most important research endeavors in machine learning,

in “a survey and taxonomy of loss functions in machine learning“. In “Deep Learning“, Ian Goodfellow, Yoshua Bengio and Aaron Courville wrote

Machine learning is essentially a form of applied statistics with increased emphasis on the use of computers to statistically estimate complicated functions and a decreased emphasis on proving confidence intervals around these functions

So, clearly, machine learning has to do with optimisation and accuracy. Leo Breiman wrote

In the data modeling culture, a statistician assumes a stochastic data model and applies inference techniques to estimate the parameters and test hypotheses. In the algorithmic modeling culture, the focus is on predictive accuracy. The model is treated as a black box […] The goal is not interpretability, but accurate prediction,

Optimization, or minimization of a loss function over a dataset, is the ultimate goal in machine learning. The only goals are accuracy and generalization (as we will discuss later on – even if I did give a long talk on that issue last year).

Now we can go into details. In a nutshell, I will discuss two things

  • I will get back to models, starting with linear models used in the context of regression, and then moving to classification (as I wrote on my slide, I will start with the fact that the first approach we see, in any econometrics course, is based on some machine learning approach, which could lead to some confusion). My point is that, for example, logistic regression also appears in machine learning, with a very different perspective… so basically, the difference between econometrics and machine learning has to do with the story we tell ourselves,
  • then I will start discussing different concepts, explaining why, even if different approaches lead to similar models, there are still differences between the two. I will discuss generalization and cross-validation, and then talk a little bit about calibration issues.

  • The models

To start with the classical regression problem, first, we must admit that when we talk about “ordinary least squares”, we take a machine learning perspective. There is no probabilistic assumption here: we simply want to minimize the sum of squared residuals, assuming a linear model plus an additive error term.

Again, the idea of least squares is that we consider a mathematical optimization problem that aims to determine the best-fit linear function by minimizing the sum of the squared differences between the observed values and the values predicted by the model. And because of the linear structure, we can use linear algebra to get an explicit solution for the parameters, which is actually related to an orthogonal projection problem. Again, all of that has to do with machine learning, not econometrics…
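To make that point concrete, here is a minimal numpy sketch (on simulated data, with hypothetical coefficients, not the example from the slides): the least squares estimator is obtained by pure linear algebra, as an orthogonal projection, without any probabilistic ingredient.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])   # intercept + 2 covariates
beta_true = np.array([1.0, 2.0, -0.5])                       # hypothetical coefficients
y = X @ beta_true + rng.normal(scale=0.5, size=n)

# least squares as a pure optimization / linear-algebra problem:
# beta_hat = argmin ||y - X b||^2 = (X'X)^{-1} X'y  (orthogonal projection of y on col(X))
beta_hat = np.linalg.solve(X.T @ X, X.T @ y)
y_hat = X @ beta_hat                                          # the projection of y

print(beta_hat)
```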

To start talking about econometrics, we need additional assumptions, i.e., either that the residuals are Gaussian, or that the distribution of Y, conditional on the covariates \boldsymbol{X}, is Gaussian. Then, since we have a distribution, we can use maximum likelihood estimation (which will give us a lot of interesting tools, including confidence intervals and tests) to estimate the unknown parameters. It turns out to be exactly the same as the least squares approach, actually…

That is something we will see a lot: we will end up with similar quantities, with different paths to get there. If we use our probabilistic Gaussian model, it is possible to derive a lot of interesting results. First, on the parameter \widehat{\boldsymbol{\beta}}, starting with the normality of the estimator (seen as a random variable, as usual in mathematical statistics), and similarly, because of the properties of Gaussian vectors, a Gaussian distribution for the out-of-sample predictions, with confidence intervals, several kinds of tests, etc.

But actually, it is possible to get similar properties without any mathematical calculations, or explicit probabilistic assumptions. That is what computational econometrics tells us: if we can mimic our data, we can create fake (but realistic) samples, run regressions, and then look at the empirical distributions of any quantities… Either we do the maths, as in mathematical statistics and econometrics, or we use a computer and get distributions. Of course, mathematical guarantees will come with probabilistic assumptions, but we can still get heuristics here about what machine learning is about…
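Here is a small sketch of that resampling idea (a pairs bootstrap, reusing the X, y and rng from the sketch above); it is just one way to “mimic the data”, not necessarily the one used on the slides.

```python
import numpy as np

# pairs bootstrap: resample the observations, re-estimate, and look at the empirical
# distribution of the estimated coefficients, without any closed-form maths
B = 1000
betas = np.empty((B, X.shape[1]))
n = len(y)
for b in range(B):
    idx = rng.integers(0, n, size=n)                  # draw n observations with replacement
    Xb, yb = X[idx], y[idx]
    betas[b] = np.linalg.solve(Xb.T @ Xb, Xb.T @ yb)

# e.g. an empirical 95% "confidence interval" for each coefficient
print(np.percentile(betas, [2.5, 97.5], axis=0))
```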

Our second example will be the problem of classification, when we want to model, and predict, a categorical variable y, taking either value 0 or 1. The most classical model is probably the logistic one. The logistic function was developed as a model of population growth and named “logistic” by Pierre François Verhulst in the 1830s and 1840s, and rediscovered in the 1920s by Raymond Pearl and Lowell Reed. The model is based on the assumption that y is the value taken by a Bernoulli random variable Y with a probability that is a function of the covariates \boldsymbol{X}. Obviously, the probability cannot be a linear function of the covariates \boldsymbol{X} (since it might take values larger than one, or even negative). But maybe the logarithm of the odds, that is, the ratio of the two probabilities \mathbb{P}[Y=1] over \mathbb{P}[Y=0], could be a linear function. Thus, we end up with this very classical model: the score of the logistic model is the exponential of the linear predictor, divided by one plus that same exponential, \mathbb{P}[Y=1|\boldsymbol{X}=\boldsymbol{x}]=\frac{\exp(\boldsymbol{x}^\top\boldsymbol{\beta})}{1+\exp(\boldsymbol{x}^\top\boldsymbol{\beta})}.

Visually, it is the following. On the left, we have points \boldsymbol{x}, here with two covariates, x_1 (on the x-axis) and x_2 (on the y-axis). And the value of y is visualized by the color of the point, red for 0 and blue for 1. On the right, we have the three dimensional surface of (x_1,x_2)\mapsto \mathbb{P}[Y=1|X_1=x_1,X_2=x_2]. The probability is on the z-axis: the higher, the more likely the point will be blue. It is a linear model because level curves of the surface, corresponding to fixed probabilities, are straight lines.

So we have a model. How do we estimate it? Since we have a probabilistic model, we can use maximum likelihood, with the classical first order condition \boldsymbol{X}^\top(\boldsymbol{y}-\widehat{\boldsymbol{p}})=\boldsymbol{0}, a very classical equation that we get in all generalized linear models (at least with the canonical link function). It was also the case for our previous linear Gaussian model, actually. Anyway, it is a nice and simple equation… but not linear in \widehat{\boldsymbol{\beta}}… So we cannot really solve it analytically, but we will get back to that point in a second…

Interestingly, that equation, derived from our probabilistic model and the maximum likelihood problem, leads to very interesting properties… like this one: the very first condition (the one obtained from the intercept) allows us to write that the sum of the observations is equal to the sum of the predictions… This has a direct interpretation in insurance, if premiums are proportional to the probability of claiming a loss: the sum of the premiums will be equal to the sum of the losses. On the training dataset, of course, but still…

Another important property is that, because we use maximum likelihood to estimate the unknown parameters, we have the classical properties of maximum likelihood estimators, such as asymptotic normality, and then, thanks to the delta method, we can derive asymptotic normality for anything, including the probabilistic predictions…

But even that very classical econometric model strongly relies on computers. Since we do not have an analytical expression for \widehat{\boldsymbol{\beta}}, we already rely heavily on computers and optimization routines. The classical tool (for minimization) is based on gradient descent. Because we want to find the zeros of the gradient of the log-likelihood (up to a minus sign), we need to compute the gradient for the direction, and the Hessian for the intensity. In generic optimizers, those two are approximated, but in the case of linear models, both are very easy to compute, using simple linear algebra, so the routine usually converges in fewer than a dozen iterations… So keep in mind that, as soon as we leave least squares, econometricians usually need machines to estimate various quantities. And the probabilistic framework gives us interpretations for those quantities. For example, the Hessian (computed during the process) is related to the Fisher information, and to the asymptotic variance of the estimated parameter…
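To illustrate what such a routine actually does, here is a minimal Newton–Raphson sketch on simulated data (hypothetical coefficients, not the exact algorithm of any particular package): the gradient is \boldsymbol{X}^\top(\boldsymbol{y}-\boldsymbol{p}), the Hessian is \boldsymbol{X}^\top\boldsymbol{W}\boldsymbol{X}, and at the end we can check the balance property and the link with the asymptotic variance.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1000
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])   # intercept + 2 covariates
beta_true = np.array([-0.5, 1.0, -2.0])                      # hypothetical coefficients
y = rng.binomial(1, 1 / (1 + np.exp(-X @ beta_true)))

beta = np.zeros(X.shape[1])
for it in range(25):                                  # Newton-Raphson iterations
    p = 1 / (1 + np.exp(-X @ beta))
    grad = X.T @ (y - p)                              # gradient of the log-likelihood
    W = p * (1 - p)
    H = X.T @ (X * W[:, None])                        # (minus) Hessian = Fisher information
    step = np.linalg.solve(H, grad)
    beta = beta + step
    if np.max(np.abs(step)) < 1e-10:                  # usually converges in < a dozen steps
        break

p_hat = 1 / (1 + np.exp(-X @ beta))
print(it, beta)
print(y.sum(), p_hat.sum())     # balance property: equal sums (thanks to the intercept)
print(np.linalg.inv(H))         # estimated asymptotic variance of beta_hat (inverse information)
```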

Something we can also observe is that econometricians usually want a good model, with high accuracy (maximizing the likelihood could be seen as a good objective), but they usually also seek simplicity. That is the parsimony principle of econometricians. We can already mention that it can somehow be added to the objective function. Simplicity can be the number of non-null parameters in the model. That gives the first model, called sparse, but it is a complicated optimization problem. Two alternatives are the Ridge and the Lasso regressions.

The Ridge regression was actually introduced in the 1970s to simplify the computation of the inverse of a matrix (and avoid numerical instability). It is not really related to our problem of sparsity… but interestingly, in the case of the Gaussian model, it has an explicit solution (that can be interpreted from a Bayesian perspective).

What is nice here is that, if we simply change the shape of the constraint, and consider \ell_1 balls (which look like diamonds, with corners) instead of \ell_2 balls, we get variable selection, since the solution usually ends up on a corner of the constraint region… But somehow, this is still in the toolbox of econometricians, because the structure of the model is the same; we simply change the way we estimate the parameters.
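Just as an illustration, here is a short sketch (simulated data, with a hypothetical sparsity pattern) contrasting the explicit Ridge solution with the exact zeros produced by the Lasso, the latter fitted with scikit-learn.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(2)
n, k = 200, 10
X = rng.normal(size=(n, k))
beta_true = np.array([2.0, -1.5, 0, 0, 0, 0, 0, 0, 0, 0])   # only 2 "useful" covariates
y = X @ beta_true + rng.normal(size=n)

# Ridge: explicit solution (X'X + lambda I)^{-1} X'y -- shrinkage, but no exact zeros
lam = 1.0
beta_ridge = np.linalg.solve(X.T @ X + lam * np.eye(k), X.T @ y)

# Lasso: l1 penalty, no closed form, but coefficients can end up exactly at zero
beta_lasso = Lasso(alpha=0.1, fit_intercept=False).fit(X, y).coef_

print(np.round(beta_ridge, 3))
print(np.round(beta_lasso, 3))   # sparsity: most irrelevant coefficients are exactly 0
```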

Even if I claimed in the introduction that machine learning was about optimization, that is also the case with statistics and econometrics, and with numerical computation. Just to highlight that point, let us get back to our first order condition. When we run a logistic regression on a computer, basically the only thing the computer does is solve that equation numerically. It is still possible to solve that equation if y is equal to 0.2. The Bernoulli model is the underlying framework, that is how we got that equation… but that is just the story we tell ourselves to motivate this equation… As claimed by Ariel Rubinstein, econometrics has a lot to do with telling ourselves stories…

Because we can have the same model with a very different story. For example, Fisher’s Linear Discriminant Analysis. Suppose that \boldsymbol{X}, in the two groups, has a Gaussian distribution. More precisely, Gaussian distributions with the same variance… In that case, if we use Bayes’ formula to get the probability of Y conditional on \boldsymbol{X}… we have a logistic score…

The geometric interpretation is the following. The points in red have a Gaussian distribution, and so do the points in blue… In that case, the probability for Y to be equal to 1 is the same as the one from our logistic regression. The only difference is how we get our parameters. Here, we plug in moment estimators from our Gaussian distributions…
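A small sketch of those two stories, on simulated Gaussian groups (hypothetical means, identity covariance): scikit-learn’s LDA plugs in moment estimators, while the logistic regression maximizes the likelihood, and both return a linear score pushed through a sigmoid.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
n = 500
y = rng.binomial(1, 0.5, size=n)
mu = np.array([[0.0, 0.0], [2.0, 1.0]])        # two Gaussian groups, same covariance
X = mu[y] + rng.normal(size=(n, 2))

# same model (a linear score through a sigmoid), two different stories:
# - logistic regression estimates the coefficients by maximum likelihood,
# - LDA plugs in the moment estimators of the two Gaussian distributions.
lda = LinearDiscriminantAnalysis().fit(X, y)
logit = LogisticRegression(C=1e6).fit(X, y)    # large C ~ (almost) no regularization

print(lda.coef_, lda.intercept_)
print(logit.coef_, logit.intercept_)           # close, but not identical, coefficients
print(lda.predict_proba(X[:3]))                # both return a "probability" of being 1
print(logit.predict_proba(X[:3]))
```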

Very quickly, we can also consider the so-called support vector machine. In a nutshell, we want to get a line that separates the blue and the red points, as in that picture. In most cases, it is not possible, but here, it is. And actually, there is an infinite number of straight lines that separate the blue and the red points. The idea with SVM is to select the line that is the farthest away from the closest points. For safety reasons… We end up with that optimization problem.

And in the general case, where the red and blue points are mixed? We simply add a penalty, a price to pay for being wrongly classified… The difference here is that there is no probability of being red or blue. We split the space in two: one area where points are predicted to be red, and one where points are predicted to be blue…

And 25 years ago, John Platt suggested using the distance to the line to derive a quantity that could be interpreted as a probability… he suggested fitting a sigmoid on top of the SVM… and ended up with a logistic model, too…
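As a sketch of Platt’s idea (reusing the simulated X and y from the LDA sketch above): fit a sigmoid on the SVM scores to turn them into something probability-like; scikit-learn’s CalibratedClassifierCV with method="sigmoid" does essentially that.

```python
from sklearn.svm import SVC
from sklearn.calibration import CalibratedClassifierCV

svm = SVC(kernel="linear").fit(X, y)           # a separating line, no probabilities
print(svm.decision_function(X[:3]))            # signed distances to the line

# Platt scaling: a sigmoid fitted on those distances, i.e. a logistic model again
platt = CalibratedClassifierCV(SVC(kernel="linear"), method="sigmoid", cv=5).fit(X, y)
print(platt.predict_proba(X[:3]))              # distances turned into "probabilities"
```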

What about neural networks? First, observe that a generalized linear model, such as the logistic regression, is a neural network with no hidden layer. The covariates are our input neurons, and the dependent variable y is the output neuron. We consider a weighted linear combination of the covariates, and if we use a logistic function as the activation function… we get the logistic regression. Depending on the objective function, we can get different weights, but the general structure here is the same as the logistic regression.
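Here is a minimal numpy sketch of that claim, on simulated data (hypothetical weights): a network with no hidden layer, a sigmoid activation, and gradient descent on the cross-entropy loss is just a logistic regression fitted with another optimizer.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 1000
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])   # "input neurons" (+ bias)
w_true = np.array([-0.5, 1.0, -2.0])                         # hypothetical weights
y = rng.binomial(1, 1 / (1 + np.exp(-X @ w_true)))

# a no-hidden-layer network with a sigmoid activation, trained by gradient descent
# on the cross-entropy loss: exactly a logistic regression, fitted differently
w = np.zeros(X.shape[1])
lr = 0.5
for epoch in range(20000):
    p = 1 / (1 + np.exp(-X @ w))              # output neuron = sigmoid(linear combination)
    w += lr * X.T @ (y - p) / n               # gradient step (ascent on the log-likelihood)

print(w)   # close to the maximum likelihood estimate of the logistic regression
```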

Now, popular (feedforward) neural networks are more complicated architectures, with hidden layers of neurons, linear combinations, and various activation functions…

but if we focus only on the very last layer… we have a logistic regression, again. Ok, not on the original covariates. But we can still claim that we have a logistic regression after some feature engineering (somehow optimized ex-post).

Actually, the first model that does not look like a logistic regression would be the classification tree, where we separate, sequentially and iteratively, the space of covariates. But let me go fast on that one, because I will not have much time…

Just to illustrate… I generated some data, based on our previous model: I drew x_1 and x_2 independently, uniformly on the unit interval. Then, I generated Bernoulli variables with the following probability. It is almost a standard linear logistic regression; I just added a joint interaction and some nonlinearities. But the first-order approximation by a linear model makes sense.
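The exact probability I used is on the slides; as a hypothetical stand-in, a data-generating process in the same spirit (uniform covariates, a logistic-type probability with an interaction and a mild nonlinearity) could look like this:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 1000
x1 = rng.uniform(size=n)
x2 = rng.uniform(size=n)

# a logistic-type probability with an interaction and a mild nonlinearity
# (the exact formula used on the slides differs; this is only the same spirit)
eta = -2 + 3 * x1 + 2 * x2 + 2 * x1 * x2 - 1.5 * x1**2
p = 1 / (1 + np.exp(-eta))
y = rng.binomial(1, p)

X = np.column_stack([x1, x2])   # the (observable) covariates passed to the models
```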

Now I fit all the models we have seen. The logistic regressions, without and with penalties, which lead to linear models, with linear level curves,

the discriminant analysis, also linear. Then a logistic regression after cutting (somehow optimally) the features into categories. And two classification trees, the deeper one at the bottom.

Again, a linear model with an SVM on the top left, then two random forests, which are ensemble methods (I will get back to those in a second),

and finally, two additive models, which are econometric models with nonparametric components, and actually do a nice job…

Just to get back to ensemble methods, the idea is that, possibly, we can see a model as an aggregation of different models.

We can learn models in parallel and then aggregate them: that is the idea of bagging, and of random forests, where we create multiple trees and then average them. But we can also learn models sequentially, in series… we fit a simplistic model, then model the errors, then model the new errors, and so on…
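As a sketch of those two strategies, with scikit-learn, on the simulated X and y from above (the tuning parameters are hypothetical): bagging in parallel with a random forest, sequential learning with gradient boosting.

```python
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier

# parallel learning: many deep trees grown on bootstrap samples, then averaged
rf = RandomForestClassifier(n_estimators=1000, random_state=0).fit(X, y)

# sequential learning: shallow trees fitted one after the other on the remaining errors
gbm = GradientBoostingClassifier(n_estimators=200, learning_rate=0.05,
                                 max_depth=2, random_state=0).fit(X, y)

print(rf.predict_proba(X[:3])[:, 1])
print(gbm.predict_proba(X[:3])[:, 1])
```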

Here is the random forest, where we aggregate 5 trees on the top left, and 1,000 on the bottom right. The aggregation is the classical one used in classification, the majority rule. Clearly, it is a poor estimate of the true probability…

This one uses regression trees, where we average the probabilities in each leaf. It is much better, but the price is that, clearly, it produces a lot of variability,

The sequential learning here is based on boosting, as below. We start with a simplistic model, super flat, with 50% almost everywhere, and then, iteratively, learn from our errors… We converge towards the true probabilities… slowly…

It was a bit long, but those are the classical regression models, and what we could have in mind when we talk about machine learning.

  • The concepts

Now, let us discuss important concepts to get a better understanding of both, econometrics and machine learning.

First, I wanted to talk about validation, cross-validation, and generalization. The starting point of my story is that all statistics and econometrics courses spend a lot of time on unbiased estimators. Somehow, it makes sense, of course. An unbiased estimator \widehat{\boldsymbol{\theta}} satisfies \mathbb{E}(\widehat{\boldsymbol{\theta}})=\boldsymbol{\theta}, so “on average” it hits the true parameter value. This gives a clear, interpretable guarantee: if you repeated your experiment many times, the average of your estimates would converge to the truth. When bias is zero, constructing confidence intervals and hypothesis tests often becomes simpler. We don’t have to correct for systematic offsets, and standard formulas for variance and coverage apply more directly. In the context of point estimation, many optimality theorems start by seeking the “best” unbiased estimator (or at least an asymptotically unbiased one). This framing yields neat characterizations of efficiency (minimum variance among unbiased estimators): that is the Cramér-Rao lower bound. But bias–variance trade-offs mean that a slightly biased estimator can have much lower mean squared error (MSE) than any unbiased one. In machine learning, there is no reason to suppose that we can get unbiased estimators, so we should focus on the MSE.
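A small simulation (with hypothetical numbers) to illustrate that last point: a deliberately shrunk, hence biased, estimator of a mean can have a smaller MSE than the unbiased sample mean.

```python
import numpy as np

rng = np.random.default_rng(8)
theta, sigma, n, nsim = 2.0, 4.0, 10, 100_000

x = rng.normal(theta, sigma, size=(nsim, n))
unbiased = x.mean(axis=1)       # sample mean: unbiased estimator of theta
shrunk = 0.8 * unbiased         # shrinkage: biased, but with a smaller variance

for name, est in [("unbiased", unbiased), ("shrunk (biased)", shrunk)]:
    bias = est.mean() - theta
    mse = ((est - theta) ** 2).mean()
    print(name, round(bias, 3), round(mse, 3))   # the biased estimator has the lower MSE
```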

And as we will see, a classical tool to minimize that MSE is cross-validation, especially when we need to optimize meta-parameters. Those meta-parameters are everywhere in machine learning, because of the flexibility of the models (depth of trees, number of trees in boosting approaches, architecture of a neural network, etc.). To illustrate the difference between mathematical statistics (or econometrics) and machine learning, consider the simple case of the definition of the neighborhood in local regression.

In regression problems, we want to estimate (as defined above) \mu(\boldsymbol{x})=\mathbb{E}(Y\mid\boldsymbol{X}=\boldsymbol{x}), where the conditioning on \{\boldsymbol{X}=\boldsymbol{x}\} should be understood as being in a neighborhood of \boldsymbol{x}. And this neighborhood will be defined based on a distance to \boldsymbol{x} that should be smaller than h. And we want to find the “best” h.

To illustrate, consider the following simple regression example, with a single x and y. On the bottom right, we have a standard linear regression, but that might not be satisfactory

Instead of a global linear model, why not consider a local version? If the goal is to predict at some point x, why consider a global parametric model, influenced by observations quite far away? The regressogram, like the moving histogram used when estimating a density, is based on the idea that we should only consider points in a close neighborhood of x. We then have a weighted sum of the y‘s, only for points close to x.

The choice of h has a clear influence on the prediction. Too small, on the left, and it is close to the data, with large variance; too large, on the right, and it is more regular, but far from the “local average”.

We can consider smoothed weights, with kernels, and still an h parameter describing the bandwidth.

It is smoother than before, but we have the same issues as before: a small h leads to small bias and large variance, a large h leads to large bias and small variance.

Instead of a weighted sum, consider a weighted least squares problem, with weights that are a function of the distance to x. And again, the choice of h has an influence on the regression curve.
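Here is a minimal sketch of that local weighted least squares idea (a local linear smoother with a Gaussian kernel, on a hypothetical nonlinear toy example, not the data from the slides), where the role of the bandwidth h is explicit:

```python
import numpy as np

rng = np.random.default_rng(6)
n = 200
x = np.sort(rng.uniform(0, 10, size=n))
y = np.sin(x) + rng.normal(scale=0.3, size=n)       # a hypothetical nonlinear relationship

def local_linear(x0, x, y, h):
    """Local (weighted least squares) linear regression at x0, with bandwidth h."""
    w = np.exp(-0.5 * ((x - x0) / h) ** 2)           # Gaussian kernel weights
    X0 = np.column_stack([np.ones_like(x), x - x0])
    beta = np.linalg.solve(X0.T @ (X0 * w[:, None]), X0.T @ (w * y))
    return beta[0]                                   # fitted value at x0

grid = np.linspace(0, 10, 101)
for h in (0.2, 1.0, 5.0):                            # small h: wiggly; large h: oversmoothed
    mu_hat = np.array([local_linear(x0, x, y, h) for x0 in grid])
    print(h, np.round(mu_hat[:5], 2))
```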

How should we find the optimal h? The classical mathematical answer is obviously “do the maths”: compute the bias and the variance, and get the h that minimizes the MSE (or an integrated version, since we consider a function, not a single parameter).

Those smoothing approaches are standard in econometrics, since they lead to a linear predictor.

In machine learning, the classical approach is the hold-out approach. We split the dataset in two. The first part is used to fit the models, the second one is used to compute accuracy metrics. For example, below, we use this approach to see how many steps we need in the boosting model, with its iterative, sequential learning. Obviously, on the training dataset, we do better and better… but at some point, we no longer generalize: the model starts to fit the residual noise. That is what we see on the curves below, in the lower corner. The red curve is the MSE on the training dataset: the more trees, the lower the MSE. In blue, the MSE on the validation dataset, with a classical convex shape. Too few trees, we underfit: the model is too simple, and far away from reality. Too many, we overfit.
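A sketch of that hold-out idea, with scikit-learn’s gradient boosting on a hypothetical logistic-type data-generating process (not the data from the slides, and using the log-loss rather than the MSE shown there): the training loss keeps decreasing with the number of boosting steps, while the validation loss typically ends up increasing again.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import log_loss

rng = np.random.default_rng(7)
n = 2000
X = rng.uniform(size=(n, 2))
p = 1 / (1 + np.exp(-(-2 + 3 * X[:, 0] + 2 * X[:, 1] + 2 * X[:, 0] * X[:, 1])))
y = rng.binomial(1, p)                               # hypothetical data-generating process

# hold-out: fit on one part of the data, measure accuracy on the other part
X_train, X_valid, y_train, y_valid = train_test_split(X, y, test_size=0.3, random_state=0)
gbm = GradientBoostingClassifier(n_estimators=500, learning_rate=0.05,
                                 max_depth=2, random_state=0).fit(X_train, y_train)

# loss after each boosting step: decreasing on the training set,
# usually U-shaped on the validation set (underfitting, then overfitting)
train_loss = [log_loss(y_train, q) for q in gbm.staged_predict_proba(X_train)]
valid_loss = [log_loss(y_valid, q) for q in gbm.staged_predict_proba(X_valid)]
print(int(np.argmin(valid_loss)) + 1, "boosting steps minimize the validation loss")
```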

And this approach can be improved. Instead of the hold-out approach, where we tend to lose information, we can consider cross-validation. For example, we remove one observation out of n, and fit the model on the remaining n-1.

A more clever strategy could be to consider blocks: remove 10% of the observations, use the remaining 90% to fit, and predict on the 10% not used for fitting. Or use the bootstrap: on average, 37% of the points are not used in a bootstrap sample, so why not use them as a validation dataset…? Of course, those techniques are purely computational, but they can help us optimize parameters we do not want to estimate (but still want to select in an optimal manner).
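And a sketch of those two computational strategies, 10-fold cross-validation and the out-of-bag idea, again with scikit-learn (reusing the simulated X and y from the hold-out sketch above; the numbers of trees are hypothetical):

```python
from sklearn.model_selection import cross_val_score, KFold
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier

# 10-fold cross-validation: each block of 10% is used once as a validation set
for n_trees in (10, 50, 200):
    score = cross_val_score(GradientBoostingClassifier(n_estimators=n_trees, max_depth=2),
                            X, y, cv=KFold(n_splits=10, shuffle=True, random_state=0),
                            scoring="neg_log_loss").mean()
    print(n_trees, round(-score, 4))     # average validation log-loss per number of trees

# bootstrap / out-of-bag: each tree leaves out ~37% of the points, used for validation
rf = RandomForestClassifier(n_estimators=500, oob_score=True, random_state=0).fit(X, y)
print(rf.oob_score_)                     # accuracy computed on out-of-bag observations only
```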

Before the end, our last concept is related to probabilistic scores and the idea of calibration. Long story short, can we interpret a score between 0 and 1, returned by a model or an algorithm, as a probability?

First, we must acknowledge that this is usually not really a question asked by econometricians. A score is simply a sufficient statistic, a summary of all the information into a single quantity, usually in the unit interval. In our logistic regression, we had a score with a direct probabilistic interpretation, since it was the probability that our dependent variable y is equal to 1. But actually, the square of the score, or the square root of the score, could also be seen as a score. Maybe a less interesting one (even if we could talk for hours about that, because the square of the score is a score with exactly the same accuracy as the original one: same ROC curves, same AUC, etc.). So here, we are talking about something else. First, recall that an important concept, even in classification, is the regression function, that \mu function,

As mentioned previously, a natural property for a score could be the balance property: the average value of the score is equal to the average value of the regression function, and of the y variable. But we could ask for a local version of such a property… on any subgroup, we want this balance… Unfortunately, it is too complicated, especially in high dimension. So we could consider a version that is local, but conditional on the predicted score…

Conditional on the predicted score, the expected value of y should be equal to the score. This property is extremely important from an epistemological perspective, because it allows us to actually interpret a score as a probability. If I say that the probability of rain, for tomorrow, is 40%, what could you say about my prediction if it rains? Or if it does not rain? One-shot probabilities, related to a single event, are impossible to interpret, because we cannot use the law of large numbers, which bridges empirical frequencies and probabilities. The only simple interpretation is actually the following: if we consider all the days where a 40% chance of rain was predicted, then on 40% of those days, it should have been raining. No more, no less. That is precisely the way we can interpret probabilities in a non-probabilistic framework (or let’s say without a Bayesian interpretation). Thus, a natural tool is the reliability diagram, introduced in the 1960s.

That is the one you can get with scikit-learn. We have four models, a logistic regression, a random forest, an SVM, with the distributions of the scores at the bottom right, and the calibration curves on top, created using quantile bins. On the left, we take the bottom 10% of all scores, for a given model, and average them. That gives the x. On the y, we take the average of the corresponding y‘s.
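Here is a minimal version of that reliability diagram, using scikit-learn’s calibration_curve with quantile bins, on a hypothetical simulated dataset (not the models from the slides):

```python
import numpy as np
from sklearn.calibration import calibration_curve
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(9)
n = 5000
X = rng.uniform(size=(n, 2))
p = 1 / (1 + np.exp(-(-2 + 3 * X[:, 0] + 2 * X[:, 1])))
y = rng.binomial(1, p)                         # hypothetical logistic data-generating process

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5, random_state=0)
for name, model in [("logistic", LogisticRegression()),
                    ("random forest", RandomForestClassifier(random_state=0))]:
    score = model.fit(X_train, y_train).predict_proba(X_test)[:, 1]
    # quantile bins: average predicted score (x) vs observed frequency of y=1 (y)
    frac_pos, mean_pred = calibration_curve(y_test, score, n_bins=10, strategy="quantile")
    print(name)
    print(np.round(mean_pred, 2))              # x-axis: average score in each bin
    print(np.round(frac_pos, 2))               # y-axis: average of the observed y's in each bin
```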

It is actually possible to use a local regression, since we are back to a univariate problem, with a regression on the predicted scores, which returns a smooth version of that g function,

or what is called an isotonic regression, a piecewise linear increasing function, introduced in the 1960s, because that g function has to be increasing…
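And a sketch of that monotone recalibration with scikit-learn’s IsotonicRegression (reusing y_test and the last model’s scores from the sketch above); scikit-learn fits a monotone step function and interpolates linearly between the fitted values when predicting.

```python
import numpy as np
from sklearn.isotonic import IsotonicRegression

# a monotone fit of y on the score, used as an estimate of the calibration function g
iso = IsotonicRegression(out_of_bounds="clip").fit(score, y_test)
print(np.round(iso.predict(np.linspace(0.05, 0.95, 10)), 3))   # recalibrated probabilities
```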

Just to illustrate… consider a simple classification problem, in insurance. In motor insurance, more precisely. We model the annual claim frequency in motor insurance, with four models: a plain logistic regression, a GAM (a logistic additive smooth regression), a boosting model, and a random forest. They are all well balanced on average, with an average of 8.8% claims per policyholder, consistent with the data. But there is a lot of variability in the distribution of the scores! 10% of the scores with the random forest are below 0.06%! With an average cost of 3,000 euros, we get a 20 euro premium! And 10% are above 40%, corresponding to a 1,200 euro premium.

More precisely, here are the distributions of the scores. The machine learning ones are quite different.

But more interesting, here are the calibration curves. If the models were well calibrated, we should be on the first diagonal… and we are quite far away… Machine learning models are usually poorly calibrated.

Before we conclude, we can play a game. Usually, when we generate data, we use a simple model, usually with econometric foundations, and we see what we can do… but what if we generate data based on the models we have just fitted…?

Here, we generate data based on our logistic model. We fit our logistic model on the data we have; then, for each observation, we have a predicted probability. We use that probability to draw a 0/1 variable. And we fit, again, our four models. We can visualize the distributions on top, very close to the previous ones, and then we have different indicators. The first two are the AUC and an accuracy metric… the higher, the more accurate. Clearly, econometric models perform well on data generated using a logistic stochastic representation. Again, there is a lot of dispersion in the random forest predictions, with the interquantile spread on the right.

Then we have the calibration curves, a calibration metric (the smaller, the better the calibration), and on the right, the distance between the estimated scores and the real scores used to generate the data. The random forest is not good, at all. Boosting is not great, but not as bad.

Then we generate data using our GAM predictions. Same story,

With poor calibration of machine learning models

Now, what if we use unusual models, like the scores obtained after boosting? Again, the accuracy of the regression models is not bad; surprisingly, the fitted boosting model performs better,

and interestingly, now the boosting model outperforms the regression models, in terms of calibration, even if the distribution of predictions is not that good…

Now, if we generate data based on our random forest scores, we can see that the random forest model has a very high AUC,

and even if the calibration is better, much better, it is still quite bad…

Finally, we can wrap up

Here is a list of references

