Some thoughts on Economics, Mathematics, Econometrics, Statistics, Machine Learning, etc

There have been a lot of posts recently related to those topics, starting with Noah Smith's piece entitled "Economics has a Math Problem" and, more recently, "Econometrics, Math, and Machine Learning…what?" by Matt Bogard. I don't have a clear mind on those issues yet, but there are still a few thoughts that I wanted to share. I did not really want to, but I've been asked on Twitter, and I thought it might be good to write them down, to clarify some ideas I have, but also (probably, hopefully) to get interesting feedback.

About the general context: I am all the more interested in those questions since I usually get them in interviews. I have a PhD in applied mathematics (on statistical models, but to be honest, it was more on extreme value theory and copulas, so basically, all I did was more applied probability than statistics). Then I got a position in an Economics department in France, and then in a Mathematics department, in Canada. Each time, people ask me if I am a statistician. Or an economist. Or an actuary… Furthermore, I've been teaching statistics and econometrics based on standard textbooks, but I have always been frustrated by the (strong) limitations of standard results. Like, what should we do if the residuals are not Gaussian and we have a small number of observations? Or what should we do if the relationship between the dependent variable and some covariates is clearly not linear? Most of my courses had a focus on computational aspects, yes. And more recently, I have also been doing some training for actuaries on "data science" (an important buzzword in the industry, but let's call that "freakonometrics"), and a lot of them expected an emphasis on machine learning techniques. They had the feeling that consultants selling machine learning techniques could outperform their actuarial models, usually based on standard econometric techniques. And they wanted to understand what was really behind words like 'random forest', 'SVM', 'neural networks', 'gradient boosting', etc. So I have been working on those topics recently, even if I still don't claim any kind of expertise. But I am curious, and I have been reading a lot and asking a lot of questions to colleagues, so maybe I can share some thoughts here.

  • Machine learning vs. mathematical statistics

In machine learning, we have a dataset, i.e. a collection of observations $(y_i,\mathbf{x}_i)$, and some loss function $\ell(\cdot,\cdot)$. And the goal is to get a function $m(\cdot)$, defined on the same space as the $\mathbf{x}_i$'s, taking values in the same space as the $y_i$'s, which could be a good predictor for the $y_i$'s in the sense that we should solve

$$\min_{m\in\mathcal{M}}\left\{\sum_{i=1}^n \ell\big(y_i,m(\mathbf{x}_i)\big)\right\}$$

for some functional space $\mathcal{M}$ (which somehow describes the kind of predictor you're looking for, like linear functions, polynomial functions, stepwise functions, etc). A popular case is the quadratic loss function, but we'll get back to that point soon… And the problem should probably be formalized differently, because the function should be estimated on a training sample, while the loss should be small on a testing sample. Anyway, in machine learning, we want to minimize a loss, based on data. Everything is about data, and a loss function.
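
To fix ideas, here is a minimal empirical-risk-minimization sketch I am adding (in Python, with simulated data; the data, degrees and function names are purely illustrative): the functional space $\mathcal{M}$ is taken to be polynomials of a given degree, the loss is quadratic, and the fit is judged on a held-out testing sample.

```python
import numpy as np

rng = np.random.default_rng(42)

# simulated data: a nonlinear signal plus noise (purely illustrative)
x = rng.uniform(0, 10, size=200)
y = np.sin(x) + 0.3 * rng.normal(size=200)

# training / testing split
train, test = np.arange(150), np.arange(150, 200)

def fit_in_M(x, y, degree):
    """Minimize the empirical quadratic loss over M = polynomials of a given degree."""
    return np.polyfit(x, y, deg=degree)

def quadratic_loss(y, y_hat):
    return np.mean((y - y_hat) ** 2)

for degree in (1, 3, 10):
    m_hat = fit_in_M(x[train], y[train], degree)
    loss_test = quadratic_loss(y[test], np.polyval(m_hat, x[test]))
    print(f"degree {degree:2d}: test loss {loss_test:.3f}")
```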

In mathematical statistics, we have a dataset, i.e. a collection of observations $(y_i,\mathbf{x}_i)$. The first step is to assume that there is a probability space $(\Omega,\mathcal{F},\mathbb{P})$ such that the observations can be seen as realisations of i.i.d. random variables $(Y_i,\mathbf{X}_i)$. More specifically, here, we assume that conditionally on the $\mathbf{X}_i$'s, the $Y_i$'s are i.i.d. with distribution $F_{\theta}$, in some parametric family. Then we solve something like

$$\max_{\theta}\big\{\mathcal{L}(\theta;\mathbf{y})\big\}$$

which is a maximum likelihood problem, i.e.

$$\max_{\theta}\left\{\sum_{i=1}^n \log f_{\theta}(y_i\mid\mathbf{x}_i)\right\}$$

Here we assume that there is a simple relationship between the parameter $\theta$ in the parametric family and the mean, $\mathbb{E}[Y\mid\mathbf{X}=\mathbf{x}]$, like $\mathbb{E}[Y\mid\mathbf{X}=\mathbf{x}]=h(\theta)$ for some (known) function $h(\cdot)$. Then, another assumption is that we believe that the mean can be a good 'predictor'. So here, we set

$$\widehat{y}=\widehat{m}(\mathbf{x})=h(\widehat{\theta})$$

A popular example is probably the Gaussian distribution, where the mean is the first component of the parameter. But here again, we’ll get back on that point soon.
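
As a toy illustration of that statistical route (a sketch I am adding, with made-up data): in the Gaussian case, maximize the log-likelihood numerically, then use the estimated mean $h(\widehat{\theta})=\widehat{\mu}$ as the predictor.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
y = rng.normal(loc=2.5, scale=1.2, size=100)  # i.i.d. Gaussian sample (illustrative)

def neg_log_likelihood(theta, y):
    """theta = (mu, log_sigma); Gaussian log-likelihood, sign flipped for minimization."""
    mu, log_sigma = theta
    sigma = np.exp(log_sigma)                 # keeps sigma > 0
    return -np.sum(-0.5 * np.log(2 * np.pi) - log_sigma - 0.5 * ((y - mu) / sigma) ** 2)

theta_hat = minimize(neg_log_likelihood, x0=np.array([0.0, 0.0]), args=(y,)).x
mu_hat, sigma_hat = theta_hat[0], np.exp(theta_hat[1])

# the mean is used as the 'predictor' for a future observation
print(f"predictor (estimated mean): {mu_hat:.3f}, compared with sample mean {y.mean():.3f}")
```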

To summarize, the main difference I see here is that machine learning starts in a non-probabilistic world. There is no assumption of a random sample, or of a possible causal relationship (implicitly admitted with the conditional distribution mentioned above). That's why in mathematical statistics we can get confidence intervals (it is possible to say that, with some probability, the predicted value is bounded by some quantities, since there is, underneath, a probabilistic model), while we should not get a confidence interval in a machine learning context. But it is possible to get a probabilistic interpretation of most techniques, as explained by Kevin Murphy in Machine Learning: a Probabilistic Perspective.

  • The paradox of econometrics

The paradox of econometrics is that it is taught as a machine learning problem. Almost every econometrics course starts with the Gauss-Markov theorem. The OLS estimator is defined based on a (quadratic) loss function, not based on some distribution of the residuals. We want to solve the following problem, which is the starting point of econometrics,

$$\widehat{\boldsymbol{\beta}}=\underset{\boldsymbol{\beta}}{\text{argmin}}\left\{\sum_{i=1}^n \big(y_i-\mathbf{x}_i^{\top}\boldsymbol{\beta}\big)^2\right\}$$

This is a machine learning formulation, with a quadratic loss function. We like that formulation because of the probabilistic interpretation we can get: the conditional expectation is the minimizer of the expected quadratic loss,

$$\mathbb{E}[Y\mid\mathbf{X}=\mathbf{x}]=\underset{c\in\mathbb{R}}{\text{argmin}}\;\mathbb{E}\big[(Y-c)^2\mid\mathbf{X}=\mathbf{x}\big]$$

or, if we consider the empirical formulation, where the expectation is replaced by its empirical counterpart,

$$\underset{m\in\mathcal{M}}{\text{argmin}}\left\{\frac{1}{n}\sum_{i=1}^n \big(y_i-m(\mathbf{x}_i)\big)^2\right\}$$

OLS is related to the expected value, which – again – is a natural predictor. But that is not the statistical way of defining econometrics (the way I described it in the previous section). If we want to do it properly, we should go for a GLM approach. To be more specific, we assume that we have i.i.d. observations (i.i.d. given the covariates), and that

$$Y\mid\mathbf{X}=\mathbf{x}\sim\mathcal{N}\big(\mathbf{x}^{\top}\boldsymbol{\beta},\sigma^2\big)$$

Then we end up solving the same problem as before, namely

$$\widehat{\boldsymbol{\beta}}=\underset{\boldsymbol{\beta}}{\text{argmax}}\left\{\sum_{i=1}^n \log f_{\boldsymbol{\beta}}(y_i\mid\mathbf{x}_i)\right\}=\underset{\boldsymbol{\beta}}{\text{argmin}}\left\{\sum_{i=1}^n \big(y_i-\mathbf{x}_i^{\top}\boldsymbol{\beta}\big)^2\right\}$$

but here, we have a statistical definition of the econometric problem. Of course the two are related, since the maximum-likelihood approach translates, from a computational point of view, into solving the first-order condition of an optimization problem. And from a computational point of view, there is no need for a probabilistic distribution: we can use a Poisson regression even when the dependent variable is not an integer! So obviously, econometric models cannot be described only as mathematical statistics ones.
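
A small numerical check of that equivalence (a sketch I am adding, with simulated data and made-up names): minimizing the sum of squared residuals and maximizing the Gaussian log-likelihood return the same $\widehat{\boldsymbol{\beta}}$.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n = 200
X = np.column_stack([np.ones(n), rng.normal(size=n)])   # intercept + one covariate
beta_true = np.array([1.0, 2.0])
y = X @ beta_true + rng.normal(scale=0.5, size=n)

# (1) machine-learning formulation: minimize the quadratic loss
beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)

# (2) statistical formulation: maximize the Gaussian log-likelihood in (beta, log sigma)
def neg_loglik(params):
    beta, log_sigma = params[:-1], params[-1]
    resid = y - X @ beta
    return 0.5 * np.sum(resid ** 2) / np.exp(2 * log_sigma) + n * log_sigma

beta_mle = minimize(neg_loglik, x0=np.zeros(3)).x[:-1]

print(np.round(beta_ols, 4), np.round(beta_mle, 4))  # identical up to numerical tolerance
```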

  • On nonparametric econometrics, or the meaning of ‘linear’

When I was giving my econometrics class a few years ago, I usually started with a simple example. In economic theory, one can find "relationships" between aggregated variables. For instance, with the Phillips curve, we want to visualize the relationship between inflation and unemployment. An economic theory might tell you that if unemployment decreases, inflation increases. From that economic theory, we might try to fit an econometric model describing that "relationship". But usually, econometric models are based on a linear relationship. There is no a priori reason to have a linear relationship. I mean, even if you look at (inflation versus unemployment) data, I challenge you to find something linear there.

The Cowles Commission, closely associated with the early days of econometrics (and with the Econometric Society and its prestigious journal Econometrica), had the postulate that an econometric model should be based on an economic model. That's how you get structural (simultaneous) equation models, like Klein's model in the 1950s,

$$\begin{cases}
C_t = a_0 + a_1 P_t + a_2 P_{t-1} + a_3\,(W^p_t + W^g_t) + \varepsilon_{1,t}\\
I_t = b_0 + b_1 P_t + b_2 P_{t-1} + b_3 K_{t-1} + \varepsilon_{2,t}\\
W^p_t = c_0 + c_1 X_t + c_2 X_{t-1} + c_3\, t + \varepsilon_{3,t}
\end{cases}$$

That is a standard (linear) econometric model, with parameters $a_i$, $b_i$ and $c_i$, and some linear relationships among the variables. Nonparametric econometric models started from the observation that those relationships might not be linear.

In my econometrics classes, I was introducing splines, but also local regression, which is very natural when you think about it: if the goal is to get a good estimate of $m(\mathbf{x})$, shouldn't you look in the neighbourhood of $\mathbf{x}$? We do not necessarily want a good global model, but a model that is good in that neighbourhood.
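
Here is a minimal local-regression sketch (my own illustration, with simulated data): a Nadaraya-Watson estimator, i.e. a kernel-weighted average of the $y_i$'s centred at $\mathbf{x}$, so only observations in the neighbourhood of $\mathbf{x}$ really matter.

```python
import numpy as np

rng = np.random.default_rng(7)
x = np.sort(rng.uniform(0, 10, size=300))
y = np.cos(x) + 0.2 * rng.normal(size=300)

def local_regression(x0, x, y, bandwidth=0.5):
    """Nadaraya-Watson estimator at x0: kernel-weighted average of the y_i's."""
    weights = np.exp(-0.5 * ((x - x0) / bandwidth) ** 2)   # Gaussian kernel
    return np.sum(weights * y) / np.sum(weights)

grid = np.linspace(0, 10, 21)
m_hat = np.array([local_regression(x0, x, y) for x0 in grid])
print(np.round(m_hat, 2))
```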

With nonparametric models, we start to have numerical problems, since the optimization problems are not as simple as linear ones, so the first goal is to get an efficient algorithm to solve them. Here we start to have connections with machine learning. The main difference I see is simple. In econometrics, seen as a mathematical statistics problem, we seek asymptotic results and nice probabilistic interpretations. In econometrics, seen as a machine learning problem, we focus more on the algorithm. We do not care that much about the interpretability of the output, we want a good algorithm. See for instance gradient boosting techniques against spline regression (in a recent post).

The blue line is a simple (linear) spline regression model, and the red line is a boosted spline algorithm (it is a stepwise procedure). The blue line is simple, easy to understand. The red line, after 200 iterations, is a sum of 200 functions. It is much more difficult to interpret. But who cares?
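
To give an idea of what such a boosting procedure does (a sketch of my own, using regression stumps rather than splines as base learners, on simulated data): at each iteration a simple function is fitted to the current residuals and added, with a shrinkage factor, to the running sum, so the final predictor is indeed a sum of 200 simple functions.

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.uniform(0, 10, size=300)
y = np.sin(x) + 0.3 * rng.normal(size=300)

def fit_stump(x, residual):
    """Best single-split step function (a regression stump) for the current residuals."""
    best = None
    for threshold in np.quantile(x, np.linspace(0.05, 0.95, 19)):
        left, right = residual[x <= threshold].mean(), residual[x > threshold].mean()
        sse = np.sum((residual - np.where(x <= threshold, left, right)) ** 2)
        if best is None or sse < best[0]:
            best = (sse, threshold, left, right)
    _, threshold, left, right = best
    return lambda z: np.where(z <= threshold, left, right)

# L2 boosting: repeatedly fit a stump to the residuals, add a shrunken version of it
learning_rate, stumps = 0.1, []
prediction = np.zeros_like(y)
for _ in range(200):
    stump = fit_stump(x, y - prediction)
    stumps.append(stump)
    prediction += learning_rate * stump(x)

print(f"training MSE after 200 iterations: {np.mean((y - prediction) ** 2):.3f}")
```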

  • On meta (or tuning) parameters

The other difference between machine learning and statistics (in the context of nonparametric models) is the choice of the tuning parameters (or meta-parameters) that show up, for instance the bandwidth in the context of kernel regression (which is the size of the neighbourhood in local regression) or kernel density estimation.

In traditional econometrics, we use plug-in methods, for instance Silverman's rule of thumb,

$$h^{\star}=\left(\frac{4\,\widehat{\sigma}^5}{3n}\right)^{1/5}\approx 1.06\,\widehat{\sigma}\,n^{-1/5}$$

This was obtained under the assumption that the observations are normally distributed (a lot of rule-of-thumb techniques have been derived under a normality assumption). In machine learning, again, we just use data, we do not need a probabilistic model: it is more natural to use cross-validation techniques, and that is the standard way of getting the optimal bandwidth parameter.
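
As an illustration (my own sketch, with simulated data), here are the two routes side by side for kernel density estimation: the plug-in Silverman bandwidth, and a bandwidth chosen by leave-one-out cross-validation of the log-likelihood.

```python
import numpy as np

rng = np.random.default_rng(5)
y = rng.gamma(shape=2.0, scale=1.5, size=150)   # skewed data, so the two answers differ
n, sigma_hat = len(y), y.std(ddof=1)

# plug-in: Silverman's rule of thumb
h_silverman = 1.06 * sigma_hat * n ** (-1 / 5)

def loo_log_likelihood(h, y):
    """Leave-one-out log-likelihood of a Gaussian kernel density estimate with bandwidth h."""
    diff = (y[:, None] - y[None, :]) / h
    kernel = np.exp(-0.5 * diff ** 2) / np.sqrt(2 * np.pi)
    np.fill_diagonal(kernel, 0.0)                      # leave each observation out
    density = kernel.sum(axis=1) / ((n - 1) * h)
    return np.sum(np.log(density))

grid = np.linspace(0.1, 2.0, 50)
h_cv = grid[np.argmax([loo_log_likelihood(h, y) for h in grid])]

print(f"Silverman: {h_silverman:.3f}, cross-validation: {h_cv:.3f}")
```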

  • Bootstrap

Bootstrap is an interesting tool that appears both in statistics and in machine learning. I am a real fan of bootstrap techniques. I usually see it as a non-parametric Monte Carlo algorithm: we use the inverse transform sampling technique, with the empirical cumulative distribution function. Use $y^{\star}=\widehat{F}_n^{-1}(U)$ with $U\sim\mathcal{U}([0,1])$, where

$$\widehat{F}_n(y)=\frac{1}{n}\sum_{i=1}^n \mathbf{1}(y_i\leq y)$$
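
In practice, drawing from $\widehat{F}_n^{-1}(U)$ is the same mechanism as resampling the observations with replacement; here is a tiny check of my own, with arbitrary simulated data.

```python
import numpy as np

rng = np.random.default_rng(11)
y = np.sort(rng.normal(size=50))          # an arbitrary sample, sorted

# inverse transform applied to the empirical quantile function:
# each order statistic y_(k) is picked with probability 1/n
u = rng.uniform(size=1000)
boot_inverse = y[np.floor(u * len(y)).astype(int)]

# ... which is the same as resampling with replacement
boot_resample = rng.choice(y, size=1000, replace=True)

print(np.round([boot_inverse.mean(), boot_resample.mean(), y.mean()], 3))
```
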
Monte Carlo techniques can be used in a probabilistic space, so it is natural to use them in mathematical statistics. I use them a lot in my courses when I define statistical tests: tests are based on statistics that have a known distribution when $H_0$ is true. But "known" is a very general word. I mean, if we can draw from that distribution, that's enough, right? So Monte Carlo techniques are important tools in mathematical statistics. And necessary in econometrics. In most textbooks, results in econometrics either rely on the Gaussian assumption (which is hardly satisfied in real life), or on asymptotic results (and then you need a large number of observations). With a small number of observations, bootstrap is necessary.
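
To illustrate the idea that being able to draw from the null distribution is enough (a sketch of my own, with a made-up null model): simulate the test statistic under $H_0$ many times and compare the observed value with that simulated distribution.

```python
import numpy as np

rng = np.random.default_rng(13)
y = rng.exponential(scale=1.0, size=12) - 1.0   # a small, clearly non-Gaussian sample

# test H0: E[Y] = 0, using the usual t statistic, but with a simulated null distribution
t_obs = y.mean() / (y.std(ddof=1) / np.sqrt(len(y)))

def simulate_null(n, n_sim=10_000):
    """Draw the t statistic under H0 (here: centred exponential errors), by Monte Carlo."""
    samples = rng.exponential(scale=1.0, size=(n_sim, n)) - 1.0
    return samples.mean(axis=1) / (samples.std(axis=1, ddof=1) / np.sqrt(n))

t_null = simulate_null(len(y))
p_value = np.mean(np.abs(t_null) >= abs(t_obs))
print(f"observed t = {t_obs:.2f}, Monte Carlo p-value = {p_value:.3f}")
```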

But bootstrap is also natural in machine learning, for instance with bagging. It is claimed to be a machine learning technique: "bagging, is a machine learning ensemble meta-algorithm designed to improve the stability and accuracy of machine learning algorithms" (as explained on Wikipedia). Actually, that is what we use when we use bootstrap in econometrics: we generate possible samples by resampling, and then we aggregate the predictions. Simple, isn't it?
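
A minimal bagging sketch (my own illustration, with simulated data and an intentionally unstable base learner): resample the data with replacement, refit a simple predictor on each bootstrap sample, and aggregate (here, average) the predictions.

```python
import numpy as np

rng = np.random.default_rng(9)
x = rng.uniform(0, 10, size=200)
y = np.sin(x) + 0.3 * rng.normal(size=200)
x_new = np.linspace(0, 10, 5)

def base_predictor(x, y, x_new, degree=6):
    """A deliberately unstable base learner: a high-degree polynomial fit."""
    return np.polyval(np.polyfit(x, y, deg=degree), x_new)

# bagging: refit the base learner on bootstrap samples, then average the predictions
predictions = []
for _ in range(200):
    idx = rng.choice(len(x), size=len(x), replace=True)   # bootstrap resample
    predictions.append(base_predictor(x[idx], y[idx], x_new))

bagged = np.mean(predictions, axis=0)
print(np.round(bagged, 2), np.round(np.sin(x_new), 2))
```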

  • To go further

I know, all of that is way too simplistic. But clearly, machine learning, statistics and econometrics will live together in the future, and there will be a lot of overlap. They will learn from each other. Probabilistic interpretations of machine learning tools will become more important, but so will computational aspects of statistics. But it started a long time ago, with nonparametric approaches and the branch of computational statistics. And a lot of interesting posts and articles have been published on those topics, starting with "Statistical Modeling: The Two Cultures" by Leo Breiman.



