Will Rogers, or the magic of averages

To borrow an example from @mathematicsprof, there is an interesting (though fairly simple) paradox about averages, called the Will Rogers phenomenon. The starting point is a statement attributed to Will Rogers, who claimed that "in the 1930's, there was a mass migration of people from Oklahoma to California. As a result, the average I.Q. of both states….. increased". The paradox is known in medicine as "stage migration", and is mentioned in quite a few academic studies. How is this possible? It is actually quite simple… Consider the following two groups

(1,2,3,4,5)(6,7,8,9,10)

The averages are respectively 3 and 8. Now, if the 6 leaves the second group and joins the first,

(1,2,3,4,5,6)(7,8,9,10)

the averages become 3.5 and 8.5. More formally, if we consider two groups $A$ and $B$ with $\bar{x}_A < \bar{x}_B$, then transferring an element $x$ with $\bar{x}_A < x < \bar{x}_B$ from $A$ to $B$ lowers both averages, whereas transferring such an element from $B$ to $A$ raises both….
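A quick check in R of the toy example above,

A=c(1,2,3,4,5); B=c(6,7,8,9,10)
mean(A); mean(B)       # 3 and 8
A2=c(A,6); B2=B[-1]    # the 6 leaves B and joins A
mean(A2); mean(B2)     # 3.5 and 8.5, both averages went up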

Surprising, isn't it (well, not so much once you see the solution)?

For car insurance pricing based on bust size!

Several insurance-specialized websites have started to mention a likely ruling of the European Court on discrimination in insurance (for instance here or there). One of the (economic) foundations of insurance is Akerlof's principle, which pushes insurers to segment by risk class. In order to segment, and to reveal the risk classes, one uses either the claims history (so-called a posteriori information) or exogenous (so-called a priori) information on the driver, the vehicle, its use, etc. For instance, one can use the age of the vehicle and the (average) number of kilometers driven, as in the graph below (found in the slides François Bucchini and I used when we taught the non-life insurance course at ENSAE, the probabilities being "normalized" to some sort of base 100)

or also the type of fuel used (diesel or gasoline)

We find that the more one drives, the higher the probability of having an accident, but the fuel type and the age of the vehicle also appear to be discriminating variables. And among the variables that seem significant (to explain the probability of having an accident), there is gender (crossed here with mileage, as before),

The effect may look marginal on this graph… but that is far from being the case. For instance, without using very advanced econometric techniques, one can look at the average number of claims, and the average claim cost, by gender and by age group (or also by occupation and by engine power). In a study done for an insurer, I had found the following figures

In the upper right corner (many accidents, and a high average cost) we find young men. So yes, young men are significantly riskier than other drivers. And the problem is that, if we do not segment, George Akerlof tells us that the insurance market disappears, since the "good" risks no longer want to pay for the "bad" risks. Without falling into an infernal spiral of segmentation, it is sound that premiums remain correlated with the underlying risk.

Insurers claim that they "do not discriminate, they differentiate". I will not enter the terminology debate (not today at least), but the goal is not to find variables that "explain" claims in a causal sense (despite the usual econometric terminology), but to find variables that are "correlated" with high claim experience, and to use them to segment. European insurers had, until now, benefited from an exemption in premium computation allowing them to charge different rates "where sex is a determining factor in the assessment of risk".

In a data analysis course, I had shown (here) that from students' grades on various exams, I could predict the students' gender. Well, the study was done quickly, with a small dataset (and thus without training and test samples), but it is easy to find variables that allow one to guess the gender of a driver. Some might be tempted to use shoe size, but personally I would rather use bust size, or a bust-to-hip ratio. I am almost sure that with such observations, one could get variables strongly correlated with the occurrence of accidents! In any case, it promises some lively moments at insurance agencies! or even at plastic surgeons' (removing breast implants to lower one's car insurance premium, now that would be original)!

When Nuns or Hells Angels get in a plane

Today, at lunch, Matthieu told us a nice story (or call it a paradox if you like) about the probability of finding your seat empty when you get on a plane.

  • a plane full of nuns

Assume that you are in the line to get on the airplane, and you are the 100th in line. The first passenger is scatterbrained, he has his head in the clouds, and when he gets on the airplane, he cannot remember where he should sit. His strategy is then extremely simple: he sits somewhere at random in the plane. So he picks a seat at random, and he waits.

Then come 98 nuns (one by one). And nuns are extremely polite: if there is someone in their seat (the one printed on their ticket), they do not complain, and pick another seat at random (among those available, of course). Then you arrive. The question is simple: what is the probability that someone is sitting in your seat?

Any idea…?

Maybe I should give more time to do the maths… and tell another story…

  • a plane full of Hells Angels

Consider almost the same problem as the one mentioned above, except that now it is not 98 nuns getting on the plane, but 98 Hells Angels. The problem here is that Hells Angels are slightly less polite than nuns. When they find someone sitting in the seat they should have, they do not shyly move to another seat: they grunt, and our scatterbrained man (who is actually sitting in their seat) has to move somewhere else. And the question is the same: you are the 100th person to get on the plane, what is the probability that someone is sitting in your seat? Any idea…?

The important point is that the problem is exactly the same (at least from a mathematical point of view; maybe not for the stewardess, or for the guy who enters the plane first). The point is that, at any time, there can be at most one person sitting in a seat which is not his or hers (in the sense that if we compare the list of passengers at any time with the list of seats taken, there should be at most one difference). The difference between the two stories is that in the first case it will be a nun, while in the second one it will be our scatterbrained (shy) guy.

  • Let us run simulations

If we do not see how to get that probability analytically, let us run some R code,

> set.seed(1)
> n=100; TEST=rep(NA,100000)
> for(s in 1:100000){
+ OCCUPIED=rep(FALSE,n)
+ OCCUPIED[sample(1:n,size=1)]=TRUE
+ for(j in 2:(n-1)){
+ FREE=which(OCCUPIED==FALSE)
+ if(OCCUPIED[j]==TRUE){OCCUPIED[sample(FREE,size=1)]=TRUE}
+ if(OCCUPIED[j]==FALSE){OCCUPIED[j]=TRUE}
+ }
+ TEST[s]=OCCUPIED[n]==TRUE
+ }
> mean(TEST)
[1] 0.49878

Here, we clearly see that the problem is the same (either with nuns or Hells Angels): we do not care about who will change his or her seat, we just look at which seats are available… So the program is valid for both problems (and the solution will then be the same). Another point is that the probability looks extremely simple: one out of two!

  • an analytical expression

Consider the Hells Angels problem (to fix notation). Let $p_i$ denote the probability that, after the $i$-th passenger has entered, our shy guy is sitting in my seat (seat $n$, with $n=100$ here). When he gets in the plane, the probability that he picks my seat is

$$p_1=\frac{1}{n}$$

Then, for the probability that, after the $i$-th passenger's entrance, our guy is sitting in my own seat (since the initial proof was not correct, I removed it, see below for a nicer argument), one can check that he sits uniformly among his own seat and the seats of the passengers who have not boarded yet, i.e.

$$p_i=\frac{1}{n-i+1}$$

So, we can get the probability that, when I get in (i.e. after the first $n-1$ passengers), our guy is sitting in my own seat as

$$p_{n-1}=\frac{1}{n-(n-1)+1}=\frac{1}{2}$$

Hence, there is one chance out of two that my seat will be free… (which is what we got with Monte Carlo simulations).

But a faster proof is to observe that, in the Hells Angels case, our guy will be kicked out until he reaches either his own seat or mine. Since those two events are equally likely, there is one chance out of two that he ends up in my seat (and since no Hells Angel will sit in mine, only this first guy can). So the probability that someone is in my seat when I get in is one half.
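As a side check, here is a minimal simulation sketch of the Hells Angels dynamics themselves (as described above), tracking only where the scatterbrained passenger ends up,

set.seed(1)
n=100; ns=100000; INMYSEAT=rep(NA,ns)
for(s in 1:ns){
guy=sample(1:n,size=1)                       # he picks a seat at random
for(j in 2:(n-1)){                           # the Hells Angels board one by one
if(guy==j){guy=sample(c(1,(j+1):n),size=1)}  # kicked out, he moves to a free seat
}
INMYSEAT[s]=(guy==n)                         # is he in my seat (seat n)?
}
mean(INMYSEAT)                               # should be close to 1/2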

Nice isn’t it ? And thanks Matthieu for the problem  (with his friend Claude’s solution with the Hells Angels, and Olivier and Renaud for their comments) !

More and more natural catastrophes…

A few weeks ago (here, or there, among many others) I briefly discussed the increase in the frequency of natural catastrophes. Australia is perhaps an example, with major droughts over the past 10 years, a summer heat wave in Victoria that caused the massive bushfires of 2009, the flooding in Queensland this winter, as well as a huge cyclone. So it looks like Australia is experiencing that increase in the frequency of natural catastrophes.

Nature published last week a series of interesting papers on natural catastrophes (and their relation with a human factor): Increased flood risk linked to global warming by Quirin Schiermeier (the likelihood of extreme rainfall may have been doubled by rising greenhouse-gas levels); Climate change: Human influence on rainfall by Richard P. Allan, including a Letter by Min and another Letter by Pall; Human contribution to more-intense precipitation extremes by Seung-Ki Min et al.; and Anthropogenic greenhouse gas contribution to flood risk in England and Wales in autumn 2000 by Pardeep Pall et al.

 

Talk at Desjardins General Insurance

This afternoon, I will give a talk at the seminar of the R&D department at Desjardins General Insurance, on correlation in claims reserving. A lot of interesting papers have been published recently on that topic. On multivariate Chain Ladder, see for instance the article by Carsten Prohl and Klaus Schmidt (here) or the one by Michael Merz and Mario Wuthrich (there).

But I think another interesting perspective (so far not used in claims reserving, but one should find some time to look at it) would be multivariate regression (multivariate GLMs), e.g.

All that will be mentioned in the talk. Slides can be downloaded here,

The dataset used in the example can be obtained with the code below

> P.corp=read.table("http://freakonometrics.blog.free.fr/public/data/auto-corporel.csv",
+        header=FALSE,sep=";",na.strings = "NA",dec=",")
> P.corp=as.matrix(P.corp)
> n=nrow(P.corp)
> P.mat=read.table("http://freakonometrics.blog.free.fr/public/data/auto-materiel.csv",
+        header=FALSE,sep=";",na.strings = "NA",dec=",")
> P.mat=as.matrix(P.mat)
> P.mat=P.mat[1:n,1:n]
> P.mat=P.mat[2:10,1:9]
> P.corp=P.corp[2:10,1:9]
> n=9
> P.tot=P.mat+P.corp

Statistics, from theory to practice (part 2)

Two weeks ago (here) we saw how to construct estimators for the tail index of the Pareto distribution. We obtained 4 (different) values. Now, we can ask which one should be the "best".

  • properties of the maximum likelihood estimator

Now that we have made some progress in the course, we can continue the study of the distribution of claim costs (a study started here). The likelihood of the sample is

$$\mathcal{L}(\alpha)=\prod_{i=1}^n \alpha\,x_i^{-\alpha-1}$$

i.e.

$$\log\mathcal{L}(\alpha)=n\log\alpha-(\alpha+1)\sum_{i=1}^n\log x_i$$

so that the first order condition

$$\frac{\partial\log\mathcal{L}(\alpha)}{\partial\alpha}=\frac{n}{\alpha}-\sum_{i=1}^n\log x_i=0$$

yields $\widehat{\alpha}_{MLE}=n/\sum_{i}\log X_i$, and

$$\frac{\partial^2\log\mathcal{L}(\alpha)}{\partial\alpha^2}=-\frac{n}{\alpha^2}$$

Hence the Fisher information is $I(\alpha)=n/\alpha^2$.
The asymptotic variance of $\widehat{\alpha}_{MLE}$ is then

$$\frac{\alpha^2}{n}$$
  • properties of the method of moments estimator

We had noted that

$$\mathbb{E}(X)=\frac{\alpha}{\alpha-1}$$

i.e. if $\mu=\alpha/(\alpha-1)$, then $\widehat{\alpha}_{MM}=\overline{X}_n/(\overline{X}_n-1)$. Moreover, since the central limit theorem gives

$$\sqrt{n}\,\big(\overline{X}_n-\mu\big)\ \xrightarrow{\ \mathcal{L}\ }\ \mathcal{N}\big(0,\sigma^2\big),\qquad\text{with }\sigma^2=\frac{\alpha}{(\alpha-1)^2(\alpha-2)},$$

the delta method allows us to write, setting $g(x)=x/(x-1)$ (so that $g'(x)=-1/(x-1)^2$),

$$\sqrt{n}\,\big(g(\overline{X}_n)-g(\mu)\big)\ \xrightarrow{\ \mathcal{L}\ }\ \mathcal{N}\big(0,g'(\mu)^2\sigma^2\big)$$

i.e.

$$\sqrt{n}\,\big(\widehat{\alpha}_{MM}-\alpha\big)\ \xrightarrow{\ \mathcal{L}\ }\ \mathcal{N}\!\left(0,\frac{\alpha(\alpha-1)^2}{\alpha-2}\right)$$

The asymptotic variance is then

$$\frac{\alpha(\alpha-1)^2}{\alpha-2}$$
  • properties of the median-based estimator

For the last estimator, the asymptotic behavior of empirical quantiles (a Glivenko-Cantelli type result on quantile functions) allows us to write

$$\sqrt{n}\,\big(\widehat{q}_{n,p}-q_p\big)\ \xrightarrow{\ \mathcal{L}\ }\ \mathcal{N}\!\left(0,\frac{p(1-p)}{f(q_p)^2}\right)$$

i.e. for the median,

$$\sqrt{n}\,\big(\widehat{m}_n-m\big)\ \xrightarrow{\ \mathcal{L}\ }\ \mathcal{N}\!\left(0,\frac{1}{4f(m)^2}\right)$$

where the asymptotic variance (for the median) can be simplified, since $m=2^{1/\alpha}$ and $f(m)=\alpha\,2^{-(\alpha+1)/\alpha}$, as

$$\frac{1}{4f(m)^2}=\frac{2^{2/\alpha}}{\alpha^2}$$

Again, using the delta method (see here for a numerical application of that method), with

$$g(x)=\frac{\log 2}{\log x},\qquad g'(x)=-\frac{\log 2}{x(\log x)^2}$$

i.e. $g'(m)=-\alpha^2/(2^{1/\alpha}\log 2)$, we deduce that

$$\sqrt{n}\,\big(\widehat{\alpha}_{med}-\alpha\big)\ \xrightarrow{\ \mathcal{L}\ }\ \mathcal{N}\!\left(0,g'(m)^2\,\frac{2^{2/\alpha}}{\alpha^2}\right)$$

The asymptotic variance of our estimator is then (removing the factor $n$ in the denominator, since it is common to all of them)

$$\frac{\alpha^2}{(\log 2)^2}$$

For three of our estimators (I did not do the fourth one, it would be beyond the syllabus), we could easily obtain asymptotic variances. Graphically, those asymptotic variances evolve as follows, as a function of $\alpha$, with the maximum likelihood estimator (in red), the method of moments (in blue) and the median-based one (in purple); a small R sketch to reproduce that comparison is given below
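A minimal sketch of that comparison (the formulas are the ones derived above, the grid of values for $\alpha$ is an arbitrary choice of mine),

alpha=seq(2.05,5,by=.01)
V.mle=alpha^2                        # maximum likelihood
V.mom=alpha*(alpha-1)^2/(alpha-2)    # method of moments
V.med=alpha^2/(log(2))^2             # median based
plot(alpha,V.med,type="l",col="purple",ylab="asymptotic variance (times n)")
lines(alpha,V.mom,col="blue")
lines(alpha,V.mle,col="red")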

Clearly the first estimator (the maximum likelihood one) has a smaller asymptotic variance than the other two (but we saw that in class: this is what the Cramér-Rao bound guarantees). In other words, the maximum likelihood estimator is probably the most reliable one….
Using the results mentioned above, we can also construct asymptotic confidence intervals for the parameter $\alpha$, based on our three estimators, i.e. for a 90% confidence interval,

$$\left[\widehat{\alpha}-1.64\sqrt{\frac{\widehat{V}_\infty}{n}}\ ;\ \widehat{\alpha}+1.64\sqrt{\frac{\widehat{V}_\infty}{n}}\right]$$

where the estimates of the asymptotic variances $\widehat{V}_\infty$ are obtained by substituting $\widehat{\alpha}$ for $\alpha$ in the formulas above.
If we consider that the maximum likelihood estimator is the best one we can use, we obtain as confidence interval for $\alpha$,

> 2.845093-1.64/sqrt(100)*2.845093
[1] 2.378498
> 2.845093+1.64/sqrt(100)*2.845093
[1] 3.311688

The next step will be to introduce tests. We will then be able to ask whether we can accept the hypothesis http://freakonometrics.blog.free.fr/public/perso2/pareto-301.gif. But that is another story… to be continued.

Does the Student-based confidence interval have any interest in practice?

Friday, in the statistics course, we started the section on confidence intervals, and as always, I got a bit confused with the degrees of freedom of the Student distribution (should it be $n$ or $n-1$?) and with the empirical variance (should we divide by $n$ or by $n-1$?).
And each time I start to get confused, the students obviously see it, and start to ask tricky questions… So let us make it clear now. The correct formula is the following: let

$$\overline{x}=\frac{1}{n}\sum_{i=1}^n x_i\qquad\text{and}\qquad s^2=\frac{1}{n-1}\sum_{i=1}^n\big(x_i-\overline{x}\big)^2,$$

then

$$\left[\overline{x}-t_{n-1,1-\alpha/2}\,\frac{s}{\sqrt{n}}\ ;\ \overline{x}+t_{n-1,1-\alpha/2}\,\frac{s}{\sqrt{n}}\right]$$

is a confidence interval for the mean of a Gaussian i.i.d. sample.
But the important thing is neither the $n-1$ that appears as degrees of freedom, nor the normalization used in the estimation of the standard error. As always with mathematical results, the most important part is the one not mentioned here: observations have to be i.i.d. and to be normally distributed. And not "almost" normally distributed….
Consider the following case: we have $n=20$ observations that are almost normally distributed. Namely, I consider a Student t distribution with 3 degrees of freedom,

n=20; X=rt(n,df=3)

An Anderson-Darling normality test accepts the normality assumption in 2 cases out of 3.

library(nortest)
pv=rep(NA,10000)
for(s in 1:10000){
X=rt(n,df=3)
pv[s]=ad.test(X)$p.value
}
mean(pv>.05)
[1] 0.6799

With a true normal distribution, it would be 95% of the cases, so in some sense, I can pretend that I generate almost normal samples.
For those samples, we can look at the bounds of the 90% confidence interval for the mean, with three different formulas,

$$\left[\overline{x}-t_{n-1,95\%}\,\frac{s}{\sqrt{n}}\ ;\ \overline{x}+t_{n-1,95\%}\,\frac{s}{\sqrt{n}}\right]$$

i.e. the correct one, or the one where I considered $n$ degrees of freedom instead of $n-1$,

$$\left[\overline{x}-t_{n,95\%}\,\frac{s}{\sqrt{n}}\ ;\ \overline{x}+t_{n,95\%}\,\frac{s}{\sqrt{n}}\right]$$

and the one where we considered a Gaussian quantile instead of a Student t one,

$$\left[\overline{x}-z_{95\%}\,\frac{s}{\sqrt{n}}\ ;\ \overline{x}+z_{95\%}\,\frac{s}{\sqrt{n}}\right]$$

(and one might also think of looking at the non-unbiased estimator of the variance, dividing by $n$ rather than $n-1$).

m=IC1=IC2=IC3=rep(NA,10000)
for(s in 1:10000){
X=rt(n,df=3)
m[s]=mean(X)
sd=sqrt(var(X))
IC1[s]=m[s]-qt(.95,df=n-1)*sd/sqrt(n)
IC2[s]=m[s]-qt(.95,df=n)*sd/sqrt(n)
IC3[s]=m[s]-qnorm(.95)*sd/sqrt(n)
}

On the graph below are plotted the distributions of the values obtained as the lower bound of the 90% confidence interval,

(the curves obtained with $n$ and with $n-1$ degrees of freedom in the quantiles are the same here).
The dotted vertical line is the true lower bound of the 90%-confidence interval, given the true distribution (which was not a Gaussian one).
If I go back to the standard procedure found in any statistics textbook, since the sample is almost Gaussian, the lower bound of the confidence interval should be (using the Student t quantile)

mean(IC1)
[1] -0.605381

instead of

mean(IC3)
[1] -0.5759391

(obtained with a Gaussian distribution instead of a Student one). Actually, both of them are quite different from the correct one which was

quantile(m,.05)
       5% 
-0.623578

As I mentioned in a previous post (here), an important issue is that if we do not know a parameter and substitute an estimator, there is usually a cost (which usually means that the confidence interval should be larger). And this is what we observe here. From a teacher's point of view, it is an important issue that should be mentioned in statistics courses….

But another important point is that the confidence interval is valid only if the underlying distribution is Gaussian. Not almost Gaussian, but really Gaussian. So since, with $n=20$ observations, almost anything might look Gaussian, I was wondering what should be done in practice… Because, in some sense, using a Student quantile based confidence interval on an almost Gaussian sample is as wrong as using a Gaussian quantile based confidence interval on a Gaussian sample…

In statistics, having too much information might not be a good thing

A common idea in statistics is that if we don't know something, and we use an estimator of that something (instead of the true value), then there will be some additional uncertainty. For instance, consider a random i.i.d. sample from a Gaussian distribution. Then, a confidence interval for the mean is

$$\left[\overline{x}-z_{1-\alpha/2}\,\frac{\sigma}{\sqrt{n}}\ ;\ \overline{x}+z_{1-\alpha/2}\,\frac{\sigma}{\sqrt{n}}\right]$$

where $z_{1-\alpha/2}$ is the quantile of probability level $1-\alpha/2$ of the standard normal distribution $\mathcal{N}(0,1)$. But the standard deviation $\sigma$ (the "something" I was talking about earlier) is usually unknown. So we substitute an estimate of the standard deviation, e.g.

$$s=\sqrt{\frac{1}{n-1}\sum_{i=1}^n\big(x_i-\overline{x}\big)^2}$$

and the cost we have to pay is that the new confidence interval is

$$\left[\overline{x}-t_{n-1,1-\alpha/2}\,\frac{s}{\sqrt{n}}\ ;\ \overline{x}+t_{n-1,1-\alpha/2}\,\frac{s}{\sqrt{n}}\right]$$

where now $t_{n-1,1-\alpha/2}$ is the quantile of the Student distribution, of probability level $1-\alpha/2$, with $n-1$ degrees of freedom.
We call it a cost since the new confidence interval is now larger (the Student distribution has higher upper-quantiles than the Gaussian distribution).
So usually, if we substitute an estimation to the true value, there is a price to pay.
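For instance (a quick numerical illustration of mine, with n = 20 and a 90% confidence level), the cost can be read directly off the quantiles,

n=20
qnorm(.95)        # Gaussian quantile, when the variance is known
qt(.95,df=n-1)    # Student quantile, when the variance is estimated (slightly larger)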
A few years ago, with Jean David Fermanian and Olivier Scaillet, we were writing a survey on copula density estimation (using kernels, here). At the end, we wanted to add a small paragraph on the following point: we assumed that we wanted to fit a copula on a sample $(U_i,V_i)$, i.i.d. with distribution $C$, a copula, but in practice we start from a sample $(X_i,Y_i)$ with joint distribution $F$ (assumed to have continuous margins, and a unique copula $C$). And since the margins are usually unknown, there should be a price to pay for not observing them.
To be more formal, in a perfect world, we would consider the sample

$$\big(U_i,V_i\big)=\big(F_1(X_i),F_2(Y_i)\big),\qquad i=1,\ldots,n,$$

but in the real world, we have to consider

$$\big(\widehat{U}_i,\widehat{V}_i\big)=\big(\widehat{F}_1(X_i),\widehat{F}_2(Y_i)\big),\qquad i=1,\ldots,n,$$

where it is standard to consider ranks, i.e. $\widehat{F}_1$ and $\widehat{F}_2$ are the empirical cumulative distribution functions.
My point is that, when I ran simulations for the survey (the idea was more to give illustrations of several estimation techniques than proofs of technical theorems), we observed that the price to pay… was negative! I.e. the variance of the estimator of the density (wherever on the unit square) was smaller on the pseudo-sample $(\widehat{U}_i,\widehat{V}_i)$ than on the perfect sample $(U_i,V_i)$.
At the time, we could not understand why we got that counter-intuitive result: even if we do know the true distribution, it is better not to use it, and to use a nonparametric estimator instead. Our interpretation was based on the discrepancy concept, and related to the latin hypercube construction:

With ranks, the data are more regular, and the marginal distributions are exactly uniform on the unit interval. So there is less variance.
This was our heuristic interpretation.
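Here is a minimal simulation sketch of that phenomenon (the Gaussian copula, the sample size and the evaluation point are arbitrary choices of mine, and I look at the empirical copula rather than at the kernel density estimator of the survey): we compare the variance of the empirical copula at one point, computed with the true margins and with ranks,

library(MASS)
set.seed(1)
n=50; ns=5000; u=.3; v=.3; r=.5
C.true=C.pseudo=rep(NA,ns)
for(s in 1:ns){
Z=mvrnorm(n,mu=c(0,0),Sigma=matrix(c(1,r,r,1),2,2))
U=pnorm(Z)                 # perfect world: the true (Gaussian) margins are known
R=apply(Z,2,rank)/n        # real world: ranks, i.e. pseudo-observations
C.true[s]=mean((U[,1]<=u)&(U[,2]<=v))
C.pseudo[s]=mean((R[,1]<=u)&(R[,2]<=v))
}
var(C.pseudo)/var(C.true)  # typically smaller than 1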
A couple of weeks ago, Christian Genest and Johan Segers proved that intuition correct, in an article published in JMVA,

Well, we observed something for finite $n$, but Christian and Johan obtained an analytical (asymptotic) result. Hence, if we denote by

$$C_n(u,v)=\frac{1}{n}\sum_{i=1}^n \mathbf{1}\big(U_i\leq u,V_i\leq v\big)$$

the empirical copula in the perfect world (with known margins) and

$$\widehat{C}_n(u,v)=\frac{1}{n}\sum_{i=1}^n \mathbf{1}\big(\widehat{U}_i\leq u,\widehat{V}_i\leq v\big)$$

the one constructed from the pseudo sample, they obtained that, everywhere

$$\mathrm{Var}\big(\widehat{C}_n(u,v)\big)\ \leq\ \mathrm{Var}\big(C_n(u,v)\big)\qquad\text{(asymptotically),}$$

with nice graphs of the ratio of those asymptotic variances,

So I was very happy last week, when Christian showed me their results, to learn that our intuition was correct. Nevertheless, it is still a very counter-intuitive result…. If anyone has seen similar things, I'd be glad to hear about it!

What is the optimal strategy to marry the best one ?

Valentine's day is a nice opportunity to post on hot and sexy topics… Well, it's also an important day that I should not miss, probably as much as Saint Patrick's day, or my wife's birthday. And as I mentioned last week (here), it is difficult to find the distribution of the age at marriage on the internet… So maybe we can build a small model, to understand when girls decide to get married… Consider a young girl who knows that she will not meet thousands of men willing to marry her (actually, one can consider the opposite point of view, with a young man who can find only $n$ girls willing to marry him; the problem can be assumed to be symmetric, especially since I do not want to get feminist leagues on my back).

Assume that $n$ men agree to marry her. Of course, among those $n$ men, our girl wants to marry the "best" one (assume that men can be ranked objectively). Of course, she cannot meet the "best" guy immediately, so men are met randomly, and after each "interview", either she rejects him (forever; we assume she cannot go back and admit she made a mistake), or she agrees to marry him. An important assumption is that rejected men cannot be recalled.

From a mathematical point of view, we need to find the optimal stopping time. Here, the problem is slightly different compared with that one (with the optimal time to get a bonus) or this one (with the optimal time to sit in a bar and have a beer). Here, we do not give "grades" to the guys. The only thing observed is their relative ranks. Our girl cannot know if she is meeting the best of all men (out of $n$), but she knows whether this one is better than the ones she has already met. From a mathematical point of view, at time $k$, she knows the relative rank of the $k$-th candidate (compared with the first $k-1$), not his absolute rank. We also assume that $n$ is known.

The optimal strategy is that she has to reject automatically the first $m$ candidates (some kind of calibration period), and then, starting at time $m+1$, she will marry the first one who is better than all the ones she has already met.
So assume that our girl has already met $m$ guys, and decided to reject all of them. Now she starts looking seriously…. For an arbitrary cut-off $m$, the probability that the best candidate shows up at some time $j>m$, and gets selected, is

$$\mathbb{P}(\text{best at position } j \text{ and selected})=\frac{1}{n}\times\frac{m}{j-1}$$

i.e.

$$\frac{m}{n}\times\frac{1}{j-1}$$

The $1/n$ term comes from the fact that there is only one "best" guy, equally likely to show up at any position $j$, and the $m/(j-1)$ term is the probability that the best among the first $j-1$ candidates appears during the calibration period (so that no one is selected before position $j$, as can be visualized below)

Thus, we can write

$$\mathbb{P}(\text{marry the best})=\sum_{j=m+1}^{n}\frac{1}{n}\times\frac{m}{j-1}$$

i.e.

$$\mathbb{P}(\text{marry the best})=\frac{m}{n}\sum_{j=m+1}^{n}\frac{1}{j-1}\ \approx\ \frac{m}{n}\log\frac{n}{m}$$

Thus, setting $x=m/n$, the function $x\mapsto -x\log x$ is maximized at $x=1/e$, which gives the optimal time to stop (or here, to start looking seriously), i.e. around 36.7% of the candidates.

Hence, the best strategy is to reject automatically the first $m^{\star}\approx n/e$, i.e. roughly 37%, of the candidates (the value that maximizes the function above), and then to select the first one (if possible) that is better than all previous candidates. A quick numerical check of that formula is given below.
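A minimal numerical check of the exact formula above (with n = 100, an arbitrary choice of mine),

n=100
p=function(m) (m/n)*sum(1/(m:(n-1)))   # P(marry the best) with cut-off m
P=sapply(1:(n-1),p)
which.max(P)                           # optimal cut-off, close to n/e (about 37)
max(P)                                 # probability of success, about 1/e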

Consider the following Monte Carlo procedure: assume that she rejects, automatically, the first $m$ candidates (we consider a loop over all possible values of $m$), and then gets married with the first one who is better than all the ones she has seen during the calibration period (if any),

n=100
ns=1000000
MOY1=MOY2=rep(NA,n)
for(m in 2:(n-1)){
  WHICH=rep(NA,ns); MARIAGE=rep(0,ns)
  for(s in 1:ns){
    Z=sample(1:n,size=n,replace=FALSE)  # levels of the n candidates, in random order
    mx=max(Z[1:m])                      # best level seen during the calibration period
    STOP=FALSE
    for(k in (m+1):n){
      if((Z[k]>mx)&(STOP==FALSE)){      # first candidate better than all previous ones
        WHICH[s]=k
        STOP=TRUE
        MARIAGE[s]=1
      }
    }
  }
  HIS=WHICH[is.na(WHICH)==FALSE]
  TH=table(HIS)
  MOY1[m]=mean(HIS)
  MOY2[m]=mean(HIS)*mean(MARIAGE)
  THH=rep(NA,100)
  THH[as.numeric(names(TH))]=as.numeric(TH)/ns
}

If we run it over all possible values of $m$, we get

http://freakonometrics.hypotheses.org/files/2015/12/mariage-anim.gif

The "distribution" (in green) can be seen as the probability of marrying the guy of a given level, given that the first $m$ were rejected. The sum is not one, since there is a non-null probability of marrying no one. Actually, the probability of getting married is the following

The more she waits, the smaller the probability of getting married. But on the other hand, the more she waits, the "better" the husband…. On the graph below is plotted the rank of the guy she marries, if she gets married (it was actually the vertical solid line in red in the animation)

So there is a trade-off. If not getting married gives a satisfaction of 0 (lower than finally marrying anyone), and if marrying the guy with rank $k$ gives her satisfaction $k$, we have

(it was the vertical dotted line in red in the animation). So it looks like it is optimal to test the first 35-38% of the men, and then to marry the best one she finds (if he is better than the best one she met during the "testing" procedure). So our previous analysis looks correct…

Now, to go further, I have to admit that this model is known in the academic literature as the secretary problem. In 1989, Thomas Ferguson wrote a nice paper in Statistical Science entitled Who solved the secretary problem? (here). Anthony Mucci also published an article in the Annals of Probability on possible extensions, in 1973 (here), as did Thomas Lorenzen (there) in 1981. This problem is definitely an interesting one!

Method of moments estimators

In the course, we saw that if $\mathbb{E}(X)=g(\theta)$, a natural estimator for the parameter is $\widehat{\theta}=g^{-1}\big(\overline{X}_n\big)$. This is called the method of moments. But more generally, one may have $\mathbb{E}(X^k)=g_k(\theta)$, and consider as a natural estimator

$$\widehat{\theta}_k=g_k^{-1}\!\left(\frac{1}{n}\sum_{i=1}^n X_i^k\right)$$

We noted in class that the first issue is one of invertibility: to use this method, we need a one-to-one relationship between the moment and the parameter. We saw with the binomial distribution that this is not the case for the variance, for instance.

The second issue is a numerical one: the function has to be inverted (which may not be trivial), but with a secant-type method (programmed here), it should work. For instance, for the Poisson distribution, we know that

$$\mathbb{E}(X)=\lambda$$
$$\mathbb{E}(X^2)=\lambda(1+\lambda)$$
$$\mathbb{E}(X^3)=\lambda(1+3\lambda+\lambda^2)$$
$$\mathbb{E}(X^4)=\lambda(1+7\lambda+6\lambda^2+\lambda^3)$$

(etc.). One can construct the estimators numerically as follows

secant=function(fun, x0, x1, tolerence=1e-07, niter=500){
for ( i in 1:niter ) {
	x2 <- x1-fun(x1)*(x1-x0)/(fun(x1)-fun(x0))
	if (abs(fun(x2)) < tolerence)
		return(x2)
	x0 <- x1
	x1 <- x2
}}
X=rpois(50,10)
m1=mean(X)
f=function(x){x*(1+x)-mean(X^2)}
secant(f,0,40)
m2=secant(f,0,50)
f=function(x){x*(1+3*x+x^2)-mean(X^3)}
secant(f,0,50)
m3=secant(f,0,50)
f=function(x){x*(1+7*x+6*x^2+x^3)-mean(X^4)}
secant(f,0,100)
m4=secant(f,0,20)

The last issue is a more statistical one: it seems that the estimator based on the first moment is better than the others. In the figure below, we simulated 10,000 samples from a Poisson distribution with parameter 10 (the true value of the parameter, represented here by the vertical dotted line), and the estimator based on the first moment is shown in red, the cross at the center being its mean value, and the segment an 80% confidence interval for our estimator (i.e. the 10% and 90% quantiles obtained over the 10,000 simulations),

In blue is the estimator based on the second moment, in green the one based on the third moment, and in purple the one based on the fourth. In short, the higher the order of the moments used, the larger the bias seems to be, and the larger the variance of our estimator (see the sketch below).
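A minimal sketch of that simulation (reusing the secant function defined above; the starting values for the root search are arbitrary choices of mine),

set.seed(1)
ns=10000
M=matrix(NA,ns,4)
for(s in 1:ns){
X=rpois(50,10)
M[s,1]=mean(X)
f2=function(x){x*(1+x)-mean(X^2)};             M[s,2]=secant(f2,0,50)
f3=function(x){x*(1+3*x+x^2)-mean(X^3)};       M[s,3]=secant(f3,0,50)
f4=function(x){x*(1+7*x+6*x^2+x^3)-mean(X^4)}; M[s,4]=secant(f4,0,50)
}
apply(M,2,mean)                              # the bias increases with the order of the moment
apply(M,2,function(x) quantile(x,c(.1,.9)))  # and so does the dispersion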

Bottom line: the (one and only) method of moments estimator should be built on the lowest-order moments possible…

… for all the gold in the world

A quick post on gold (less statistical than the one written 6 months ago on the price of gold, here). Because tonight, with the kids, we were flipping through Theodore Gray's fascinating book on the Elements.
And my daughter wanted me to read her the page on aurum, its Latin name, a.k.a. Au in the periodic table of elements. Well, I have to admit that the drawings of atoms did not amuse her much; she was rather looking at the shiny things…
And I was surprised by the following sentence: "all the gold ever extracted in human history would fit today in a cube roughly 18.30 meters on a side". I admit it somewhat shattered the myth built by several American movies, where we see mountains of gold bars, because in the end a cube 18 meters on a side is relatively little…
Actually, that makes 6128.5 m³. And if we want the weight, noting that the density of gold is around 19.25 (i.e. g/cm³), we get a weight of around 117,973 tonnes. Which is quite heavy, I admit… Now if we look at the price of gold, a one-kilogram bar was trading (today) at 32,336 euros. Well, according to my calculator, that gives 3.815 × 10^12 euros…

> 183^3*19.25*32336
[1] 3.814787e+12

which corresponds to 3,814 billion euros.

For fans of orders of magnitude (here), "the forex" (i.e. trading on currency markets) "is 4,000 billion dollars traded daily* in 2010". But such amounts traded on markets are rather hard to grasp. Another order of magnitude was given by the WWF, which announced (here) that "4,000 billion euros of energy savings per year" were possible. Suddenly, I realized that those who envision a collapse of financial markets and a return to gold as the reference value are far off the mark…. Because gold really is a very scarce element….
* or every two days, depending on what is counted, cf. the discussion in the comments

Statistics, from theory to practice

To illustrate the course a little, I put a dataset online here. These are claim costs that exceeded 1,000$ (sorted in increasing order). To import the dataset into R, the code is simply,

> X=read.table(
"http://freakonometrics.blog.free.fr/public/data/sinistres.txt")$x
> X
 [1] 1.003 1.016 1.023 1.027 1.037 1.039 ...
[13] 1.061 1.072 1.078 1.082 1.087 1.094 ...
[25] 1.110 1.112 1.117 1.132 1.138 1.141 ...
[37] 1.180 1.186 1.187 1.190 1.193 1.203 ...
[49] 1.321 1.326 1.338 1.342 1.343 1.344 ...
[61] 1.428 1.432 1.442 1.457 1.463 1.466 ...
[73] 1.551 1.553 1.566 1.584 1.632 1.695 ...
[85] 1.881 1.893 1.897 1.958 2.000 2.175 ...
[97] 3.045 3.103 4.495 5.614

For information, the histogram looks like this

We will assume that a Pareto distribution can be fitted, whose survival function is given by

$$\bar{F}(x)=\mathbb{P}(X>x)=x^{-\alpha},\qquad x\geq 1,$$

and whose density is

$$f(x)=\alpha\,x^{-\alpha-1},\qquad x\geq 1.$$

The expected value is

$$\mathbb{E}(X)=\frac{\alpha}{\alpha-1}$$

and the variance

$$\mathrm{Var}(X)=\frac{\alpha}{(\alpha-1)^2(\alpha-2)}$$
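As a quick sanity check of these formulas, here is a small sketch of mine (simulating from this Pareto distribution by inversion, with an arbitrary value of the parameter),

a=5; U=runif(1e6); XP=U^(-1/a)
mean(XP); a/(a-1)            # empirical vs. theoretical mean
var(XP); a/((a-1)^2*(a-2))   # empirical vs. theoretical variance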
  • Method of moments estimator
Since $\mathbb{E}(X)=\alpha/(\alpha-1)$, i.e. $\alpha=\mathbb{E}(X)/(\mathbb{E}(X)-1)$, a natural estimator is

$$\widehat{\alpha}_{MM}=\frac{\overline{X}}{\overline{X}-1}$$

Numerically, we obtain

> mean(X)/(mean(X)-1)
[1] 2.946827
  • Maximum likelihood estimator

We can write the likelihood, and search numerically for its maximum (or for the minimum of minus the log-likelihood, since the optimization function looks for minima),

> f=function(x,a){
+ a/x^(a+1)
+ }
> LogV = function(a,echantillon){
+ -sum(log(f(echantillon,a)))
+ }
> optim(fn=LogV,par=2,echantillon=X,method="BFGS")$par
[1] 2.845093

But we could have done better… since here the maximum likelihood estimator can actually be written analytically

$$\widehat{\alpha}_{MLE}=\frac{n}{\sum_{i=1}^n\log X_i}$$
> length(X)/(sum(log(X)))
[1] 2.845093

(everything is fine, we find the same value).

  • Estimation by linear regression

We can note that

$$\log\mathbb{P}(X>x)=-\alpha\,\log x$$

which means that if we plot the logarithm of the empirical survival probabilities against the logarithm of the costs, and if the points are aligned along a straight line (through the origin), the slope will give us an estimator of our parameter. The simplest approach is to take the slope of the line passing through the origin and through the center of gravity of the point cloud,

> Z=X
> Y=log((length(Z):1)/length(Z))
> X=log(Z)
> plot(X,Y)
> -mean(Y)/mean(X)
[1] 2.753414
  • Estimation using the median

Finally, note that the median is

$$m=\bar{F}^{-1}\!\left(\tfrac{1}{2}\right)=2^{1/\alpha}$$

A natural estimator is then

$$\widehat{\alpha}_{med}=\frac{\log 2}{\log\big(\mathrm{median}(X_1,\ldots,X_n)\big)}$$

Numerically, it is then very simple,

> log(2)/log(quantile(Z,.5))
     50% 
2.413719

The bottom line: we have four estimators, which give us four different numerical values

2.9468 2.8451 2.7534 2.4137

For those who want to find other estimators (one can construct hundreds of them), an interesting reference is the following,
Quandt, R. E. (1966). Old and new methods of estimation and the Pareto distribution, Metrika, 10, 55-82 (here)
The goal of mathematical statistics is to better understand the (fundamental) properties of those estimators, in order to choose the "best" one. And it is an important problem, for instance if we want to set up a reinsurance treaty (and transfer the very large claims). For instance, if we want to transfer claims exceeding 100,000$, the probability of exceeding that threshold is about 10 times larger with the last estimator than with the first one…

> 1/100^2.4137
[1] 1.48799e-05
 
> 1/100^2.9468
[1] 1.277615e-06

Bottom line: the theoretical results we are currently studying will help us answer very practical questions…. to be continued.

When will my papers appear as references (if they do…) ?

Following my post on citations in academic journals, I wanted to go one step further in understanding the dynamics of citations. Here, the dataset looks like this: for each article, we have the name of the journal, the year of publication (also the title of the article, which we do not use here, as well as the authors), and, more interestingly, the number of citations received in journals (any kind of academic journal) published in 1996, 1997, …, 2011. Of course, articles published in 1999 can have their first citations only starting in 1999.

base[1000:1002,]
     Publication.Year
7188             1999
7191             1999
7195             1999
     Document.Title
7188 Sequential inspection 
7191 On equitable resource approach
7195 Method for strategic  
                                        Authors     ISSN       Journal.Title
7188                         Yao D.D., Zheng S. 0030364X Operations Research
7191                                    Luss H. 0030364X Operations Research
7195 Seshadri S., Khanna A., Harche F., Wyle R. 0030364X Operations Research
     Volume Issue X139 DEV1996 DEV1997 DEV1998 DEV1999 DEV2000 DEV2001 DEV2002
7188     47     3    0       0       0       0       0       1       0       2
7191     47     3    0       0       0       0       0       0       2       0
7195     47     3    0       0       0       0       0       0       0       0
     DEV2003 DEV2004 DEV2005 DEV2006 DEV2007 DEV2008 DEV2009 DEV2010 DEV2011
7188       0       0       0       1       0       0       0       0       0
7191       3       4       1       4       4       8       4       6       1
7195       0       1       2       2       1       0       1       0       0
     X130655 X0 X130794
7188       4  0       4
7191      37  0      37
7195       7  0       7

The first step is to aggregate the data: not looking at each article, but at all papers published in 1999 (say). Then, we look at the number of citations during the year of publication, one year after, two years after, etc. The data will appear as a triangle, since for articles published in 2010 there is only one possible citation year (2010, since I removed 2011).

VOL=rev(unique(base$Volume))
VOL=VOL[is.na(VOL)==FALSE]
TRIANGLE=matrix(NA,16,16)
k=0
for(v in VOL){
k=k+1
sb=base[base$Volume==v,9:24]
sb=sb[is.na(sb[,1])==FALSE,]
TRIANGLE[k,1:(17-k)]=apply(sb,2,sum)[k:16]}

Then, a standard idea (at least in the insurance business, for claims payment development) is to consider that the data are Poisson distributed, and that the number of citations depends on the year of publication of the article (a row effect) and on the development (how many years after publication we are looking at, i.e. a column effect). More formally, let $Y_{i,j}$ denote the number of citations, during development year $j$, of articles published in year $i$ (i.e. $j-1$ calendar years after publication). And we assume that $Y_{i,j}\sim\mathcal{P}\big(\exp[\alpha_i+\beta_j]\big)$.

TRIANGLE=TRIANGLE[-16,]
TRIANGLE=TRIANGLE[,-16]
Y=as.vector(TRIANGLE)
YEAR=rep(1996:2010,15)
DEV =rep(1:15,each=15)
baseT=data.frame(Y,YEAR,DEV)
reg=glm(Y~as.factor(YEAR)+as.factor(DEV),
data=baseT,family=poisson)

Since those are incremental values, in order to look at the citation pattern over time, we need to cumulate them along a line. Thus, we can plot

$$\widehat{\gamma}_j=\exp\big[\widehat{\alpha}_1+\widehat{\beta}_j\big]\qquad\text{and}\qquad\frac{\sum_{k=1}^{j}\widehat{\gamma}_k}{\sum_{k=1}^{15}\widehat{\gamma}_k}$$

(with the convention $\widehat{\beta}_1=0$: because we used factors, the first component has been replaced by the constant in the regression), or a normalized version to compare journals, e.g. scaled so that the total over 15 years is 100 citations.

DYN=exp(c(reg$coefficients[1],reg$coefficients[1]+
reg$coefficients[16:29]))
DYNN=cumsum(DYN)/sum(DYN)
plot(0:15,DYNN)

And this is what we get, for several academic journals,

The patterns are rather different. For instance, in Health Economics, citation is a quick process: more than 40% of the citations obtained over 15 years were obtained during the first 4 years. On the other hand, in the Journal of Finance, it is much smaller: less than 15% of the citations were obtained during the first 4 years (on average). So it means that comparing citation-based indices (namely g or h) is a difficult exercise, especially for researchers in different areas. The same g (or h) index for a young researcher, publishing either in Stochastic Processes and their Applications or in the Annals of Statistics, means that after 3 years it can be 50% higher.


Now it is possible to look into more detail, with JRSS-B below (on applied statistics). Note that here, citations come extremely slowly… so it might not be a good "strategy" (assuming that a researcher's target is simply to get, quickly, a high citation index) for a young researcher to publish in JRSS-B

On the other hand, Biometrika is much faster (both are on applied statistics, but we’ve seen here that they were not in the same cluster)

We can also observe that Annals of Probability
and Stochastic Processes and their Applications

have (almost) similar patterns (SPA might be a bit faster). Anyway, I have been surprised to see that in theoretical journals citations are extremely fast. Especially if we compare with the Journal of Finance for instance

where I thought citations were extremely fast. But my interpretation might not be correct: it might simply mean that in the Journal of Finance it is common to cite old papers (published 10 or 15 years ago), maybe more common than in stochastic processes…
Anyway, all suggestions about the interpretation are welcomed !

Talk at Laval University on natural catastrophes

On Tuesday, I will be giving a talk at the Département de finance, assurance et immobilier, at the Faculté des sciences de l'administration. The talk will be on natural catastrophes and government intervention. The slides will be uploaded soon (since we are still revising the paper I wrote with Benoît, cf. here: actually, we did not look at EU maximizers but at RDEU maximizers with a quantile-based distortion). I will write a more detailed post once the working paper is finished.

Going further on journal clustering: looking within a cluster

Following my post on academic journals, Miss Lambda asked me about journals in a very specific area, namely agricultural, environmental and energy economics. I found it interesting since I have no idea about the journals in that domain of research. So I looked at some journals from the French CNRS list (online here). Hence, I have been looking for words in the titles of 26,000 articles, published in 29 journals in Agricultural, Environmental and Energy Economics. I considered American Journal of Agricultural Economics (AJEE), Ecological Economics (EE), Journal of Environmental and Economic Management (JEEM), Climate Policy (CP), Energy Economics (EE), Energy Journal (EJ), Energy Policy (EP), Environment and Planning A, B, C, D (EP-A, B, C, D), Environmental and Resource Economics (ERE), Environmental Modeling and Assessment (EMA), European Review of Agricultural Economics (ERAE), Resource and Energy Economics (REE), Agricultural Economics (AE), AMBIO: A Journal of the Human Environment (AMBIO), Australian Journal of Agricultural and Resource Economics (AJARE), Canadian Journal of Agricultural Economics (CJAE), Climatic Change (CC), Ecological Modeling (EM), Energy Studies Review (ESR), Environment and Development Economics (EDE), Environmental Science and Policy (ESP), Environmental Values (EV), Food Policy (FP), Global Environmental Change (GEC), Journal of Agricultural Economics (JAE), Society and Natural Resources (SNR), and Water Resources Research (WR).

Now, if we look at the principal component analysis, and at the projection of the journals on the first two axes, we have

On that graph, it is hard to say anything… The only thing I can see is that, in the lower part, we have journals focusing on modeling issues.

The most common words in those journals are the following,

> colnames(MATRICE[,1:32])
[1] "climate"       "analysis"      "environmental" "change"
[5] "energy"        "model"         "policy"        "case"
[9] "water"         "economic"      "management"    "food"
[13] "study"         "development"   "market"        "agricultural"
[17] "approach"      "effects"       "carbon"        "global"
[21] "impact"        "production"    "land"          "china"
[25] "assessment"    "forest"        "impacts"       "urban"
[29] "demand"        "spatial"       "emissions"     "modeling"

If we look at their projections on the first two axes, we have (it looks like the second axis has been inverted here)

or, if we focus on the top 30,

Now, if we look at clusters, and use a hierarchical model, we obtain

So here, a dozen journals are extremely close. They focus more on agricultural issues. Note that Energy Economics and Energy Policy are in the same cluster, but quite far away from Energy Journal (which looks strange). We can also observe that Climatic Change stands alone, far away from all the other journals (actually, it is a journal where I just got a paper accepted… and since my areas of research are quite far from agricultural economics, I can understand that).