Modeling Dynamic Incentives: Application to Basketball

I will give a talk on “Modeling Dynamic Incentives: Application to Basketball” at GERAD (Groupe d’études et de recherche en analyse des décisions) on June 10th. This is joint work with Nathalie Colombier and Romuald Elie.

An important aspect of the strategy of most organizations is the provision of incentives to employees to meet the organization’s objectives. Typically this implies tying pay to performance (see Prendergast, 1999). In order to reward employees for their effort, firms spend considerable resources on performance evaluations. In many cases, evaluation consists of comparing actual performance to a pre-defined individual target. Another frequently used format is relative performance evaluation. Relative performance evaluation may motivate employees to work harder. But it may also be demoralizing and create an excessively competitive workplace, which may hinder overall performance; see Lazear (1989). Determining the overall impact of relative performance evaluation is therefore crucial for companies. Economic research on relative performance evaluation has mainly focused on the comparison of final performances between competitors, as in tournament theory, and on quantitative and subjective performance ratings (Lazear and Gibbs, 2009). In contrast, what happens during a competition, and the impact of feedback frequency on effort, have so far received little attention.

Following Berger and Pope (2011), we decided to use a basketball application to get a better understanding of the role of feedback information. Sports datasets allow us to observe the score and team behavior continuously (during a game, but also during the season), which can be used as a proxy for effort. Berger and Pope (2011) asked “can losing lead to winning?”, looking at the impact of the halftime score difference on the winning probability in NCAA (college) and NBA (pro) games. More precisely, they studied whether a team losing at halftime is more likely to win than expected, using a logit model. They find that, usually, the higher the score difference, the more likely a team is to win. But if the halftime score difference is around 0 they observe a discontinuity: losing by a small margin (e.g. being down by 1 point) can increase effort and lead to winning the game. In this paper we try to answer the question “when does losing lead to winning?”.
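To make that kind of specification concrete, here is a small schematic sketch (my own illustration on simulated data, not Berger and Pope’s actual model or dataset): a logit of the winning probability on the halftime score difference, with an indicator for being slightly behind that captures a possible discontinuity around 0.

set.seed(1)
n=5000
half=round(rnorm(n,0,8))                     # hypothetical halftime score differences
p=plogis(.15*half+.5*(half<0 & half>=-2))    # small 'extra effort' boost when barely losing
win=rbinom(n,1,p)
reg=glm(win~half+I(half<0 & half>=-2),family=binomial)
summary(reg)                                 # the indicator picks up the jump around 0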

Somewhere else, part 131

Some writings worth reading

Earlier this month a Japanese researcher was found guilty of scientific misconduct and two groundbreaking studies published in Nature were retracted. This is a symptom of a broken system. Ask most scientists why they pursued a career in research and the majority will tell you that they had an innate passion for discovery. However, the current ‘publish or perish’ culture in academia is arguably impeding the discovery process. Even Nobel laureates have spoken out about the negative impact of this culture, including Peter Higgs, who told the Guardian that even he would not be productive enough to compete in the current academic system. To be a successful academic you must publish research in peer-reviewed academic journals, and preferably ‘high impact’ journals such as Science and Nature. While the intention behind peer review – to maintain standards of quality – is critically important, the implementation of this process may actually be contributing to a systemic flaw. [to be continued…]

and a bit of reading in French,

Did I miss something?

Somewhere else, part 130

Some writings worth reading

…this is because student evaluations are useless. Ostensibly, SETs give us valuable feedback on our teaching effectiveness, factor importantly into our career trajectories, and provide accountability to the institution that employs us. None of this, however, is true. First, evaluations promote sucking up to customers—I’m sorry, students—often at the expense of teaching effectiveness. A recent comprehensive study, for example, showed that professors get good evaluations by teaching to the test and being entertaining. Student learning hardly factors in, because (surprise) students are often poor judges of what will help them learn. (They are, instead, excellent judges of how to get an easy A.) Asking students to evaluate their professors anonymously is like Trader Joe’s soliciting Yelp reviews from a shoplifter. Indeed, some of the worst evaluations I ever got were for hands-down the best teaching I’ve ever done—which I measured by the revolutionary metric of “the students were way better at German walking out than they were walking in.” Alas, this took work, and some of the Kinder attempted to stage a mutiny on evaluation day. Little did they know that a “too much work” dig is the #humblebrag of the academy—and, indeed, anything less on evals is seen as pandering at best, and out-and-out grade-bribery at worst. [to be continued…]

and a bit of reading in French,

Did I miss something?

There is no “Too Big” Data, is there?

A few years ago, a former classmate came back to me with a simple problem. He was working for some insurance company (and still is, don’t worry, chatting with me is not yet a reason for dismissal), and his problem was that their dataset was too large to run (standard) code to get a regression, and some predictions. My answer was to use sub-sampling techniques, and I still believe that this might be a good idea (actually, I wrote a long post on that issue, entitled Too large datasets for regression? What about subsampling?). But I wanted to go further, since I had not discussed predictions obtained with sub-sampling techniques.

So, consider here a logistic regression for a binary response, based on some covariates. We have k explanatory variables (k will be large, but not too large) and n observations (with n much larger than k). Here we have a (potentially) big matrix product, i.e. computations involving a large n × k design matrix. In the simulated example below, the design matrix is 100,000 × 101, with n = 100,000 individual observations and k = 100 possible variables (plus the intercept). Actually, in my model, only 2 variables are used in the true model. Assume further that the explanatory variables are – potentially – correlated.

n=100000                        # number of observations
library(mnormt)
k=50                            # 50 'U' variables and 50 'X' variables
r=.2                            # pairwise correlation
Sig=matrix(r,k,k)
diag(Sig)=1
X=rmnorm(n,varcov=Sig)          # correlated Gaussian covariates
U=pnorm(rmnorm(n,varcov=Sig))   # correlated Uniform covariates
p=exp(-U[,1]-X[,1]-1)/(1+exp(-U[,1]-X[,1]-1))   # only U1 and X1 matter
Y=rbinom(n,size=1,p)
df=data.frame(Y,U,X)
names(df)=c("Y",paste("U",1:50,sep=""),paste("X",1:50,sep=""))
reg=glm(Y~.,data=df,family="binomial")          # regression on the full dataset

In some sense, it is not too big, since we can run a regression on that dataset with a simple laptop (even if it can still be seen as a large dataset, in the sense discussed in http://businessweek.com/…). But let us consider an alternative strategy, so that we can get some predictions – or some model – in case we cannot run a regression. Two strategies will be compared,

  • generate 100 datasets with n/10 = 10,000 observations, by sub-sampling
  • generate 100 datasets with n/100 = 1,000 observations, by sub-sampling.

On each dataset, we can now run a regression, and compare the estimates of the coefficients with the “true” regression (on the whole dataset, since here, we can still run it). Then, since out of the 100 explanatory variables, only 2 were actually used to generate the output, we should probably remove unnecessary variables in our model. So, some stepwise procedures were used.

L1=L2=L1s=L2s=list()
library(MASS)                  # for stepAIC
ns1=n/10                       # sub-sample size 10,000
ns2=n/100                      # sub-sample size 1,000
for(s in 1:100){
i=sample(1:n,size=ns1,replace=TRUE)
reg_sub=glm(Y~.,data=df[i,],family="binomial")
L1[[s]]=reg_sub
L1s[[s]]=stepAIC(reg_sub)      # stepwise variable selection
i=sample(1:n,size=ns2,replace=TRUE)
reg0=glm(Y~.,data=df[i,],family="binomial")
L2[[s]]=reg0                   # (reg0, not reg_sub, for the smaller sub-samples)
L2s[[s]]=stepAIC(reg0)
}

For instance, if we consider the very first coefficient which should appear in the regression (let us forget about the intercept), or the second coefficient (which was not considered to generate the dataset), we get

VC=c(-1,-1,rep(0,49),-1,rep(0,49))   # 'true' coefficients: intercept, U1..U50, X1..X50
coef=function(k){
C1=unlist(lapply(L1,function(x) coefficients(x)[k]))   # estimates on the n/10 sub-samples
C2=unlist(lapply(L2,function(x) coefficients(x)[k]))   # estimates on the n/100 sub-samples
m=summary(reg)$coefficients          # estimate and standard error on the complete dataset
u=seq(quantile(C2,.2),quantile(C2,.8),length=501)
v=dnorm(u,m[k,1],m[k,2])             # Gaussian density of the full-data estimator
plot(u,v,col="white",xlab="",ylab="",axes=FALSE)
axis(1)
polygon(c(u,rev(u)),c(v,rep(0,length(u))),col="grey",border=NA)
abline(v=VC[k],lty=2)                # true value
boxplot(C1,horizontal=TRUE,add=TRUE,at=max(v)/3)
boxplot(C2,horizontal=TRUE,add=TRUE,at=max(v)/3*2)
}

coef(2)

where the grey density is the Gaussian density of the estimator obtained from the large (and complete) dataset, and the boxplots are the estimates obtained on the sub-samples (without the stepwise procedure, just to make sure that this variable is kept).

For coefficients associated with variables not used to generate the dataset, we get graphs like the following.

So, clearly, the smaller the dataset, the larger the dispersion of the estimates. But so far, nothing new. In my previous post – Too large datasets for regression? What about subsampling? – my point was to discuss computational times, and a possible optimal size of sub-datasets. Now, what about the impact of sub-sampling on predictions? Here, we fit a model on a small sample, but we can get a prediction on the whole dataset. In order to describe the goodness of fit of our regression model, let us plot ROC curves. More specifically, three kinds of lines will be plotted,

  • the ROC curve for the predicted probabilities obtained with the model fitted on the complete dataset [red]
  • the ROC curves for the predicted probabilities obtained with the models fitted on the 100 sub-samples [light blue]
  • the ROC curve for the predicted probabilities obtained by averaging the previous estimates [blue]
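The ROC.curve function used below is a small helper that is not defined in this post. Here is a minimal sketch of what it is assumed to do (my reconstruction, not necessarily the original code): it returns a two-row matrix with the false positive rates and the true positive rates computed over a grid of thresholds.

ROC.curve=function(S,Y,seuil=seq(0,1,by=.01)){
  FP=TP=rep(NA,length(seuil))
  for(i in 1:length(seuil)){
    Yhat=(S>seuil[i])*1                      # classify with threshold seuil[i]
    FP[i]=sum((Yhat==1)&(Y==0))/sum(Y==0)    # false positive rate
    TP[i]=sum((Yhat==1)&(Y==1))/sum(Y==1)    # true positive rate
  }
  rbind(FP,TP)
}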

S=predict(reg,type="response")   # predictions from the full-data model
Y=df$Y
M.ROC=ROC.curve(S,Y)
plot(M.ROC[1,],M.ROC[2,],type="s",col="red")

Z=df$Y*0                          # will accumulate the 100 sub-sample predictions
for(si in 1:100){
S=predict(L1s[[si]],type="response",newdata=df)   # predictions on the whole dataset
Z=Z+S
Y=df$Y
M.ROC=ROC.curve(S,Y)
lines(M.ROC[1,],M.ROC[2,],type="s",col="light blue")
}

S=Z/100                           # average of the 100 sub-sample predictions
Y=df$Y
M.ROC=ROC.curve(S,Y)
lines(M.ROC[1,],M.ROC[2,],type="s",col="blue",lwd=2)

If we consider sub-samples of size 10,000 (n/10), we get the following, and when we consider sub-samples of size 1,000 (n/100), we get the graphs below, without the stepwise procedure (most variables have a small coefficient, not significant) and after the stepwise procedure. Clearly – and that should not be a surprise – looking at predictions when the model was fitted on 1% of the dataset is not great (ROC curves are substantially below the red ROC curve). But the interesting point is that averaging yields great results. In terms of ROC curves, we get the same thing when

  • running one regression on our 100,000 × 101 matrix
  • averaging predictions after running 100 regressions on some 1,000 × 101 matrices

Except that the first one might not be possible to run, if the dataset were larger. And I have to admit that with the stepwise procedure, with 100 variables (of which 98 should – theoretically – be removed), it took some time! But still. I have the feeling that sub-sampling is extremely promising in the context of too large datasets.
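To put a rough number on that visual similarity, here is a small additional sketch (not in the original post) comparing the areas under the two ROC curves, reusing the ROC.curve helper sketched above and the objects reg, Z and df from the previous code; the trapezoidal auc function is just an illustration.

auc=function(S,Y){
  R=ROC.curve(S,Y)
  o=order(R[1,])                                          # sort by false positive rate
  sum(diff(R[1,o])*(head(R[2,o],-1)+tail(R[2,o],-1))/2)   # trapezoidal rule
}
auc(predict(reg,type="response"),df$Y)   # one regression on the full dataset
auc(Z/100,df$Y)                          # average of the 100 sub-sample predictions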

Modèles de Prévision (Forecasting Models)

This summer session, I am again teaching ACT6420, the forecasting models course. The syllabus is now online (as usual, classrooms have not yet been definitively assigned, but they will be available on the UQAM website). The slides for the first part of the course are also online (a refresher on statistics, and an introduction to regression models).

Exercises and computer code will be posted on /courses/modeles-de-prevision.

During the course, I will try to illustrate with as many examples on real data as possible, using R code. A few demonstration sessions will take place in the computer lab, to learn the basics of R. To go further, I recommend

  • “R pour les débutants” by Emmanuel Paradis (PDF)
  • “Brise Glace-R” by Andrew Robinson and Arnaud Schloesing (PDF)
  • “Introduction à la programmation en R” by Vincent Goulet (PDF)

as well as a few online tutorials, for instance in an older post, but also on https://youtube.com/.

Examen, Séries Chronologiques (Time Series Exam)

After the student presentations of the last sessions, the exam for MAT8181, the time series course, took place this morning (and should be over in a few minutes, with a bit of extra time for some students, given the subway breakdown we had the pleasure of going through). The exam questions are online, and I have also written up some elements of a solution. In case of (minor or major) disagreement with my answers, please let me know quickly!

Somewhere else, part 129

Some writings worth reading, here and there (mainly there)

There is a fundamental difference between science communication and science journalism. At the science communication end of the spectrum sit the stories that show people how exciting science can be, the discovery of a wonder material, perhaps, or a new subatomic particle. Explaining the significance of sightings of the Higgs boson or of gravitational waves from the early Universe takes real skill. Science journalism’s job is to tell the stories that explore the murky underbelly of science, like the selling of bogus stem-cell cures to vulnerable patients. It is science journalism that will expose the rushed policy-making, the undisclosed profiteering, the conflicts of interest and the vested interests, the bad experiments, or the out-and-out frauds. [to be continued…]

Nowadays, it seems as anomalous to have knowledge workers serve as professional leaders as it once did to have scientists in the boardroom. It was previously thought that leadership is less necessary in knowledge-intensive organizations, where experts were assumed to be superior because they were motivated by intellectual pleasure rather than such extrinsic motivations as profit growth and cost targets. This difference in attitude is evident in many areas of society, not least in hospitals in the United States and the United Kingdom, where knowledge-intensive medical practitioners operate separately from managers. Hospitals used to be run by doctors; today, only 5% of US hospitals’ CEOs are medical doctors, and even fewer doctors run UK hospitals. “Medicine should be left to the doctors,” according to a common refrain, “and organizational leadership should be left to professional managers.” But this is a mistake. Research shows that higher-performing US hospitals are likely to be led by doctors with outstanding research reputations, not by management professionals. The evidence also suggests that hospitals perform better, and have lower death rates, when more of their managers up to board level are clinically trained. We see similar findings in other fields. My research shows that the world’s best universities, for example, are likely to be led by exceptional scholars whose performance continues to improve over time. Departmental-level analysis supports this. A university economics department, for example, tends to perform better the more widely its head’s own research is cited. With experts in charge, it may not always look like there is an effective reporting structure in place. But, as the academic saying goes: just because you cannot herd cats, does not mean there is not a feline hierarchy. As with cats, academics operate a “relative hierarchy” in which the person in charge changes, depending on the setting. [to be continued…]

In situations where decision-making is hard, a possible procedural preference arises: the decision-maker may wish for the decision to be taken away from herself. Her cognitive or emotional cost of deciding may outweigh the benefits that arise from making the optimal choice. For example, the decision-maker may prefer not to make a choice without having sufficient time and energy to think it through. Or, she may not feel entitled to make it. Or, she may anticipate a possible disappointment about her choice that can arise after a subsequent resolution of uncertainty. Waiving some or all of the decision right may seem desirable in such circumstances even though it typically increases the chance of a suboptimal outcome. The difficulty of such preferences is that they are non-consequentialist and are therefore excluded by most models of choice such as expected utility. In particular, flipping a coin between different choice options contradicts expected utility theory except if the decision-maker is exactly indifferent between these options. Yet people regularly do flip coins or revert to other random decision aids. More general than expected utility theory, two closely related axioms of choice, stochastic dominance and betweenness, postulate that whenever the decision-maker has a strict preference for one of the options, she makes the choice herself rather than delegate it to randomness. [to be continued...]

and a bit of reading in French

Did I miss something?

Somewhere else, part 128

Some writings worth reading

Once advertisers
co-opt an Internet trend,
you know it’s over.

and a bit of reading in French

Did I miss something?

Copula Density Estimation

The joint paper, written with Gery Geenens and Davy Paindaveine, entitled “Probit transformation for nonparametric kernel estimation of the copula density”, is now online on http://arxiv.org/abs/1404.4414.

“Copula modelling has become ubiquitous in modern statistics. Here, the problem of nonparametrically estimating a copula density is addressed. Arguably the most popular nonparametric density estimator, the kernel estimator is not suitable for the unit-square-supported copula densities, mainly because it is heavily affected by boundary bias issues. In addition, most common copulas admit unbounded densities, and kernel methods are not consistent in that case. In this paper, a kernel-type copula density estimator is proposed. It is based on the idea of transforming the uniform marginals of the copula density into normal distributions via the probit function, estimating the density in the transformed domain, which can be accomplished without boundary problems, and obtaining an estimate of the copula density through back-transformation. Although natural, a raw application of this procedure was, however, seen not to perform very well in the earlier literature. Here, it is shown that, if combined with local likelihood density estimation methods, the idea yields very good and easy to implement estimators, fixing boundary issues in a natural way and able to cope with unbounded copula densities. The asymptotic properties of the suggested estimators are derived, and a practical way of selecting the crucially important smoothing parameters is devised. Finally, extensive simulation studies and a real data analysis evidence their excellent performance compared to their main competitors.”
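To make the idea concrete, below is a rough illustration of the probit-transformation approach (my own sketch, not the paper’s code): it uses a plain bivariate kernel estimator, MASS::kde2d, in the transformed domain instead of the local likelihood estimators studied in the paper, and it back-transforms the estimate to the unit square.

library(MASS)        # kde2d, a bivariate kernel density estimator
library(mnormt)      # to simulate an example with Gaussian dependence
probit_copula_density=function(u,v,m=51){
  s=qnorm(u); t=qnorm(v)                   # probit transform of the pseudo-observations
  f=kde2d(s,t,n=m,lims=c(-3,3,-3,3))       # kernel estimate in the transformed domain
  ch=f$z/outer(dnorm(f$x),dnorm(f$y))      # back-transformation of the density
  list(u=pnorm(f$x),v=pnorm(f$y),c=ch)     # grid on the unit square, and copula density
}
n=2000
xy=rmnorm(n,varcov=matrix(c(1,.7,.7,1),2,2))
u=rank(xy[,1])/(n+1); v=rank(xy[,2])/(n+1) # pseudo-observations (uniform margins)
est=probit_copula_density(u,v)
contour(est$u,est$v,est$c)                 # level curves of the estimated copula density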

Classification Trees

I will run a training session on Monday the 28th, from 2:00 pm to 4:00 pm, in room N-6320 at UQAM, on an introduction to classification trees. This session is organized as part of the seminars on quantitative and qualitative analysis methods, which have been held regularly for a little over a month, and which are run by the Collectif pour le développement et les applications en mesure et évaluation (Cdame). The slides are available in pdf (there are a few animations, which only work with Acrobat).

The dataset used throughout the talk is the following
> MYOCARDE=read.table("http://freakonometrics.free.fr/saporta.csv",head=TRUE,sep=";")
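As a quick illustration (my addition, not part of the slides), a classification tree can be fitted on this dataset with rpart; I assume here that the response variable is the last column of the table.

library(rpart)
f=as.formula(paste(names(MYOCARDE)[ncol(MYOCARDE)],"~ ."))   # last column as the response
arbre=rpart(f,data=MYOCARDE,method="class")                   # fit the classification tree
plot(arbre); text(arbre)                                      # draw it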

Somewhere else, part 127

Some writings worth reading

and a bit of reading in French,

with a thought for our friends in Valparaíso!

Readings, #MyTopTenBooks

Following in the footsteps of several other bloggers (I am thinking of Martin Grandjean, who introduced me to the initiative) and several Twitter users, I felt like joining the discussion around the #MyTopTenBooks hashtag. Seeing several photos featuring books I had enjoyed made me want to discover several more. This time it is my turn, with a list of 10 books (which was difficult to put together…), hopefully to spark a few cravings of your own.

  • Les Classiques de Camille, by Camille le Fol. To take a moment to explain this choice, I should perhaps say that I love cooking. Since my son was born, it has been my main contribution at home: I cook every meal, or almost. And I love it! And of course I am in charge of the market (I could have mentioned two books I loved, which helped me discover several producers and stallholders, at the Lices market in Rennes and at the Jean Talon market in Montréal – yes, we have always tried to live close to a market). There are several cookbooks I love, but Camille’s is probably the one I have used the most: it is simple and good. It is where I found my recipe for blanquette de veau.
  • Pinocchio, by Winshluss. I am not quite sure why I am including this book, actually. When they were little, the kids loved Wizz et Buzz; it is probably one of the comic books they read the most. And Pinocchio was a slap in the face (for me at least). This book is incredible! I waited a long time before leaving it in the children’s hands, but they ended up finding it on their own, and loving it! Along with Lapinot et les carottes de Patagonie, it is one of the comic books I reread most often, always discovering different passages that surprise me (or unsettle me, in the case of Pinocchio). I think I put it on this list because this book truly left its mark on me!
  • Theory of Decision under Uncertainty by Itzhak Gilboa. The first work-related book on my list. This book is incredible. It is the book I would have dreamed of writing… it is crystal clear, and answers all the questions I had long been asking myself on the subject. It is also the course I would have dreamed of teaching! This book can speak to everyone, and that is probably its great strength!
  • Notes by Boulet. I put Volume 7 here because I had to pick one. Yes, I have said it before on this blog, I love Boulet and his blog. It is creative, and funny.
  • The Barnhart Concise Dictionary of Etymology. I have a passion for dictionaries. More specifically, for etymological dictionaries. I am simply fascinated. And this dictionary is the only one I own in English. It is a bottomless well, once I start reading it…
  • Une saison de machettes (and more generally, the Récits des Marais Rwandais trilogy) by Jean Hatzfeld. This trilogy is horrifying. At this time of the 20th anniversary of the genocide in Rwanda, it is the book to read (but you do not come out of it unscathed). I bought these books after a short stay in Abidjan, for a course, when I was hosted by a member of the Embassy who had been posted in Rwanda (at the French Embassy) during the massacres… It remains a powerful encounter that deeply marked me!
  • Fantôme by Jo Nesbø. Actually, I could list every one of Jo Nesbø’s books, without exception (including the children’s books). One of the authors I have been following regularly for a few years.
  • A Study in Scarlet by Arthur Conan Doyle. A childhood memory. As a kid, I did not like reading. I think the only books I would agree to read were the Sherlock Holmes books. I did of course follow Hayao Miyazaki’s animated series (which had little to do with the books) and the series with Jeremy Brett, which aired on Sunday evenings on FR3. It was my first real encounter with reading. Now, to be completely honest, part of the appeal was also that I grew up with an uncommon first name (at the time), and sharing the same first name created a kind of bond between us. There was even an Arthur Charpentier in this book! But now that I can read English, I recommend reading the book in the original version.
  • Exit Music by Ian Rankin. Yes, we stay in Edinburgh with another crime writer I love! Here again, I could put every John Rebus book on the list. And here again, I recommend reading them in the original language, which is rich (and sometimes hard to follow for a non-native English speaker, but no matter, the effort is well worth it).
  • Modelling Extremal Events: for Insurance and Finance by Paul Embrechts, Claudia Klüppelberg, and Thomas Mikosch. The first yellow book I bought, while still a student, in 1998. Alexander McNeil had given a fascinating course based on the book. It is not the most recent book on extremes (I am thinking of the prodigious Statistics of Extremes, by Jan and Johan). But it is the book I reread every time I have to teach a course on the subject. I prefer some of the proofs in Sidney Resnick’s book, and Jan and Johan’s book for the inferential aspects, but the structure of this one is crystal clear, and I love the remarks and little asides, which are incredibly illuminating.

Borrowing a little from the end of Martin Grandjean’s post, I invite you to do the same and share your own book selections (which I will try to keep following via #MyTopTenBooks). And if my titles suggest other books I might enjoy, do not hesitate to let me know in the comments!