Tag Archives: Montréal

Workshop on Trustworthy AI, in Montreal

This Monday, May 27, 2024, a Workshop on Trustworthy AI will be held in Montreal.

We will be there with Agathe and Olivier, to chat with people who might be interested in our work.

Here are our posters. I will talk about discrimination and insurance,

Agathe will explain why calibration of scores is important,

and finally, Olivier will talk about building (causal) graphs for fairness

 

Fresh from the oven…

14 litres of India ink, 30 brushes, 62 soft-lead pencils, 1 hard-lead pencil, 27 erasers, 38 kilos of paper, 16 typewriter ribbons, 2 typewriters, and 67 litres of beer were needed to produce this adventure…

(Goscinny and Uderzo (1965*), Astérix et Cléopâtre)

Almost better than hot, freshly baked bagels…

the textbook Insurance, Biases, Discrimination and Fairness is now out, and just arrived today! Even though I have spent so much time re-reading it, getting nauseous, checking references and quotes, reworking graphics, re-running code, etc., it is still an immense feeling of pride to open your own book for the very first time.

Astérix et Cléopâtre is the last Astérix published in the famous Collection Pilote, as Michel Bera reminded me (professor emeritus at CNAM, attached to the Chair of statistical modelling of risk, and a living memory of French-language comics, the “B” of the famous “BDM”, Trésors de la bande dessinée). “When the Pilote collection switched to editions with only the Astérix titles in the menhir, I think that sentence disappeared”… That was the version my grandparents had, the one I would devour again, every year, when I was a kid.

2024 Optimization Days, (algorithmic) collusions in games

Tomorrow, I will attend the 2024 Optimization Days, in Montréal. I will present some work we did last Fall with Philipp Ratz and Suzie Grondin, on (algorithmic) collusions in games, “Market Pricing with Reinforcement Learning” (the paper will be available soon)

Several recent articles have attempted to gain a better understanding of algorithmic collusion (Calvano et al. (2020), Klein (2021), Banchio & Mantegazza (2022), Rocher et al. (2023)). For example, in Calvano et al. (2020), a simulation study showed that, in a simplified market environment, basic Q-learning agents can learn to collude tacitly, in order to set higher prices and increase their combined profit. Inspired by the Iterated Prisoner's Dilemma, we derive a reinforcement learning algorithm to investigate and discuss several recent results and their robustness, and explain how reinforcement learning differs from simpler strategies and which conditions lead to unfavorable outcomes from a consumer perspective. In particular, we first describe the reinforcement learning problem in a more general manner and investigate the influence of the hyperparameters. We then consider two situations separately. One, similar in spirit to Rocher et al. (2023), assumes that the market is in equilibrium and that an agent tries to exploit the pricing strategy of an incumbent agent. The second, more general, approach considers an agent continuously updating their own policy.

The starting point was Calvano et al. (2020),

For classical games, the mathematical framework is the following

for example, with the prisoner’s dilemma
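Just to fix ideas, here is a small numerical version of that payoff matrix in R (with the usual, hypothetical, payoff values): defection is a dominant strategy, so (D,D) is the unique Nash equilibrium, even though (C,C) would give both players a higher payoff.

# payoffs of the row player (the game is symmetric), hypothetical values
# actions: C = cooperate, D = defect
P = matrix(c(3, 0,
             5, 1), nrow = 2, byrow = TRUE,
           dimnames = list(c("C","D"), c("C","D")))
# best response of the row player to each action of the column player
apply(P, 2, function(payoffs) rownames(P)[which.max(payoffs)])
#   C   D
# "D" "D"   defection dominates, hence (D,D) is the unique equilibrium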

Then, consider repeated games, and possible collusion

The next step is to include randomness, with (dynamic) stochastic games

and standard equations

(I quickly describe the different concepts). Finally, we can move from there to reinforcement learning, and Q-learning
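To fix notation (this is the standard Q-learning update, nothing specific to our work): after playing action a_t in state s_t, receiving reward r_t and reaching state s_{t+1}, the agent updates Q(s_t,a_t) \leftarrow (1-\alpha)\,Q(s_t,a_t)+\alpha\big(r_t+\gamma \max_{a'}Q(s_{t+1},a')\big), where \alpha is the learning rate and \gamma the discount factor.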

The idea will be to play (or to interact) to learn that matrix

with the following interpretations, for the different parameters

Then, we will play a little bit with the framework introduced for the prisoner's dilemma, for instance to understand the importance of \beta, used in the \epsilon-greedy approach, with \epsilon_t=\exp(-\beta t)
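To give a flavour of that experiment, here is a minimal sketch (not the code of the paper, with assumed values for \alpha, \gamma, \beta and the horizon): two memoryless Q-learning agents repeatedly play the prisoner's dilemma above, with \epsilon_t=\exp(-\beta t); changing \beta changes how long the agents keep exploring before settling on a strategy.

set.seed(123)
payoff = matrix(c(3, 0,
                  5, 1), nrow = 2, byrow = TRUE)  # rows: my action (1 = C, 2 = D)
alpha = 0.1     # learning rate (assumed)
gamma = 0.9     # discount factor (assumed)
beta  = 1e-4    # exploration decay (assumed)
Tmax  = 50000
Q1 = Q2 = rep(0, 2)   # one Q-value per action, no memory of past play
pick = function(Q, eps) if(runif(1) < eps) sample(1:2, 1) else which.max(Q)
for(t in 1:Tmax){
  eps = exp(-beta * t)
  a1 = pick(Q1, eps); a2 = pick(Q2, eps)
  r1 = payoff[a1, a2]; r2 = payoff[a2, a1]
  Q1[a1] = (1 - alpha) * Q1[a1] + alpha * (r1 + gamma * max(Q1))
  Q2[a2] = (1 - alpha) * Q2[a2] + alpha * (r2 + gamma * max(Q2))
}
Q1; Q2   # learned values of cooperation and defection, for each agent

The agents used in the talk (following Calvano et al. (2020)) keep track of past actions, which makes richer, possibly collusive, strategies learnable.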

That is our first approach to the concept of collusion: agents don't need to “cooperate” to have collusion

Then, we will use the experiment of Calvano et al. (2020) to get more complex discussions…

Econometrics Seminars at Université de Montréal

This Thursday, I will present our paper Optimal Transport for Counterfactual Estimation: A Method for Causal Inference, written with Emmanuel Flachaire and Ewen Gallic, at the CIREQ Séminaire Marcel-Dagenais en Économétrie, at Université de Montréal.

Many problems ask a question that can be formulated as a causal question: “what would have happened if…?” For example, “would the person have had surgery if he or she had been Black?” To address this kind of question, calculating an average treatment effect (ATE) is often uninformative, because one would like to know how much impact a variable (such as skin color) has on a specific individual, characterized by certain covariates. Trying to calculate a conditional ATE (CATE) seems more appropriate. In causal inference, the propensity score approach assumes that the treatment is influenced by x, a collection of covariates. Here, we take the dual view: doing an intervention, or changing the treatment (even just hypothetically, in a thought experiment, for example by asking what would have happened if a person had been Black), can have an impact on the values of x. We will see here that optimal transport allows us to change certain characteristics that are influenced by the variable whose effect we are trying to quantify. We propose a mutatis mutandis version of the CATE, which, in dimension one, simply means that the CATE must be computed relative to a probability level, associated with the proportion of x (a single covariate) in the control population, and by looking for the equivalent quantile in the test population. In higher dimensions, it is necessary to go through transport, and an application is proposed on the impact of some variables on the probability of having an unnatural birth (the fact that the mother smokes, or that the mother is Black).
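In dimension one, this quantile-matching idea is easy to sketch in R (purely simulated data and hypothetical names, not the application of the paper): an individual at probability level u in the control group is matched with the individual at the same level u in the treated group, i.e. x^\star = F_1^{-1}(F_0(x)).

set.seed(1)
x_control = rnorm(1000, mean = 60, sd = 10)   # a covariate in the control group
x_treated = rnorm(1000, mean = 70, sd = 12)   # the same covariate in the treated group
F0 = ecdf(x_control)                          # empirical c.d.f. in the control group
counterfactual = function(x) as.numeric(quantile(x_treated, probs = F0(x)))
counterfactual(65)   # the "mutatis mutandis" counterpart in the treated population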

Slides are available online. I will try to mention additional papers published this year, such as Fairness in Multi-Task Learning via Wasserstein Barycenters, Mitigating Discrimination in Insurance with Wasserstein Barycenters or more recently A Sequentially Fair Mechanism for Multiple Sensitive Attributes.

Snow in Montréal (Canada)

Winter started a bit more than one month ago… but we have already experienced many snow storms… there is still a lot of snow in gardens and in the streets,

I was wondering if it was that unusual, but apparently not. Compared with last year (for the first months of winter, until the end of January), it is +50%, but it is comparable with previous years.

Yes, with a simple loop, we can easily extract the data from the official website https://climat.meteo.gc.ca/ (but not too far back: even 2015 contains a lot of missing observations). For this month, we use

url = "https://climat.meteo.gc.ca/climate_data/daily_data_f.html?StationID=51157&timeframe=2&StartYear=1840&EndYear=2023&Day=30&Year=2023&Month=1#"
library(XML)
library(stringr)
download.file(url,destfile = "M.html")
tables=readHTMLTable("M.html")
k = which(tables[[1]]$`JOUR `=="Somme")
neige = tables[[1]]$`Neige tot. Definitioncm `[k]
x = as.numeric(sub(",", ".", strsplit(neige, "LegendCarer")[[1]][1], fixed = TRUE))

and then we loop, and store the number we look for in a data frame (yes, we have to convert “50,8LegendCarer^” into the appropriate numerical value, which would be here 50.8)

D = data.frame(annee = c(2023,rep(2022:2015,each=12),rep(2014,3)), mois= c(1,rep(12:1,8),12,11,10), lab = neige, snow = x)
for(i in 2:nrow(D)){
    y = D$annee[i]
    m = D$mois[i]
    url = paste("https://climat.meteo.gc.ca/climate_data/daily_data_f.html?StationID=51157&timeframe=2&StartYear=1840&EndYear=2023&Day=30&Year=",y,"&Month=",m,"#",sep="")
  download.file(url,destfile = "M.html")
  tables=readHTMLTable("M.html")
  k = which(tables[[1]]$`JOUR `=="Somme")
  neige = tables[[1]]$`Neige tot. Definitioncm `[k]
  x = as.numeric(sub(",", ".", strsplit(neige, "LegendCarer")[[1]][1], fixed = TRUE))
  D[i,3] = neige
  D[i,4] = x
}

Here are the most recent months

> head(D)
  annee mois              lab snow
1  2023    1 50,8LegendCarer^ 50.8
2  2022   12             63,0 63.0
3  2022   11             14,6 14.6
4  2022   10              0,0  0.0
5  2022    9              0,0  0.0
6  2022    8              0,0  0.0

Of course, we would need some code to plot but, here, I mainly wanted to keep track of the code used to extract the meteorological data…
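For completeness, here is a minimal sketch of a plot one could draw from that data frame (not the graph shown above): monthly snowfall, in chronological order.

D$date = as.Date(paste(D$annee, D$mois, 1, sep = "-"))   # first day of each month
D = D[order(D$date), ]                                    # chronological order
plot(D$date, D$snow, type = "h", lwd = 3,
     xlab = "", ylab = "monthly snowfall (cm)")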

 

Montréal AI Symposium 2022

In about ten days (Saturday afternoon), I will be presenting a poster on fairness, discrimination and insurance at the Montréal AI Symposium, based on our joint paper The Fairness of Machine Learning in Insurance: New Rags for an Old Man?, written with Laurence Barry. Since the paper was quite literary, I used material from the document Insurance: Discrimination, Biases & Fairness to get a more visual poster. Additional information will come up in the discussions…

This is a poster used at the conference

Radial Graphs for Time Series

On How to: Weather Radials, there was a nice visualisation of temperatures. Since I am too old-fashioned for ggplot2, I wanted to reproduce a similar graph with the old plot style.

Assume that daily temperature is in a vector X (e.g. temperature in Montréal, QC, in 2009). To get a radial plot, use

> n=length(X)
> theta=seq(0,1-1/n,length=n)*2*pi
> r=30+X
> plot(r*cos(pi/2-theta),r*sin(pi/2-theta),type="l",xlab="",ylab="",axes=FALSE)
> for(t in 1:n){
+   if(X[t]>0) CL=rgb(0,0,1,.4)
+   if(X[t]<0) CL=rgb(1,0,0,.4)
+   if(X[t]==0) CL="white"
+   segments((30+X[t])*cos(pi/2-theta[t]),(30+X[t])*sin(pi/2-theta[t]),30*cos(pi/2-theta[t]),30*sin(pi/2-theta[t]),col=CL)
+ }
> for(r in 10*seq(0,6)) lines(r*cos(pi/2-theta),r*sin(pi/2-theta),type="l",col="light blue")

Crowded Cities, Paris, Hong Kong and Montréal

Over the past years, I have been living in different cities, each of them completely different from the others. I have been living in Paris, which is a big city in Europe, with a large suburban area too (la banlieue).

Then, I lived in Hong Kong, which is a larger city, in Asia.

It was crowded. At least, that was the feeling I had while I was living there. And more recently, I have been living in Montréal, in North America. Montréal is a large city, or, to be more specific, an island,

The three cities are quite different. Paris: 2.211 million inhabitants, and 105.4 km² (a density of 21,057 inhabitants per km²). Montréal: 1.621 million inhabitants, and about three times the area, 365.1 km² (a density of 4,441 inhabitants per km²). Hong Kong: 7.234 million inhabitants, and again about three times the area, 1,104 km² (a density of 6,553 inhabitants per km²). In Hong Kong, there are several hills where it is not possible to build anything: on a large part of the island, the density is zero.


A random walk? What else?

Consider the following time series,

What does it look like? I know, this is a stupid game, but I keep using it in my time series courses. It does look like a random walk, doesn't it? If we use the Phillips-Perron test, yes, it does,

> PP.test(x)

	Phillips-Perron Unit Root Test

data:  x 
Dickey-Fuller = -2.2421, Truncation lag parameter = 6, p-value = 0.4758

If we look at the autocorrelation function, we do observe some persistence,

> acf(x,100)

Perhaps this persistence can be related to long range dependence, or to some fractional random walk. A natural idea could be to estimate the Hurst parameter, using for instance the estimator of Beran (1992) – based on Whittle (1956) – where we assume that the autocorrelation function satisfies \rho(h)\sim c\, h^{2H-2} as h\to\infty, for some H\in(1/2,1) (the so-called Hurst index). But here, we start to observe unexpected outputs,

> library(longmemo)
> (d  <- WhittleEst(x))
'WhittleEst' Whittle estimator for  fractional Gaussian noise ('fGn');	 call:
WhittleEst(x = x)
	  time series of length  n = 759.

H = 0.9899335
coefficients 'eta' =
    Estimate Std. Error z value   Pr(>|z|)
H 0.98993350 0.02468323 40.1055 < 2.22e-16
 <==> d := H - 1/2 = 0.49 (0.025)

 $ vcov       : num [1, 1] 0.000609
  ..- attr(*, "dimnames")=List of 2
  .. ..$ : chr "H"
  .. ..$ : chr "H"
 $ periodogr.x: num [1:379] 1479.3 1077.3 371.7 287.2 51.2 ...
 $ spec       : num [1:379] 62.5 31.7 21.3 16.1 12.9 ...

or more precisely some unexpected values for the Hurst parameter, which should be in (0,1),

> confint(d)
      2.5 %   97.5 %
H 0.9415553 1.038312

Oops, perhaps we did miss something, because it looks like there is extremely strong persistence in our time series,

> plot(d)

It is probably time to ask where I found that series… To be honest, I borrowed it from a great Canadian website http://climate.weatheroffice.gc.ca/climateData/. For instance, if you want the temperature we experienced a few days ago, you can use

> Y=2013
> M=1
> D=25
> url=paste("http://climate.weatheroffice.gc.ca/climateData/hourlydata_e.html?timeframe=1&Prov=QC&StationID=5415&hlyRange=1953-01-01|2013-02-01&Year=",Y,"&Month=",M,"&Day=",D,sep="")
> page=scan(url,what="character")

Yes, that series is the temperature we experienced in Montréal last month (an hourly time series). On the graph below, you can actually compare it with the temperatures experienced in January over the past 60 years,

So it is not that surprising to see long range dependence models appearing (I wrote a paper precisely on that topic a few years ago). What I found puzzling is that the persistence is large, extremely large. And the problem is that I do not see how we can explain the ‘jumps’ that we observe in that series. Consider for instance the behavior of the series while I was in Europe, before January 20th: within 3 days, the temperature went down, from 0°C to -20°C, up from -20°C to 0°C, and then down again, from 0°C to -20°C (a nice И if we use Cyrillic letters). Or how can we explain the oscillating behavior observed the week after, where the temperature went up, from -25°C to (almost) +10°C, in a few days? Within 10 days, we also observed two ‘jumps’ (or ‘crashes‘ if we want to use the terminology of financial time series) with a decrease of 25 degrees in less than 24 hours! Obviously, we need to find other classes of models to replicate the kind of behavior we observe in temperature series…