Diary of an addict

After four days offline (at least off my blog; see the previous post for more details), I have to face the truth: I am a computer addict. For sure. Here is the diary of the last four days, which were supposed to be spent without touching my computer, at work and at home. I tried to keep track of every time I had to go on my computer. At home, that was fine (I decided a few weeks ago that I should not check my email at home, in the evening or in the morning, so unless I want to read the news, check what's going on on Twitter, or write a post on my blog, I do not usually spend much time on our computer). But at the office, that's another story…

  • Tuesday, April 2nd

6:17 Wake up, first day of the challenge.

6:17 Time to check whether the code used for data scraping (on some websites) ran properly… That's not really using my computer; it was just fixing problems for future research. One minute, just checking. Well, there was a problem in my code. I had to fix it (it took much longer than I thought) and then run it again (in order to extract some figures from almost 200,000 internet pages). One piece of code has been scraping a website for almost a week (and it looks like I have only one third of the data), and the other one I have to run every day, to back up some daily figures from several websites.

8:32 While going to get a coffee, JF shows me the website of the Antarctica Journal of Mathematics (where he was kindly invited to submit an article, and also to apply if he is willing to join the board), and we go through Rob Hyndman's warning, on his blog, about junk journals. No offense to the web designer hired by this journal, but we had a lot of fun on that website… so… nineties.

8:47 I have to go online, on my email account, to download a paper I have to review; I got a reminder from an editor this weekend. Print the article. While looking for the email, I notice that I received, during the night, an invitation to give my opinion on another article, for another journal. Just go briefly through the paper. Even if my (personal) quota for 2013 is already exceeded, I decide to accept and write a referee report. Also check something with a co-author. Quickly. Also read four comments on the blog submitted during the night, and approve all of them. Damned! One of them mentions a preprint related to something discussed in a post. Try not to read the preprint. Go offline. For good. Turn on the music. Sacred music, namely In Seculum Longum.

9:39 After almost 30 minutes reading articles, I give up. I open LaTeX to type corrections for a couple of chapters I have to write (and send as soon as possible; the deadline was last week). So far, no internet. I chew some gum; I feel nervous…

9:44 Go online to check how to use \enquote{} in LaTeX (it looks like I have some damned babel problems!). Go on several forums. Spend the morning working on my LaTeX files.

12:54 Back from lunch, I turn off the sacred music and switch to Sandinista!, by The Clash.

13:14 JF wants to check with me the schedule of my graduate courses for the Winter 2014 session, one on extreme values and copulas, and one on time series. Have to go on the university website to find out what has been planned. Also spend some time digging through old emails, since the information is only partially online.

14:02 Have to go online, one more time, to buy a French railway ticket for a colleague of mine.

14:17 Mathieu asks me to check the emails received this morning, and to send an email to confirm that I will join a meeting about the organization of a workshop.

14:40 Discussion with Mathieu about resubmitting an article. We need to go on my email account to check several (interesting) comments from the referees.

15:03 This time, I have to launch the internet browser. I have to find out how to avoid first names in a bibliography, using a bib file, and get only initials. Some more time on LaTeX forums.

15:08 Remember that I have to book a room for a colleague visiting me this August. Need to check the price on the website, and send emails.

15:19 Write a recommendation for a former student of mine. Send it.

15:26 Discussion by email with two co-authors, since we planned to work on a joint paper this month.

15:48 Time to work on slides for a conference at the end of this month, and write code to produce some graphs. So, still on my computer. But not (really) online.

Finally, I went back home early, to cook, and spend some time with the kids.

  • Wednesday, April 3rd

Second day.

8:17 Check my R code again. Rerun one of the scripts. Looks like there was a problem.

8:50 Go give my course. Until noon, I spend the morning producing code, to show how to compute Chain Ladder estimates, and to explain the roots of the bootstrap in regression models.

13:03 Busy red light on my phone when I get back to my office: it looks like some people at the faculty tried to reach me. Usually I do not answer the phone. I hate my phone: if you want to reach me, send me an email. OK, in my mailbox, there are a couple of emails from the faculty: “please, call us back, regarding the conference you plan to organize”. Damned, how can I tell them I am on a mission, that I am trying to avoid using my email (and that I have to deal with my phone phobia at the same time)?

13:16 Go online to check the code of an R package I want to use to produce nice graphs.

13:30 Check my emails quickly, delete 90% of them, answer one, postpone the others. Decide to go work in a coffee shop for the whole afternoon. While taking the elevator, I start discussing with a colleague. Looks like I missed an email about a meeting taking place in the afternoon. I want to work on my chapters. I go to the coffee shop. Nothing serious in the meeting, as far as I understood. I spend the afternoon reading, checking typos in chapters that should be sent soon, and reading articles on advanced methods in finance, based on trees… No computer for a few hours! I did it!

17:49 I have to go online at home: time to pay my Hydro-Québec bill. Damned!

17:56 While I am about to log off, I receive an email from a former student of mine, with a link to a nice article (entitled “l'informatique, ça s'apprend”). I really want to share it. But I can't. Less than 36 hours into my mission, that would be a defeat! Do not go on Twitter!

18:02 Cooking for the kids. Will miss the Montreal Hackathon organized by R users. Wednesday evening is not a great time to join those social events (can geek meetings be called social?). My wife has a late course in the evening, and I am still trying to see how deeply addicted I am. I finally decide that I will check my emails twice a day, but just to remove spam (or messages like “I am a student in an engineering program in India and I would like to start a PhD with you”, or “the back door of one of the buildings will be locked during the weekend, from 22:15 till 23:42, for security reasons”) and to see if there are important ones. I will probably have to answer some of them, but I'll try to postpone most.

18:41 Started a game of kid audio with the girls. It almost turned into a brawl when they started arguing about the kettledrum and the snare drum; I wanted to show them on YouTube, but I finally gave up (there is an old saying: never interfere in a girl-and-girl fight). Decided to ask the elder to read a story to the youngest while I washed the dishes (which is usually the perfect timing for a DVD). Meanwhile, it looks like my son went online for his music assignment: his teacher uses online videos to help them practice. Argggg.

  • Thursday, April 4th

Third day.

8:29 As usual, checking the data-scraping code… relaunch the one that crashed (again) during the night:

Error in substr(html, 8, 12) :
  invalid multibyte input string at '<e9>lair,'

Damned. Moving around 200,000 pages without being caught is difficult. Have to play some music. Gonzales, Solo Piano.
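
One possible fix, a guess on my side (assuming the scraped pages are encoded in Latin-1, which the '<e9>' byte suggests), would be to convert the strings to UTF-8 before calling substr(),

> html=iconv(html,from="latin1",to="UTF-8",sub="")

so that invalid multibyte sequences no longer make substr() fail.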

8:47 Have to check my emails quickly. The problem is that, on average, per weekday, I get a bit more than 100 emails (excluding official spam). If I do not scan them now, I end up with one thousand emails very quickly… Need to moderate a comment on the blog.

8:50 Still online, checking my emails; bad news about funding for a student of mine. Have to send a couple of emails to find a backup solution.

8:54 Quick discussion by email about copyright for a chapter in a book.

8:54 Have to send emails, too, to book a room for a colleague who will visit me in August.

8:55 Postpone a Skype discussion with a co-author, still trying to avoid unnecessary use of my computer.

8:57 Answer an email to schedule a meeting: a student asked for a grade revision, and an ad hoc committee is necessary. Looks like I am part of it.

9:17 Start writing recommendation letters for Christophe, who is applying for positions in several universities in France.

9:34 Back to the slides and the R code. On my computer, but offline.

10:11 Email: brief answer to a former student of mine, who might be interested in sharing some datasets, but it looks like there might be some confidentiality issues. I wanted to work on those data with a student in September. Have to find an alternative.

10:45 Discussion with a student in the master's program (face to face, this time). Have to go on Dropbox to download a pdf file he wrote, and a couple of papers to check the proofs.

13:05 Work with Amadou, my PhD student; we need to find a pdf version of a book, since the property we need is clearly stated there, but the book is out of print (no way to get a paper copy). Also go online to find a reference on a complicated model.

14:40 Email from Fred, about a reserving technique that seems new, from a paper he just discovered.

15:03 Upload to SlideShare some (old) slides that do the same thing as this new paper (isn't anyone checking, before papers get published, that the results are really new?).

15:14 Play Rodrigo y Gabriela; I need something punchy to finish my day.

16:56 Received an email from the immigration department; I have to go on their website to find a doctor for a medical examination of the whole family.

19:26 Request by email from financial services to get the exact amount (in euros) charged to my credit card. Have to go on my bank account, online.

19:43 Discussion by email with Frédéric about one-year uncertainty and the bootstrap with overdispersed Poisson models.

  • Friday, April 5th

Last day of the test. Fourth day.

06:18 My daughter wakes up and tells me it is unfair to have snow on her birthday. Have to go on http://meteomedia.com/weather/… to check the weather forecast. Hopefully, we should have nice weather this afternoon…

08:16 Once again, checking the R code; still running this time! Great! Time to play some music. Air, Premiers Symptômes.

8:38 Email regarding next week's jury, checking legal aspects.

9:04 I have to print bank information that I downloaded yesterday, check orders placed on Amazon in the last 3 months, scan documents, and send them to financial services.

9:13 Have to check an account number with financial services.

9:15 Brief email to some contributors of a book that I should edit this year.

9:21 Launch Skype; I have to talk with co-authors in France.

11:54 Received an email about courses in September; looks like a quick answer is needed.

13:10 Update the syllabus for the course I will give in September. Decide to write that cellphones will not be allowed during class.

14:12 Work with Ben, a master's student, on a paper. Need to scan notes I wrote down and send them to him by email (this time, we worked together without using my computer).

14:38 Finished my recommendation letters for Christophe (for positions in France, more than a dozen recommendations). Have to send them individually by email. Also have to find some email addresses that I do not have.

14:49 Received an email claiming that I cannot give my (graduate) course on extreme values in 2014, since it is not in the official program. Have to spend some time checking why. It seems the course has been removed from the list, and that its code has changed (hopefully, JF was online to check that information much more efficiently than I would have).

14:59 Received an answer from one of the colleagues I had just sent a recommendation to. We were students at the same time, a few years ago. Write a short email back to give some (personal) news.

16:19 Started to type sketches of what could be the final exam for my course on GLMs for actuarial science. So far, 5 questions. I need about 40… It will take a while.

Mission aborted.

I finally left the office later on, to pick up my son and bring him to his fencing course (to prepare for the Jeux de Montréal that will take place tomorrow). Then we went back home, for a cake with candles for my daughter's birthday. Later on, I had to spend some time online, on the blog, for my students. And I started to type this post. So, here is the story of the past four days. I have to admit that looking back at those four days is quite informative:

  1. I cannot work without a computer; I can hardly work offline. Not only for my research, to get help from forums on R and LaTeX, or to look for articles (the days I spent in Paris, at the library, making photocopies of old articles are clearly over).
  2. I do not need a computer only to write R code and produce LaTeX slides and articles. I need a computer… for everything. To plan meetings, for social interactions with colleagues, to find some help, to find a theorem, to book a hotel, etc.
  3. I understand more clearly why I am so unproductive in terms of research! It looks like I spend (I was about to say waste) a lot of time on administrative tasks. Small tasks. But add up a lot of small tasks and it becomes difficult to find 4 or 5 consecutive hours to work exclusively on a research project.

Bootstrap and regression

In the last class, we discussed the use of the bootstrap to obtain confidence intervals for predictions. I am putting online the code typed in class (only briefly commented; see older posts from the ACT6420 course for additional details). We will work with my favorite dataset for discussing linear regression (before talking about loss reserving triangles, let us spend five minutes on simple things).

> plot(cars)
> reg=lm(dist~speed,data=cars)
> abline(reg,col="red")
> n=nrow(cars)
> x=21
> points(x,predict(reg,newdata= data.frame(speed=x)),pch=19,col="red")

Here, we want to make a prediction at a given point. As recalled in class (and also in the forecasting-models course), when giving a confidence interval for a prediction, one should distinguish between the confidence interval for the predictor (which depends on the estimation error of the parameters) and the confidence interval for a potential value (one can speak of scenario generation, which additionally depends on the model error, that is, the dispersion of the residuals). Let us start with the confidence interval for the prediction, the best estimate as we say in loss reserving.
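
As a reminder (standard formulas for simple linear regression, with the usual notation; this was not typed in class), under the Gaussian assumption the interval at level $1-\alpha$ for the prediction (the best estimate) is
$$\widehat{y}(x)\pm t_{n-2,1-\alpha/2}\,\widehat{\sigma}\sqrt{\frac{1}{n}+\frac{(x-\overline{x})^2}{\sum_i(x_i-\overline{x})^2}}$$
while the interval for a possible value has an extra 1 under the square root,
$$\widehat{y}(x)\pm t_{n-2,1-\alpha/2}\,\widehat{\sigma}\sqrt{1+\frac{1}{n}+\frac{(x-\overline{x})^2}{\sum_i(x_i-\overline{x})^2}}$$
the additional term coming from the dispersion of the residuals.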

> Yx=rep(NA,500)
> B=matrix(NA,500,2)
> for(s in 1:500){
+ indice=sample(1:n,size=n,
+ replace=TRUE)
+ base=cars[indice,]
+ #points(base,pch=3)
+ reg=lm(dist~speed,data=base)
+ abline(reg,col="light blue")
+ points(x,predict(reg,newdata=data.frame(speed=x)),pch=19,col="blue")
+ Yx[s]=predict(reg,newdata=data.frame(speed=x))
+ B[s,]=coefficients(reg)
+ }

The blue values are possible predictions, obtained by resampling from our dataset of observations. As a reminder, the (90%) confidence interval under the assumption of Gaussian residuals (and hence of Gaussian estimators for the slope and the intercept of the regression line) is obtained as follows:

> reg=lm(dist~speed,data=cars)
> U=predict(reg,interval ="confidence",
+ level=.9,newdata=
+ data.frame(speed=0:30))
> lines(0:30,U[,2],col="red",lwd=2)
> lines(0:30,U[,3],col="red",lwd=2)

We can compare here the distribution of the values obtained on our 500 generated datasets, comparing the empirical quantiles with the quantiles obtained under the normality assumption,

> hist(Yx,proba=TRUE,col="light blue",border="white")
> boxplot(Yx,horizontal=TRUE,at=.07,boxwex = 0.02,add=TRUE,col="light green")
> abline(v=U[x+1,2:3],col="red",lwd=2)
> D=density(Yx)
> lines(D)
> I=which(D$x<=quantile(Yx,.05))
> polygon(c(D$x[I],rev(D$x[I])),c(D$y[I],rep(0,length(I))),col="blue",border=NA)
> I=which(D$x>=quantile(Yx,.95))
> polygon(c(D$x[I],rev(D$x[I])),c(D$y[I],rep(0,length(I))),col="blue",border=NA)

Note that the orders of magnitude are comparable.

> reg=lm(dist~speed,data=cars)
> quantile(Yx,c(.05,.95))
      5%      95% 
58.63689 70.31281 
> predict(reg,interval ="confidence",
+ level=.9,newdata=data.frame(speed=x)) 
       fit      lwr      upr
1 65.00149 59.65934 70.34364

Let us now look at the other type of confidence interval, for a possible value of the variable of interest. This time, in addition to drawing new samples and computing predictions, we add a noise term at each draw, which yields a possible value.

> Yx=rep(NA,500)
> for(s in 1:500){
+ indice=sample(1:n,size=n,
+ replace=TRUE)
+ base=cars[indice,]
+ #points(base,pch=3)
+ reg=lm(dist~speed,data=base)
+ erreur=residuals(reg)
+ #abline(reg,lty=2)
+ E=sample(erreur,size=1)
+ Yx[s]=predict(reg,newdata=data.frame(speed=x))+E
+ points(x,Yx[s],pch=19,col="red")
+ }

Here again, we can compare (graphically, to start with) the values obtained by resampling with those obtained under the normality assumption,

> # recompute U as a prediction interval at speed=x
> # (it previously held confidence bands on the 0:30 grid)
> U=predict(reg,interval ="prediction",
+ level=.9,newdata=data.frame(speed=x))
> hist(Yx,proba=TRUE,col="light blue",border="white")
> boxplot(Yx,horizontal=TRUE,at=.025,boxwex = 0.005,add=TRUE,col="light green")
> abline(v=U[2:3],col="red",lwd=2)
> D=density(Yx)
> lines(D)
> I=which(D$x<=quantile(Yx,.05))
> polygon(c(D$x[I],rev(D$x[I])),c(D$y[I],rep(0,length(I))),col="blue",border=NA)
> I=which(D$x>=quantile(Yx,.95))
> polygon(c(D$x[I],rev(D$x[I])),c(D$y[I],rep(0,length(I))),col="blue",border=NA)

Numerically, this gives the following comparison:

> quantile(Yx,c(.05,.95))
      5%      95% 
44.43468 96.01357 
> (U=predict(reg,interval ="prediction",level=.9,newdata=data.frame(speed=x)))
       fit      lwr      upr
1 67.63136 45.16967 90.09305

This time, we observe a slight asymmetry to the right. Clearly, we cannot assume that the residuals are Gaussian: there are more large positive values than negative ones. This makes sense given the nature of the data (a distance cannot be negative).
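
To check this more formally (a minimal sketch, not typed in class), one can look at a normal Q-Q plot of the residuals, or run a Shapiro-Wilk test,

> reg=lm(dist~speed,data=cars)
> qqnorm(residuals(reg))
> qqline(residuals(reg))
> shapiro.test(residuals(reg))

a small p-value suggesting that the Gaussian assumption is indeed questionable here.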

We then started discussing the use of regression models in loss reserving. In order to work with data exhibiting some independence, we recalled that one should use incremental payments, not cumulative ones.

> T
     [,1] [,2] [,3] [,4] [,5] [,6]
[1,] 3209 4372 4411 4428 4435 4456
[2,] 3367 4659 4696 4720 4730   NA
[3,] 3871 5345 5398 5420   NA   NA
[4,] 4239 5917 6020   NA   NA   NA
[5,] 4929 6794   NA   NA   NA   NA
[6,] 5217   NA   NA   NA   NA   NA
> n=ncol(T)
> Y=T
> Y[,2:n]=T[,2:n]-
+         T[,1:(n-1)]
> Y
     [,1] [,2] [,3] [,4] [,5] [,6]
[1,] 3209 1163   39   17    7   21
[2,] 3367 1292   37   24   10   NA
[3,] 3871 1474   53   22   NA   NA
[4,] 4239 1678  103   NA   NA   NA
[5,] 4929 1865   NA   NA   NA   NA
[6,] 5217   NA   NA   NA   NA   NA

We can then build a dataset, with the row (year of occurrence) and the column (development period) as explanatory variables.

> y=as.vector(as.matrix(Y))
> base=data.frame(
+ y,
+ ai=rep(2000:2005,n),
+ bj=rep(0:(n-1),each=n))
> 
> head(base,12)
      y   ai bj
1  3209 2000  0
2  3367 2001  0
3  3871 2002  0
4  4239 2003  0
5  4929 2004  0
6  5217 2005  0
7  1163 2000  1
8  1292 2001  1
9  1474 2002  1
10 1678 2003  1
11 1865 2004  1
12   NA 2005  1
> tail(base,12)
    y   ai bj
25  7 2000  4
26 10 2001  4
27 NA 2002  4
28 NA 2003  4
29 NA 2004  4
30 NA 2005  4
31 21 2000  5
32 NA 2001  5
33 NA 2002  5
34 NA 2003  5
35 NA 2004  5
36 NA 2005  5

We can then start with the Regression models based on log-incremental payments of Stavros Christofides, based on a lognormal model initially introduced by Etienne de Vylder in 1978 (Markus discusses it, in three parts, on his blog http://lamages.blogspot.ca/Barnett%20Zehnwirth).
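
Written out explicitly (a short reminder, with notation matching the code below), the model is
$$\log Y_{i,j}=\gamma+\alpha_i+\beta_j+\varepsilon_{i,j},\qquad\varepsilon_{i,j}\sim\mathcal{N}(0,\sigma^2)$$
and the (approximately) unbiased prediction on the original scale is
$$\widehat{Y}_{i,j}=\exp\left(\widehat{\gamma}+\widehat{\alpha}_i+\widehat{\beta}_j+\widehat{\sigma}^2/2\right)$$
which is where the $\widehat{\sigma}^2/2$ correction used below comes from.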

> reg1=lm(log(y)~
+ as.factor(ai)+
+ as.factor(bj),data=base)
> summary(reg1)

Call:
lm(formula = log(y) ~ as.factor(ai) + as.factor(bj), data = base)

Residuals:
     Min       1Q   Median       3Q      Max 
-0.26374 -0.05681  0.00000  0.04419  0.33014 

Coefficients:
                  Estimate Std. Error t value Pr(>|t|)    
(Intercept)         7.9471     0.1101  72.188 6.35e-15 ***
as.factor(ai)2001   0.1604     0.1109   1.447  0.17849    
as.factor(ai)2002   0.2718     0.1208   2.250  0.04819 *  
as.factor(ai)2003   0.5904     0.1342   4.399  0.00134 ** 
as.factor(ai)2004   0.5535     0.1562   3.543  0.00533 ** 
as.factor(ai)2005   0.6126     0.2070   2.959  0.01431 *  
as.factor(bj)1     -0.9674     0.1109  -8.726 5.46e-06 ***
as.factor(bj)2     -4.2329     0.1208 -35.038 8.50e-12 ***
as.factor(bj)3     -5.0571     0.1342 -37.684 4.13e-12 ***
as.factor(bj)4     -5.9031     0.1562 -37.783 4.02e-12 ***
as.factor(bj)5     -4.9026     0.2070 -23.685 4.08e-10 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 

Residual standard error: 0.1753 on 10 degrees of freedom
  (15 observations deleted due to missingness)
Multiple R-squared: 0.9975,	Adjusted R-squared: 0.9949 
F-statistic: 391.7 on 10 and 10 DF,  p-value: 1.338e-11 

> base$py=exp(predict(reg1,
+ newdata=base)+summary(reg1)$sigma^2/2)
> round(matrix(base$py,n,n),1)
       [,1]   [,2] [,3] [,4] [,5] [,6]
[1,] 2871.2 1091.3 41.7 18.3  7.8 21.3
[2,] 3370.8 1281.2 48.9 21.5  9.2 25.0
[3,] 3768.0 1432.1 54.7 24.0 10.3 28.0
[4,] 5181.5 1969.4 75.2 33.0 14.2 38.5
[5,] 4994.1 1898.1 72.5 31.8 13.6 37.1
[6,] 5297.8 2013.6 76.9 33.7 14.5 39.3
> sum(base$py[is.na(base$y)])
[1] 2481.857
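
For comparison, here is a minimal sketch of the Chain Ladder computation on the cumulative triangle T (using the standard volume-weighted development factors; this code was not typed in class, it is only here as a sanity check),

> lambda=sapply(1:(n-1),function(j)
+ sum(T[1:(n-j),j+1])/sum(T[1:(n-j),j]))
> Tfull=T
> for(j in 2:n){
+ i=which(is.na(Tfull[,j]))
+ Tfull[i,j]=Tfull[i,j-1]*lambda[j-1]
+ }
> sum(Tfull[,n]-diag(T[,n:1]))

where the last line returns the total reserve, i.e. the sum of the estimated ultimates minus the latest observed diagonal.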

We obtain an amount slightly different from the one given by the Chain Ladder method, but nevertheless comparable. One can also try a Poisson regression (with a log link), as suggested by Hachemeister and Stanard in 1975,

> reg2=glm(y~
+ as.factor(ai)+
+ as.factor(bj),data=base,
+ family=poisson)
> summary(reg2)

Call:
glm(formula = y ~ as.factor(ai) + as.factor(bj), family = poisson, 
    data = base)

Deviance Residuals: 
    Min       1Q   Median       3Q      Max  
-2.3426  -0.4996   0.0000   0.2770   3.9355  

Coefficients:
                  Estimate Std. Error z value Pr(>|z|)    
(Intercept)        8.05697    0.01551 519.426  < 2e-16 ***
as.factor(ai)2001  0.06440    0.02090   3.081  0.00206 ** 
as.factor(ai)2002  0.20242    0.02025   9.995  < 2e-16 ***
as.factor(ai)2003  0.31175    0.01980  15.744  < 2e-16 ***
as.factor(ai)2004  0.44407    0.01933  22.971  < 2e-16 ***
as.factor(ai)2005  0.50271    0.02079  24.179  < 2e-16 ***
as.factor(bj)1    -0.96513    0.01359 -70.994  < 2e-16 ***
as.factor(bj)2    -4.14853    0.06613 -62.729  < 2e-16 ***
as.factor(bj)3    -5.10499    0.12632 -40.413  < 2e-16 ***
as.factor(bj)4    -5.94962    0.24279 -24.505  < 2e-16 ***
as.factor(bj)5    -5.01244    0.21877 -22.912  < 2e-16 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 

(Dispersion parameter for poisson family taken to be 1)

    Null deviance: 46695.269  on 20  degrees of freedom
Residual deviance:    30.214  on 10  degrees of freedom
  (15 observations deleted due to missingness)
AIC: 209.52

Number of Fisher Scoring iterations: 4

> base$py2=predict(reg2,
+ newdata=base,type="response")
> 
> round(matrix(base$py2,n,n),1)
       [,1]   [,2] [,3] [,4] [,5] [,6]
[1,] 3155.7 1202.1 49.8 19.1  8.2 21.0
[2,] 3365.6 1282.1 53.1 20.4  8.8 22.4
[3,] 3863.7 1471.8 61.0 23.4 10.1 25.7
[4,] 4310.1 1641.9 68.0 26.1 11.2 28.7
[5,] 4919.9 1874.1 77.7 29.8 12.8 32.7
[6,] 5217.0 1987.3 82.4 31.6 13.6 34.7
> 
> sum(base$py2[is.na(base$y)])
[1] 2426.985

The prediction coincides with the estimator obtained by the Chain Ladder method. The link with minimum-bias methods was established by Klaus Schmidt and Angela Wünsche in 1998, in Chain ladder, marginal sum and maximum likelihood estimation. Next week, we will discuss bootstrap methods to obtain confidence intervals, or quantiles, on reserve amounts. I do not know whether I will have time to type up slides; for this part of the course, I prefer to type code as we go and write on the board. See Chapter 3 of the book with Christophe Dutang (online at http://cran.r-project.org/doc/contrib/) for details. This is the code I type in class, while also trying to answer questions.