Tag Archives: exponential

The taboo of the exponential

For Carl Sagan, “if you understand exponentials, the key to many of the secrets of the universe is in your hand”. But not everyone seems ready to unlock the secrets of the universe. Thus, in mid-November 2021, more than 18 months after the beginning of the Covid-19 pandemic, the Minister of Health stated “the circulation of the virus has accelerated for a few weeks now, with an increase of 30% to 40% per week. We are not yet in a so-called exponential phase” (quoted in Ouest France (2021)). Since an increase at a constant rate is precisely the definition of “exponential growth”, one may wonder about this statement: does it reveal a lack of numeracy on the part of our leaders, or a choice of words, “exponential” having become a taboo word that should not be uttered?
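An increase of 30% to 40% per week, at a constant rate, is by definition a geometric, i.e. exponential, progression. A two-line check in R (with made-up numbers, say 1,000 initial cases and +35% per week):

weeks = 0:10
round(1000 * 1.35^weeks)   # i.e. 1000*exp(weeks*log(1.35)): exponential growth, cases multiplied by about 20 in ten weeks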

(this is the English translation of a post published in January 2022)


R0 and the exponential growth of a pandemic, an update

A few days ago, I wrote a blog post – R0 and the exponential growth of a pandemic – where I was trying to generate some visualization of exponential growth, in the context of a pandemic. After giving it some more thought, I realized the previous graph might not be the best one to illustrate an exponential contagion.

Having graphs evolving from left to right gives us the (false) impression of some temporal evolution, which is not necessarily correct. It simply means that contaminated people will contaminate other people, and we look at the number of iterations here. So maybe some concentric dots would look better.

And from a technical perspective, what I did was fun, but probably too complicated. In my previous post, I wanted to pack k identical disks optimally into a unit circle. On http://hydra.nat.uni-magdeburg.de/packing, it was possible to get the “best known packings of equal circles in a circle”, with the coordinates. But as we will see, we can use something much simpler here.

My idea is now to create some picture like the one below, with concentric colored dots. In the center, we have the first people that were contaminated, and then, moving outward, we can see the transmission.

From a technical perspective, here, I use a different strategy. I decided to draw points uniformly at random. The problem with randomness is its naturally high discrepancy: with Monte Carlo draws, it is very likely that some disks will overlap. It is not a major issue, but it might distort the message. So I decided to use a low-discrepancy sequence instead, such as Halton’s sequence.

library(randtoolbox)
S = halton(n=5000, dim = 2)*2-1   # Halton sequence, rescaled to the square [-1,+1]^2

Here, I have coordinates in [-1,+1]^2. Then, to locate the dots inside a circle, I simply compute the (squared) distance to the origin (0,0),

D0 = S[,1]^2+S[,2]^2

and take the ranks. If I want to visualize k=200 people, I consider the 200 smallest ranks. To get concentric circles, the i-th ring containing k_i individuals, I use as thresholds (on the ranks) the cumulative counts \bar k_1,\bar k_2,\bar k_3, etc, where \bar k_i=\bar k_{i-1}+k_i,

R0 = rank(D0,ties.method = "random")
C0 = as.numeric(cut(R0,c(0,cumsum(k)+.5,100000)))

where

R0=1.8
k=round(R0^(seq(1,9,by=2)))   # number of dots in each concentric ring

Then we can plot the dots, with appropriate colors,

points(S,pch=19,col=colrpal[C0],cex=.75)
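Note that the color palette colrpal is not defined in this excerpt; any vector with one color per ring, plus a background color for the remaining dots, will do. A hypothetical choice (an assumption here, not the original palette) could be

colrpal = c(heat.colors(length(k)),"light yellow")   # one color per ring (red in the center), background color for the rest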

And of course, we can try that with different values of R_0,

library(randtoolbox)
R0=2.2
k=round(R0^(seq(1,9,by=2)))
kmax=max(k)
S = halton(n=5000, dim = 2)*2-1
plot(S,col="light yellow",axes=FALSE,xlab="",ylab="",xlim=c(-1.3,1),ylim=c(-1,1),cex=.75,pch=19)
D0 = S[,1]^2+S[,2]^2
R0 = rank(D0,ties.method = "random")   # R0 is reused here, for the ranks
C0 = as.numeric(cut(R0,c(0,cumsum(k)+.5,100000)))
points(S,pch=19,col=colrpal[C0],cex=.75)

“A 99% TVaR is generally a 99.6% VaR”

Almost 6 years ago, I posted a brief comment on a sentence I found surprising at the time, discovered in a report claiming that

the expected shortfall […] at the 99 % level corresponds quite closely to the […] value-at-risk at a 99.6% level

which was inspired by a remark in the Swiss Experience report,

expected shortfall […] on a 99% confidence level […] corresponds to approximately 99.6% to 99.8% Value at Risk
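For a Gaussian risk, the order of magnitude is easy to check numerically (a quick sketch, not the computation from those reports): the 99% expected shortfall of a standard Gaussian variable can be compared with its 99.6% quantile,

alpha = .99
dnorm(qnorm(alpha))/(1-alpha)   # 99% expected shortfall (TVaR) of a N(0,1) risk, about 2.665
qnorm(.996)                     # 99.6% quantile (VaR), about 2.652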


More significant? so what…

Following my non-life insurance class this morning, I had an interesting question from a student, which I will try to illustrate and reformulate as accurately as possible. Consider a simple regression model, with one variable of interest and one possible explanatory variable. Assume that we have two possible models, with the following outputs (yes, I do hide the interesting parts here, but it is to get quickly to my student’s point)

Coefficients:
            Estimate Std. Error t value Pr(>|t|)    
(Intercept)  0.92883    0.06391  14.534   <2e-16 ***
X           -0.12499    0.06108  -2.046   0.0421 *  
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

for the first model – a GLM with some distribution, and some link function – and

Coefficients:
            Estimate Std. Error t value Pr(>|t|)    
(Intercept)  0.92901    0.06270  14.817   <2e-16 ***
X           -0.09883    0.05816  -1.699   0.0909 .  
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

for the second one – another GLM, with another distribution, but the same link function (I guess I could have changed it, but it does not really matter here). Then, I got the following statement: “I would like to choose the first model because the explanatory variable is more significant, and therefore, this model should have a stronger predictive power”.

That’s a nice idea, isn’t it? Actually, I guess this is why I love teaching, because I would never have come up with such an idea by myself. Because when you look at that statement, somehow it could make sense. Except that, from my point of view, it is not valid at all. My first thought was to recall a standard example in statistical inference: you cannot claim that a distribution is better than another one just by looking at the parameter estimates.

> library(MASS)
> fitdistr(Y,"normal")
      mean          sd    
  0.93685011   0.90700830 
 (0.06413517) (0.04535042)
> fitdistr(Y,"exponential")
      rate   
  1.06740661 
 (0.07547704)

Can I claim that the Gaussian distribution is better than the exponential one because the parameter estimates have smaller standard errors? Because somehow, this is what we did when we claimed previously that the first model was better than the second one.

Let me get back to the outputs of the two regressions, and let me explain what I did. Actually, I wanted to have a story close to the one on the Gaussian versus exponential fit. So I did generate some exponential random variables,

> set.seed(5)
> n=200
> U=runif(n); 
> Y=-log(U)

Here, we can visualize the histogram of this sample, as well as the estimated exponential distribution

> hist(Y,proba=TRUE,col="light green",border="white",lwd=2,breaks=seq(0,16/3,by=1/3))
> x=seq(0,6,by=.02)
> lines(x,dexp(x,1/mean(Y)),col="red",lty=2)

On top of that, let us fit a gamma distribution, using a GLM (where the regression is here on a constant only), just to practice, because later on, we will use a gamma regression on that variable

> reg0=glm(Y~1,family=Gamma(link="identity"))
> a=reg0$coefficients          # fitted mean (the intercept)
> b=summary(reg0)$dispersion   # estimated dispersion, i.e. 1/shape
> lines(x,dgamma(x,shape=1/b,scale=a*b),col="blue")

Now, we need a covariate, to run some regressions. What I wanted is some variable slightly correlated with our previous variable. Slightly, just to make sure that the p-value in the regression will be close to 5% or 10%. So here, I did generate a variable so that the pair has a Clayton copula, with parameter 0.1 (which is small, extremely small)

> a=.1
> set.seed(5)
> n=200
> U=runif(n)
> V=(U^(-a)*(runif(n)^(-a/(1+a))-1)+1)^(-1/a)   # conditional inversion of the Clayton(a) copula, given U
> Y=-log(U)    # exponential variable of interest
> X=qnorm(V)   # Gaussian covariate, slightly correlated with Y

To visualize the copula of the variables, we can use

> cop=function(u,v){
+ (a+1)*(u*v)^(-(a+1))*
+ (u^(-a)+v^(-a)-1)^(-(2*a+1)/a) }
> x=y=seq(.05,.95,by=.05)
> z=outer(x,y,cop)
> mat=persp(x,y,z,col="green",shade=TRUE,xlim=c(0,1),ylim=c(0,1),zlim=c(0,2),theta=-30,
+ ticktype ="detailed",zlab="")

We should not be far away from independence (actually, there is a significant, negative, Pearson correlation between the two variables). Now, consider two models,

  • a Gaussian model (here a standard linear model)
  • a gamma model, with an identity (linear) link function

The outputs are the following (you will recognize the outputs given previously)

> reg1=lm(Y~X)
> reg2=glm(Y~X,family=Gamma(link="identity"))
> summary(reg1)

Coefficients:
            Estimate Std. Error t value Pr(>|t|)    
(Intercept)  0.92883    0.06391  14.534   <2e-16 ***
X           -0.12499    0.06108  -2.046   0.0421 *  
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Residual standard error: 0.9021 on 198 degrees of freedom
Multiple R-squared:  0.02071,	Adjusted R-squared:  0.01576 
F-statistic: 4.187 on 1 and 198 DF,  p-value: 0.04206

> summary(reg2)

Coefficients:
            Estimate Std. Error t value Pr(>|t|)    
(Intercept)  0.92901    0.06270  14.817   <2e-16 ***
X           -0.09883    0.05816  -1.699   0.0909 .  
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

(Dispersion parameter for Gamma family taken to be 0.9086447)

    Null deviance: 229.72  on 199  degrees of freedom
Residual deviance: 226.58  on 198  degrees of freedom
AIC: 379.22

Number of Fisher Scoring iterations: 10

And here are the two predictions,

So, which model should we use? As usual, my answer will be “let’s have a look at the data” instead of looking only at tables of figures. Using some code posted a few days ago, let us visualize the two regressions. The Gaussian model is here

(for the lower part, I do not go below 0 since we do have, here, a positive variable that we would like to model) while the gamma one is here
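The visualization code, from the earlier post, is not reproduced here; a minimal sketch (an approximation of it, using predict() with standard errors on both fitted models) could be

> u = seq(-3,3,by=.1)
> p1 = predict(reg1,newdata=data.frame(X=u),se.fit=TRUE)                     # Gaussian model
> p2 = predict(reg2,newdata=data.frame(X=u),se.fit=TRUE,type="response")     # gamma model
> plot(X,Y,pch=19,col="light green")
> lines(u,p1$fit,col="red")
> lines(u,p1$fit+2*p1$se.fit,col="red",lty=2)
> lines(u,p1$fit-2*p1$se.fit,col="red",lty=2)
> lines(u,p2$fit,col="blue")
> lines(u,p2$fit+2*p2$se.fit,col="blue",lty=2)
> lines(u,p2$fit-2*p2$se.fit,col="blue",lty=2)

with the red curves for the Gaussian fit, and the blue ones for the gamma fit.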

And if we believe that the explanatory variable has no predictive power (since we can claim that the parameter is not significant in the regression), and we remove it from the regression, we get

Here, I do believe that the gamma (not to say the exponential) model is better, because it is clearly more coherent with the properties of the variable of interest. I trust the confidence interval obtained above with the gamma model more than the one obtained with the Gaussian distribution, even if the parameter in the regression is “more significant”.

Fisher-Tippett theorem and limiting distribution for the maximum

Tomorrow, we will discuss the Fisher-Tippett theorem. The idea is that there are only three possible limiting distributions for normalized versions of the maxima $M_n=\max\{X_1,\dots,X_n\}$ of i.i.d. samples. For bounded distributions, consider e.g. the uniform distribution on the unit interval, i.e. $F(x)=x$ on $[0,1]$. Let $b_n=1$ and $a_n=1/n$. Then, for all $x\leq 0$ and $n$ large enough,

$$\mathbb{P}\left(\frac{M_n-b_n}{a_n}\leq x\right)=\mathbb{P}\left(M_n\leq 1+\frac{x}{n}\right)=\left(1+\frac{x}{n}\right)^n\rightarrow e^{x}$$

i.e. the limiting distribution of the maximum is Weibull’s (the reversed Weibull distribution).

set.seed(1)
s=1000000
n=100
M=matrix(runif(s),n,s/n)
V=apply(M,2,max)
bn=1
an=1/n
U=(V-bn)/an
hist(U,probability=TRUE,col="light green",
xlim=c(-7,1),main="",breaks=seq(-20,10,by=.25))
u=seq(-10,0,by=.1)
v=exp(u)
lines(u,v,lwd=3,col="red")

For heavy tailed distributions, or Pareto-type tails, consider Pareto samples, with distribution function $F(x)=1-x^{-\alpha}$, $x\geq 1$ (here, with $\alpha=2$). Let $b_n=0$ and $a_n=n^{1/\alpha}$, then

$$\mathbb{P}\left(\frac{M_n-b_n}{a_n}\leq x\right)=\left(1-\frac{x^{-\alpha}}{n}\right)^n\rightarrow \exp\left(-x^{-\alpha}\right)$$

which means that the limiting distribution is Fréchet’s.

library(evd)   # provides dfrechet (and dgumbel, used below)
set.seed(1)
s=1000000
n=100
M=matrix((runif(s))^(-1/2),n,s/n)   # Pareto (alpha=2) variates, by inversion
V=apply(M,2,max)
bn=0
an=n^(1/2)
U=(V-bn)/an
hist(U,probability=TRUE,col="light green",
xlim=c(0,7),main="",breaks=seq(0,max(U)+1,by=.25))
u=seq(0,10,by=.1)
v=dfrechet(u,shape=2)
lines(u,v,lwd=3,col="red")

For light tailed distributions, or exponential tails, consider e.g. a sample of exponentially distributed variates, with common distribution function $F(x)=1-e^{-x}$. Let $b_n=\log n$ and $a_n=1$, then

$$\mathbb{P}\left(\frac{M_n-b_n}{a_n}\leq x\right)=\left(1-\frac{e^{-x}}{n}\right)^n\rightarrow \exp\left(-e^{-x}\right)$$

i.e. the limiting distribution for the maximum is Gumbel’s distribution.

library(evd)
set.seed(1)
s=1000000
n=100
M=matrix(rexp(s,1),n,s/n)
V=apply(M,2,max)
(bn=qexp(1-1/n))   # quantile of order 1-1/n of the exponential distribution...
log(n)             # ...which is exactly log(n)
an=1
U=(V-bn)/an
hist(U,probability=TRUE,col="light green",
xlim=c(-2,7),ylim=c(0,.39),main="",breaks=seq(-5,15,by=.25))
u=seq(-5,15,by=.1)
v=dgumbel(u)
lines(u,v,lwd=3,col="red")

Consider now a Gaussian $\mathcal{N}(0,1)$ sample. We can use the following approximation of (the tail of) the cumulative distribution function (based on l’Hôpital’s rule)

$$1-\Phi(x)\sim\frac{\varphi(x)}{x}$$

as $x\rightarrow\infty$, where $\varphi$ denotes the density. Let $b_n=\Phi^{-1}(1-1/n)$ and $a_n=1/b_n$. Then we can get

$$\mathbb{P}\left(\frac{M_n-b_n}{a_n}\leq x\right)\rightarrow\exp\left(-e^{-x}\right)$$

as $n\rightarrow\infty$, i.e. the limiting distribution of the maximum of a Gaussian sample is Gumbel’s. But what we do not see here is that, for a Gaussian sample, the convergence is extremely slow, i.e., with 100 observations, we are still far away from the Gumbel distribution,

and it is only slightly better with 1,000 observations,

set.seed(1)
s=10000000
n=1000
M=matrix(rnorm(s,0,1),n,s/n)
V=apply(M,2,max)
(bn=qnorm(1-1/n,0,1))
an=1/bn
U=(V-bn)/an
hist(U,probability=TRUE,col="light green",
xlim=c(-2,7),ylim=c(0,.39),main="",breaks=seq(-5,15,by=.25))
u=seq(-5,15,by=.1)
v=dgumbel(u)
lines(u,v,lwd=3,col="red")

Even worse, consider lognormal observations. In that case, recall that if we consider (increasing) transformations of the variates, we stay in the same domain of attraction. Hence, since $Y=e^X$ with $X\sim\mathcal{N}(0,1)$, if

$$\mathbb{P}\left(\frac{\max\{X_1,\dots,X_n\}-b_n}{a_n}\leq x\right)\rightarrow\exp\left(-e^{-x}\right)$$

then

$$\mathbb{P}\left(\max\{Y_1,\dots,Y_n\}\leq e^{b_n+a_nx}\right)\rightarrow\exp\left(-e^{-x}\right)$$

i.e. using Taylor’s approximation on the right term, $e^{b_n+a_nx}\approx e^{b_n}+a_ne^{b_n}x$,

$$\mathbb{P}\left(\frac{\max\{Y_1,\dots,Y_n\}-e^{b_n}}{a_ne^{b_n}}\leq x\right)\rightarrow\exp\left(-e^{-x}\right)$$

This gives us the normalizing coefficients we should use here, $b_n'=e^{b_n}$ and $a_n'=a_ne^{b_n}$.

set.seed(1)
s=10000000
n=1000
M=matrix(rlnorm(s,0,1),n,s/n)
V=apply(M,2,max)
bn=exp(qnorm(1-1/n,0,1))
an=exp(qnorm(1-1/n,0,1))/(qnorm(1-1/n,0,1))
U=(V-bn)/an
hist(U,probability=TRUE,col="light green",
xlim=c(-2,7),ylim=c(0,.39),main="",breaks=seq(-5,40,by=.25))
u=seq(-5,15,by=.1)
v=dgumbel(u)
lines(u,v,lwd=3,col="red")

Will I ever be a bayesian statistician? (part 2)

A few weeks ago, I started a series of posts on the magic of bayesian statistics, seen through the eyes of a muggle (see http://freakonometrics.hypotheses.org/2191). It might be time to go a bit further… And today, I wanted to discuss the choice of the a priori distribution of the parameter (which was mentioned in the comments of the previous post). As far as I understood, there are several houses, with different ideas on how to choose it.

  • The conjugate house

The first idea (used in the previous post, here) is to consider a distribution from the exponential family. To be formal, those distributions can be written as

$$f(x\mid\theta)=h(x)\exp\left(\eta(\theta)\cdot T(x)-A(\theta)\right)$$

(in a form as general as possible), i.e.

$$f(x\mid\eta)=h(x)\exp\left(\eta\cdot T(x)-A(\eta)\right)$$

Here $\eta$ is somehow the new (natural) parameter of the distribution. Then, a conjugate prior $\pi$ for the parameter $\eta$ of the exponential family is given by

$$\pi(\eta\mid\chi,\nu)=C(\chi,\nu)\exp\left(\eta\cdot\chi-\nu A(\eta)\right)$$

where $\chi\in\mathbb{R}^d$ (where $d$ is the dimension of $\eta$).

The conjugate prior is interesting since, when combined with the likelihood (and normalized), it produces a posterior distribution which is of the same type as the prior. And a lot of standard distributions have a conjugate prior. E.g.

  • For a Bernoulli distribution, i.e. $X_i$ i.i.d. with distribution $\mathcal{B}(p)$, assume that the a priori distribution of $p$ is $\mathcal{B}eta(\alpha,\beta)$; then the a posteriori distribution of $p$ is still Beta, with parameters
$$\left(\alpha+\sum_{i=1}^n x_i,\ \beta+n-\sum_{i=1}^n x_i\right)$$
(a quick numerical check of this case is sketched after this list).
  • For a binomial distribution, $\mathcal{B}(m,p)$, assume that the a priori distribution of $p$ is $\mathcal{B}eta(\alpha,\beta)$; then the a posteriori distribution of $p$ is still Beta, with parameters
$$\left(\alpha+\sum_{i=1}^n x_i,\ \beta+\sum_{i=1}^n (m-x_i)\right)$$
  • For a Negative Binomial distribution, $\mathcal{NB}(r,p)$, assume that the a priori distribution of $p$ is $\mathcal{B}eta(\alpha,\beta)$; then the a posteriori distribution of $p$ is still Beta, with parameters
$$\left(\alpha+rn,\ \beta+\sum_{i=1}^n x_i\right)$$
  • For a Poisson distribution, $\mathcal{P}(\lambda)$, assume that the a priori distribution of $\lambda$ is $\mathcal{G}(\alpha,\beta)$; then the a posteriori distribution of $\lambda$ is still gamma, with parameters
$$\left(\alpha+\sum_{i=1}^n x_i,\ \beta+n\right)$$
  • For a Geometric distribution, $\mathcal{G}eo(p)$, assume that the a priori distribution of $p$ is $\mathcal{B}eta(\alpha,\beta)$; then the a posteriori distribution of $p$ is still Beta, with parameters
$$\left(\alpha+n,\ \beta+\sum_{i=1}^n x_i\right)$$
  • For an Exponential distribution, $\mathcal{E}(\lambda)$, assume that the a priori distribution of $\lambda$ is $\mathcal{G}(\alpha,\beta)$; then the a posteriori distribution of $\lambda$ is still Gamma, with parameters
$$\left(\alpha+n,\ \beta+\sum_{i=1}^n x_i\right)$$
  • For a Gaussian distribution, $\mathcal{N}(\mu,\sigma^2)$ with $\sigma^2$ known, assume that $\mu\sim\mathcal{N}(\mu_0,\sigma_0^2)$; then the a posteriori distribution of $\mu$ is still Gaussian, with variance $\left(\sigma_0^{-2}+n\sigma^{-2}\right)^{-1}$ and mean
$$\left(\sigma_0^{-2}+n\sigma^{-2}\right)^{-1}\left(\frac{\mu_0}{\sigma_0^{2}}+\frac{\sum_{i=1}^n x_i}{\sigma^{2}}\right)$$
  • For a gamma distribution, $\mathcal{G}(k,\lambda)$ with shape $k$ known, assume that the a priori distribution of $\lambda$ is $\mathcal{G}(\alpha,\beta)$; then the a posteriori distribution of $\lambda$ is still Gamma, with parameters
$$\left(\alpha+nk,\ \beta+\sum_{i=1}^n x_i\right)$$
  • For a Pareto distribution, with density $f(x)=\theta x_m^{\theta}/x^{\theta+1}$ for $x\geq x_m$ ($x_m$ known), assume that the a priori distribution of $\theta$ is $\mathcal{G}(\alpha,\beta)$; then the a posteriori distribution of $\theta$ is still Gamma, with parameters
$$\left(\alpha+n,\ \beta+\sum_{i=1}^n \ln\frac{x_i}{x_m}\right)$$
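As a quick numerical sanity check of the Bernoulli/Beta case above (with simulated data, purely for illustration), the closed-form posterior can be compared with a brute-force computation of prior times likelihood:

set.seed(1)
x = rbinom(20, size=1, prob=.3)    # a Bernoulli sample
a = 2; b = 2                       # Beta(a,b) prior on p
p = seq(.001,.999,by=.001)
post = dbeta(p,a,b) * p^sum(x) * (1-p)^(length(x)-sum(x))        # prior times likelihood
post = post / sum(post*.001)                                     # normalized numerically
max(abs(post - dbeta(p, a+sum(x), b+length(x)-sum(x))))          # small: same density as the Beta posterior

The difference is only the numerical integration error.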

  • The non-informative or vague house

So far, the choice of the prior was not neutral, in the sense that the a priori of the statistician will have an influence on the a posteriori distribution (we’ll discuss that point further later on). We could be interested in the case where the prior is, somehow, neutral. A famous example is the case of the Bernoulli $\mathcal{B}(p)$ distribution. Between 1745 and 1784, Pierre Simon Laplace observed 393,386 births of boys versus 377,555 births of girls (or 251,527 boys versus 241,945 girls if we consider the initial article, for births before 1770). He wanted to quantify the probability that p, the probability of having a boy, exceeds 1/2. He assumed that, a priori, p was uniform on the unit interval, claiming that this was as neutral as possible. But it is not as neutral as it sounds.
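With that uniform prior (i.e. a Beta(1,1) prior, which is conjugate for the Bernoulli model), the a posteriori distribution of p is Beta(1+393386, 1+377555), and the probability that p exceeds 1/2 can be computed in one line,

pbeta(1/2, 1+393386, 1+377555, lower.tail=FALSE)   # posterior probability that p > 1/2

which is, numerically, indistinguishable from 1.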
The idea of a noninformative prior is that we should get an equivalent result when considering a transformed parameter. So assume that the parameter is no longer $\theta$, but $\eta$, where $\eta=g(\theta)$ (for some bijective transformation $g$). The distribution (density) of $\eta$ is then

$$\tilde\pi(\eta)=\pi\left(g^{-1}(\eta)\right)\left|\frac{d}{d\eta}g^{-1}(\eta)\right|$$

Let $I(\theta)$ denote the Fisher information of parameter $\theta$,

$$I(\theta)=\mathbb{E}\left[\left(\frac{\partial\log f(X\mid\theta)}{\partial\theta}\right)^2\right]$$

Then, the Fisher information of parameter $\eta$ is

$$\tilde I(\eta)=I\left(g^{-1}(\eta)\right)\left(\frac{d}{d\eta}g^{-1}(\eta)\right)^2$$

which can be written

$$\tilde I(\eta)=I(\theta)\left(\frac{d\theta}{d\eta}\right)^2$$

So if we want a prior invariant by transformation of the parameter (i.e. $\tilde\pi(\eta)\propto\sqrt{\tilde I(\eta)}$ whenever $\pi(\theta)\propto\sqrt{I(\theta)}$), it seems natural to consider

$$\pi(\theta)\propto\sqrt{I(\theta)}$$

or at least something proportional to that square root, since we want to get a density.
Thus, following Jeffreys’ principle, the prior distribution for a single parameter is noninformative if it is taken proportional to the square root of Fisher’s information measure. For those who want to go further, see Noninformative Priors Do Not Exist or A Catalog of Noninformative Priors.

  • For the Poisson distribution, the Jeffreys prior for the rate parameter $\lambda$ is
$$\pi(\lambda)\propto\frac{1}{\sqrt{\lambda}}$$
  • For the Bernoulli distribution, the Jeffreys prior for the probability parameter $p$ is
$$\pi(p)\propto\frac{1}{\sqrt{p(1-p)}}$$

This is the arcsine distribution, i.e. a beta distribution with both parameters equal to 1/2.
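To visualize this Jeffreys prior (a short sketch), one can simply plot the Beta(1/2,1/2) density, which puts more mass near 0 and 1 than Laplace’s uniform prior,

p = seq(.01,.99,by=.01)
plot(p, dbeta(p,1/2,1/2), type="l", lwd=2, col="red", ylab="prior density")
lines(p, dunif(p), lty=2)   # the uniform prior, for comparison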

  • The expert house

The idea is quite simple. We need a prior distribution that encodes what an expert knows (or believes) about the parameter. But assume that we have already seen a similar problem before. For instance, I remember Eric Parent mentioning the case of river flow models. If we have two similar rivers, then it might be interesting to use information on one river as a priori information for the second one, for instance by using the a posteriori distribution obtained on the first river as the a priori distribution for the second one.

I guess it is also possible to use meta-regression to get an aggregation of experts’ opinions.

To go further on bayesian statistics, I suggest having a look at Albus Dumbledore’s blog, here, or the blog of some PhD (and postdoc) students at Hogwarts, there. Or if you can wait, a dozen other posts will come soon (well, let’s hope so). The next one will probably be on a posteriori calculations (which is the natural step, since we’ve seen the a priori choice here).