A few years ago, Ryan J. Tibshirani published “Fast computation of the median by successive binning” with a nice lemma,
The Mean Is Within One Standard Deviation of Any Median
And a rather nice and simple proof is given.
More formally, if $X$ is a random variable with mean $\mu$, variance $\sigma^2$, and median $m$, then $m\in[\mu-\sigma,\ \mu+\sigma]$. Write

$$|\mu-m|=\bigl|\mathbb{E}(X-m)\bigr|$$

thus, from Jensen's inequality,

$$\bigl|\mathbb{E}(X-m)\bigr|\leq\mathbb{E}\bigl|X-m\bigr|$$

and because the median minimizes the function $a\mapsto\mathbb{E}|X-a|$,

$$\mathbb{E}\bigl|X-m\bigr|\leq\mathbb{E}\bigl|X-\mu\bigr|$$

and because $|a|=\sqrt{a^2}$, we can write

$$\mathbb{E}\bigl|X-\mu\bigr|=\mathbb{E}\sqrt{(X-\mu)^{2}}$$

and if we use the concave version of Jensen's inequality,

$$|\mu-m|\leq\sqrt{\mathbb{E}(X-\mu)^{2}}$$

i.e.

$$|\mu-m|\leq\sigma$$
Nice proof, isn’t it?
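To see the inequality at work, here is a quick numerical illustration in Python (a minimal sketch; the skewed distributions below are arbitrary choices, just to make the mean and the median differ):

```python
import numpy as np

rng = np.random.default_rng(42)

# check |mean - median| <= standard deviation on a few skewed samples
samples = {
    "exponential": rng.exponential(scale=1.0, size=100_000),
    "log-normal": rng.lognormal(mean=0.0, sigma=1.0, size=100_000),
    "pareto": rng.pareto(a=3.0, size=100_000),  # finite variance needs a > 2
}

for name, sample in samples.items():
    mu, m, sigma = sample.mean(), np.median(sample), sample.std()
    print(f"{name:12s} |mu - m| = {abs(mu - m):.4f} <= sigma = {sigma:.4f}")
```

For the exponential distribution the exact values are $\mu=1$, $m=\ln 2\approx 0.693$ and $\sigma=1$, so $|\mu-m|\approx 0.307$, comfortably within the bound.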
Of course, the result is quite old, almost 100 years old… it seems that it first appeared in
Harold Hotelling & Leonard M. Solomons (1932) “The Limits of a Measure of Skewness”, Annals of Mathematical Statistics, 3(2): 141-142

There were also a couple of references in the late 70’s and early 80’s,
Stephen A. Book & Lawrence Sher (1979) “How Close Are the Mean and the Median?”, The Two-Year College Mathematics Journal, 10(3): 202-204
Warren Page & V. N. Murty (1982) “Nearness Relations Among Measures of Central Tendency and Dispersion: Part 1”, The Two-Year College Mathematics Journal, 13(5): 315-327
but then, in the 90’s, Colm O’Cinneide revisited the result in “The Mean Is Within One Standard Deviation of Any Median” (The American Statistician, 44(4): 292-293, 1990), pointing back to the old paper by Harold Hotelling and Leonard Solomons.

Let $\theta$ denote the parameter (of our parametric model, e.g. the tail index), and we would like to know whether $\theta\in\Theta_0$ (where, in the context of finite versus infinite mean, $\Theta_0$ is the set of parameter values for which the mean is finite). I.e. either $\theta$ belongs to $\Theta_0$ or to its complementary $\Theta_1=\Theta\setminus\Theta_0$. Consider the maximum likelihood estimator $\widehat{\theta}$, i.e.

$$\widehat{\theta}=\underset{\theta\in\Theta}{\text{argmax}}\ \log\mathcal{L}(\theta)$$

and let $\widehat{\theta}_0$ and $\widehat{\theta}_1$ denote the constrained maximum likelihood estimators on $\Theta_0$ and $\Theta_1$ respectively,

$$\widehat{\theta}_0=\underset{\theta\in\Theta_0}{\text{argmax}}\ \log\mathcal{L}(\theta)\quad\text{and}\quad\widehat{\theta}_1=\underset{\theta\in\Theta_1}{\text{argmax}}\ \log\mathcal{L}(\theta)$$

Then either $\widehat{\theta}\in\Theta_0$ and $\widehat{\theta}_0=\widehat{\theta}$ (on the left), or $\widehat{\theta}\in\Theta_1$ and $\widehat{\theta}_1=\widehat{\theta}$ (on the right).
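As an illustration, here is a minimal Python sketch, assuming a strict Pareto model with survival function $\mathbb{P}(X>x)=x^{-\theta}$ on $[1,\infty)$ (my choice for concreteness, not specified above), for which the mean is finite if and only if $\theta>1$, so that $\Theta_0=(1,\infty)$ and $\Theta_1=(0,1]$. Since the Pareto log-likelihood is unimodal in $\theta$, the constrained maximizers are simply the projections of the unconstrained MLE onto each set:

```python
import numpy as np

rng = np.random.default_rng(1)

# strict Pareto sample: P(X > x) = x^(-theta) on [1, infinity)
theta_true = 1.4                                # > 1, so the true mean is finite
n = 500
x = (1 - rng.random(n)) ** (-1 / theta_true)    # inverse-cdf simulation

def log_lik(theta, x):
    """Pareto log-likelihood: n log(theta) - (theta + 1) * sum(log x)."""
    return len(x) * np.log(theta) - (theta + 1) * np.log(x).sum()

# unconstrained MLE (closed form for the strict Pareto model)
theta_hat = len(x) / np.log(x).sum()

# constrained MLEs: the log-likelihood increases up to theta_hat and then
# decreases, so the constrained maximum is the projection onto each set
theta_hat_0 = max(theta_hat, 1.0)   # Theta_0 = (1, infinity): finite mean
theta_hat_1 = min(theta_hat, 1.0)   # Theta_1 = (0, 1]: infinite mean

lr = 2 * (log_lik(theta_hat_0, x) - log_lik(theta_hat_1, x))
print(f"theta_hat = {theta_hat:.3f}, LR-type statistic = {lr:.3f}")
```

When $\widehat{\theta}\in\Theta_0$, one of the two constrained estimators sits at the boundary $\theta=1$, which is exactly the situation pictured above.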

Recall that $M_n=\max\{X_1,\dots,X_n\}$ is the maximum of $n$ random variables i.i.d. uniformly distributed on the unit interval $[0,1]$. I gave a hint last week about the cumulative distribution function for the maximum, i.e.

$$\mathbb{P}(M_n\leq x)=\mathbb{P}(X_1\leq x,\dots,X_n\leq x)=\prod_{i=1}^n\mathbb{P}(X_i\leq x)=x^n$$

for $x\in[0,1]$.
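A quick Monte Carlo check of that formula (a minimal sketch in Python, with an arbitrary $n$):

```python
import numpy as np

rng = np.random.default_rng(0)

n, n_sim = 10, 200_000
# maxima of n i.i.d. uniform variables, replicated n_sim times
M = rng.random((n_sim, n)).max(axis=1)

for x in (0.5, 0.8, 0.9, 0.95):
    print(f"P(M_n <= {x}): empirical {np.mean(M <= x):.4f} vs x^n = {x**n:.4f}")
```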



Similarly, recall that if $X$ is a random variable with finite variance, then

$$\frac{\overline{X}_n-\mathbb{E}[X]}{\sqrt{\text{Var}(X)/n}}\ \overset{\mathcal{L}}{\longrightarrow}\ \mathcal{N}(0,1)$$

as $n\to\infty$. Here, it is then possible to get

$$\mathbb{P}\bigl(n(1-M_n)\leq x\bigr)=1-\mathbb{P}\!\left(M_n\leq 1-\frac{x}{n}\right)=1-F\!\left(1-\frac{x}{n}\right)^{n}=1-\left(1-\frac{x}{n}\right)^{n}$$

then

$$1-\left(1-\frac{x}{n}\right)^{n}\ \longrightarrow\ 1-e^{-x}$$

(see the proof of the central limit theorem we got a few days ago, where the same limit $(1+t/n)^n\to e^t$ shows up). Here $F$ is the cumulative distribution of the $X_i$‘s (the random variables used to build up the maximum). This works since the $X_i$‘s are uniform on $[0,1]$, so that $F(x)=x$; hence $n(1-M_n)$ converges in distribution to a standard exponential random variable.
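And a short simulation suggesting that exponential limit (again a sketch; note that since $F_{M_n}(x)=x^n$, the maximum can be simulated directly by inverting that cdf, which avoids storing $n\times n_{\text{sim}}$ uniform draws):

```python
import numpy as np

rng = np.random.default_rng(0)

n, n_sim = 1_000, 1_000_000
# since P(M_n <= x) = x^n, M_n can be simulated directly as U^(1/n)
M = rng.random(n_sim) ** (1 / n)
Z = n * (1 - M)      # rescaled gap between the maximum and 1

for x in (0.5, 1.0, 2.0):
    print(f"P(n(1 - M_n) <= {x}): empirical {(Z <= x).mean():.4f} "
          f"vs 1 - exp(-x) = {1 - np.exp(-x):.4f}")
```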
