One of my son’s favorite nature documentaries explains how deep-sea explorers have found, within the last few decades, forms of life which are completely different from anything seen on the surface of the earth, or at more accessible depths in the ocean.
Now, in this spirit of adventure, I will discuss a recent preprint of J. Jacod, A. Nikeghbali and myself, where we try to find meaningful behavior in probabilistic phenomena “beyond the standard limit theorems” — in particular, beyond the Central Limit Theorem. Moreover (and this is where, as a number theorist, I find it most interesting), we find traces of these forms of life in a number of arithmetic situations. One was expected: it involves the analogy between Random Matrices and the Riemann zeta function, and it was the basic motivation behind this work (already explored previously, in a slightly different way, by A. Nikeghbali and M. Yor); the second was also natural: it is the finite-field analogue of the first, where the Katz-Sarnak philosophy and techniques can be applied; and the third was a little more unexpected: it lies in the context of the Erdös-Kác Theorem on the approximate normal distribution of the number of prime divisors of integers.
As often happens in mathematics, the example we noticed last is rather simpler to introduce and explain than the others, so I’ll start with it. The Erdös-Kác Theorem states that for any real numbers a and b, with a < b, we have
$latex \lim_{N\rightarrow +\infty}\ \frac{1}{N}|\{n\leq N\ |\ a<(\omega(n)-\log\log N)/\sqrt{\log\log N}<b \}|=\frac{1}{\sqrt{2\pi}}\int_a^b{e^{-t^2/2}dt},$
where ω(n) is the number of distinct prime divisors of n. This is phrased, informally, as saying that ω(n) is asymptotically distributed like a normal random variable with mean and variance both equal to (log log n), but more properly it is really the convergence in law of a normalized version
$latex \frac{\omega(n)-\log\log N}{\sqrt{\log\log N}}\quad\text{for}\ n\leq N$
of ω(n) to a standard normal random variable, with mean 0 and variance 1. As such, this looks similar to the Central Limit Theorem, and there are indeed proofs which proceed explicitly using this idea (see this for instance).
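For the curious, here is a small numerical illustration of the theorem (a sketch only: the cutoff N and the interval (a, b) below are arbitrary choices of mine, and since the convergence is governed by log log N, which grows extremely slowly, the match is necessarily rough at any feasible N):

```python
# Empirical check of the Erdös-Kác theorem at a (necessarily modest) cutoff N.
import math

N = 10**5

# Sieve: omega[n] counts the distinct prime divisors of n.
omega = [0] * (N + 1)
for p in range(2, N + 1):
    if omega[p] == 0:  # no smaller prime divides p, so p is prime
        for m in range(p, N + 1, p):
            omega[m] += 1

loglog = math.log(math.log(N))
normalized = [(omega[n] - loglog) / math.sqrt(loglog) for n in range(2, N + 1)]

# Empirical proportion in (a, b) versus the Gaussian prediction.
a, b = -1.0, 1.0
empirical = sum(a < x < b for x in normalized) / N
gaussian = 0.5 * (math.erf(b / math.sqrt(2)) - math.erf(a / math.sqrt(2)))
print(empirical, gaussian)
```

The two printed numbers differ noticeably, as expected: log log(10^5) is only about 2.44, so the normal approximation is still quite crude at this range.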
Our main point is that, in this example and in others, one can extract meaningful behavior without doing the renormalization. And this is where a new type of convergence is necessary, since ω(n) in itself does not have a limiting distribution (when looked at in the same sense as implicitly above: taking n ≤ N with N going to infinity).
This new type of convergence is expressed in a form involving the characteristic function (i.e., Fourier transform) of random variables. For the special case of ω(n), it is the following formula: there exists a continuous function Φ(u), defined for all real numbers u, with Φ(0)=1, and real numbers λN such that the limit
$latex (\diamond)\ \ \ \ \ \ \ \lim_{N\rightarrow +\infty}\ \ \exp(\lambda_N(1-e^{iu}))\ \ \frac{1}{N}\sum_{n\leq N}{\exp(iu\omega(n))}=\Phi(u)$
exists for all u, and is uniform on compact subsets. In fact, this is proved with an explicit formula for the limiting function and with
$latex \lambda_N=\log\log N.$
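For concreteness, the quantity on the left-hand side of (◊) is easy to compute numerically for a fixed u and a few values of N (a purely illustrative sketch; since λN = log log N grows so slowly, these values of N are far too small to approximate the limit Φ(u) with any precision):

```python
# Evaluating the renormalized sum in (◊) for u = 1 and a few cutoffs N.
import cmath
import math

def omega_table(N):
    """omega[n] = number of distinct prime divisors of n, via a sieve."""
    omega = [0] * (N + 1)
    for p in range(2, N + 1):
        if omega[p] == 0:  # no smaller prime divides p, so p is prime
            for m in range(p, N + 1, p):
                omega[m] += 1
    return omega

u = 1.0
for N in (10**3, 10**4, 10**5):
    om = omega_table(N)
    lam = math.log(math.log(N))
    # (1/N) * sum over n <= N of exp(i u omega(n))
    avg = sum(cmath.exp(1j * u * k) for k in om[1:]) / N
    # Renormalization by the inverse Poisson characteristic function.
    value = cmath.exp(lam * (1 - cmath.exp(1j * u))) * avg
    print(N, value)
```

The successive values drift slowly, as one should expect from a limit taken along log log N.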
From this result, an easy argument with normalization shows that the normalized characteristic functions
$latex \frac{1}{N}\sum_{n\leq N}{\exp(iu(\omega(n)-\log\log N)/\sqrt{\log\log N})}$
do converge to that of a standard normal random variable, and this recovers exactly the Erdös-Kác theorem, using the characterization of convergence in law by means of characteristic functions (P. Lévy’s theorem).
The first factor of the left-hand side of the expression (◊) is the inverse of the characteristic function of a Poisson random variable with parameter λN, in other words of a random variable XN such that
$latex \mathbf{P}(X_N=k)=e^{-\lambda_N}\frac{\lambda_N^k}{k!}$
for non-negative integers k. So, heuristically moving this factor to the other side, we have approximately
$latex \frac{1}{N}\sum_{n\leq N}{\exp(iu\omega(n))}\simeq \exp(\lambda_N(e^{iu}-1)) \Phi(u)=\mathbf{E}(e^{iuX_N})\Phi(u)$
and using the basic properties of characteristic functions, things behave as if ω(n), for n ≤ N, were a random variable which is the sum of the Poisson variable XN with parameter λN (which tends to infinity) and some (independent) term which has a limit as N goes to infinity. However, this is not literally true, because Φ(u) is not the characteristic function of a random variable. But because of this intuition, we say that there is mod-Poisson convergence, i.e., there is convergence, intuitively, modulo addition of Poisson random variables. (This is also not literally correct because the set of Poisson random variables defined on a probability space is not a vector space, so the “modulo” has to be taken with a grain of salt.)
This example gives the paradigm of what we may want to look at: starting with a limit theorem (in distribution) which involves some random variables (Yn) of interest, after the normalization
$latex Z_n=\frac{Y_n-\mathbf{E}(Y_n)}{\sqrt{\mathbf{V}(Y_n)}}$
to have mean 0 and variance 1, we wonder whether a new stronger limiting theorem exists, where the normalization is avoided, but where the characteristic function of Yn is not considered “bare” but is divided by other suitable characteristic functions (which compensate for the possibly increasing variance of the original random variables in a different way than by looking at the standard normalization Zn).
In the case of the number of prime divisors, those were of Poisson type. But in fact, the first case where this behavior was explicitly noticed involved Gaussian random variables (though it is probable that, historically, the case of ω(n) was the first one: the computation of the limit above, but not its interpretation, is already contained in the proof of the Erdös-Kác theorem due to Rényi and Turán; nowadays, it is most easily obtained by a direct application of the Delange-Selberg method).
This other example is a reinterpretation of a famous formula due to J. Keating and N. Snaith, in the setting of Random Matrix theory: let Yn be a random variable distributed like
$latex \det(1-A)$
where A is a random unitary matrix distributed according to Haar measure on the compact unitary group U(n) of size n; then, it follows from the work of Keating-Snaith that the following formula holds for all real u:
$latex \lim_{n\rightarrow +\infty}\ \ \exp(u^2\log n) \mathbf{E}(\exp(iu\log |Y_n|^2))=\frac{G(1+iu)^2}{G(1+2iu)},$
where G is the Barnes (double gamma) function. Here, the first factor on the left-hand side is the inverse of the characteristic function of a normal random variable with mean 0 and variance 2 log n, and so we speak of mod-Gaussian convergence. The variance is increasing, so the heuristic idea is that Yn is distributed like a Gaussian, which is more and more widely spread out, plus some independent converging “core”.
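For readers who want to experiment, here is a small Monte-Carlo sketch of this setting (the size n, the sample count, and the random seed are my arbitrary choices, and the Barnes G limit itself is not computed): one can sample Haar-distributed matrices in U(n) by the standard QR trick applied to a complex Ginibre matrix, and look at log |det(1 − A)|², whose variance should grow like 2 log n; keep in mind that lower-order corrections are quite visible at moderate n.

```python
# Sampling Haar-distributed unitary matrices and the Keating-Snaith statistic.
import numpy as np

rng = np.random.default_rng(0)

def haar_unitary(n):
    """Sample from Haar measure on U(n): QR-factor a complex Ginibre matrix
    and absorb the phases of R's diagonal into Q (the standard correction)."""
    z = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    d = np.diag(r)
    return q * (d / np.abs(d))  # multiply column j of q by the phase of r[j, j]

n, samples = 50, 400
values = []
for _ in range(samples):
    a = haar_unitary(n)
    det = np.linalg.det(np.eye(n) - a)
    values.append(np.log(np.abs(det) ** 2))

# Empirical variance versus the leading-order prediction 2 log n.
print(np.var(values), 2 * np.log(n))
```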
There are by now many proofs of this formula, often stated in terms of the Mellin transform instead of the characteristic function, and again without the interpretation we give; see for instance the probabilistic proof of Bourgade, Hughes, Nikeghbali and Yor.
It is not yet entirely clear what the meaning and generality of the type of limiting phenomenon found in these two examples really are. One can’t help noticing (and being encouraged by) the richness of the environments where it is found: in addition to the Random Matrix setting, the limiting function in the Keating-Snaith formula is one of the conjectural terms for moments of the Riemann zeta function on the critical line. In our preprint, we also prove quite general versions involving the equally rich and classical probabilistic setting of infinitely divisible distributions: I won’t say more because for this part I am much less competent than my co-authors.
We feel that there is a lot to be done to understand and explain these limit theorems, and we hope that this form of convergence can also be directly useful for deriving new results, in particular in arithmetic.