
Bourgade's Central Limit Theorem

Updated 14 January 2026
  • Bourgade's Central Limit Theorem is an extension of Selberg’s classical result, establishing multivariate Gaussian limit laws for log-shifted Dirichlet L-functions.
  • It employs analytic techniques such as truncated Euler products and zero-density estimates to derive quantitative convergence rates in the Dudley metric.
  • The theorem reveals that arithmetic correlations and logarithmic shifts critically shape the covariance structure, influencing the convergence speed of the Gaussian approximation.

Bourgade’s Central Limit Theorem refers to an advanced multivariate central limit theorem (CLT) for log values of shifted Dirichlet $L$-functions, extending the classical Selberg CLT and revealing intricate dependence structures in random vectors built from objects of analytic number theory. The statement and quantitative rates of convergence for Bourgade’s CLT for Dirichlet $L$-functions are elucidated by Hsu–Wong (Hsu et al., 6 Jan 2026), with refined analysis based on works of Radziwiłł–Soundararajan and Roberts. The theorem establishes Gaussian limit laws for vectors assembled from $\log|L(\tfrac12+i(U+\alpha_j),\chi_j)|$ for independent Dirichlet characters $\chi_j$ and real shifts $\alpha_j$, demonstrating how correlations between components critically affect convergence rates.

1. Model Setup and Formulation

Let $N\geq1$ be fixed. Consider primitive Dirichlet characters $\chi_1,\dots,\chi_N$ (with respective moduli $q_j$), and shifts $\alpha_1(T),\dots,\alpha_N(T)$ satisfying $|\alpha_j(T)|\leq T/2$. For a random $U=U_T$ uniformly distributed in $[T,2T]$, define the $N$-vector random variable

$$X_T(U) = \left( X_{j,T}(U) \right)_{1\leq j \leq N}, \qquad X_{j,T}(U) = \frac{\log | L(\tfrac12+i(U+\alpha_j), \chi_j) |}{\sqrt{\log\log T}}.$$

Bourgade’s theorem (originally established in the zeta-function setting) asserts that, under spacing conditions on the shifts $\alpha_j$, the vector $X_T(U)$ converges in distribution as $T\to\infty$ to a centered $N$-variate Gaussian $\widetilde{X}\sim N(0,K)$, where $K$ is the covariance matrix determined by the log-distances of the shifts and the character-twist relations:

  • For $1\leq i<j\leq N$, set $\delta_{i,j}=1$ if $\chi_i\overline{\chi_j}$ is principal, and $\delta_{i,j}=0$ otherwise.
  • $c_{i,j}\in[0,1]$ is defined by the matching condition

$$\log\left(\frac{1}{|\alpha_i-\alpha_j|}\right) = c_{i,j}\log\log T + O\big((\log\log\log T)^\varepsilon\big).$$

  • Covariance matrix entries: $k_{i,i}=1$, and $k_{i,j}=\delta_{i,j}c_{i,j}$ for $i\neq j$.

This formalizes the multivariate CLT for shifted Dirichlet $L$-functions, conditional on $K$ being positive definite.
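The construction of $K$ from the twist indicators $\delta_{i,j}$ and the log-distance exponents $c_{i,j}$ can be sketched in a few lines. The numerical data below are hypothetical, chosen only to illustrate the shape of the matrix and the positive-definiteness condition under which the theorem is stated.

```python
def covariance_matrix(delta, c):
    """Limiting covariance matrix K: k_ii = 1 and k_ij = delta_ij * c_ij (i != j)."""
    n = len(delta)
    return [[1.0 if i == j else delta[i][j] * c[i][j] for j in range(n)]
            for i in range(n)]

def det(m):
    """Determinant by cofactor expansion (fine for small N)."""
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] *
               det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(len(m)))

def is_positive_definite(K):
    """Sylvester's criterion: all leading principal minors are positive."""
    n = len(K)
    return all(det([row[:k] for row in K[:k]]) > 0 for k in range(1, n + 1))

# Hypothetical data: components 1 and 2 are shifts of the same primitive
# character (delta = 1) with log-distance exponent c = 0.5, while
# component 3 is arithmetically independent of the other two.
delta = [[0, 1, 0], [1, 0, 0], [0, 0, 0]]
c = [[0, 0.5, 0.9], [0.5, 0, 0.2], [0.9, 0.2, 0]]
K = covariance_matrix(delta, c)
print(K)                        # [[1.0, 0.5, 0.0], [0.5, 1.0, 0.0], [0.0, 0.0, 1.0]]
print(is_positive_definite(K))  # True -- the CLT is conditional on this
```

Note that the $c_{i,j}$ entry for the third component is irrelevant to $K$ because $\delta_{i,j}=0$ there: only "like" characters contribute off-diagonal covariance.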

2. Hypotheses and Dependence Structure

The hypotheses require each $|\alpha_j| \leq T/2$ and pairwise spacings $|\alpha_i-\alpha_j| \geq \exp(-O((\log\log T)^\varepsilon))$. The matrix $K$ captures both arithmetic dependence (via $\delta_{i,j}$, the "principal twist" condition) and geometric dependence (via $c_{i,j}$, the log-distance). Hence only "like" characters (those sharing a primitive part) correlate, and $k_{i,j}$ measures the strength of that correlation. With distinct quadratic characters, partial correlations of strength $c_{i,j}/2$ arise. The dependence encoded in $K$ directly governs both the joint limiting distribution and the rate of convergence.

3. Rates of Convergence in Dudley Metric

Rates are quantified via the Dudley (bounded-Lipschitz) metric $d_D(\cdot,\cdot)$ on $N$-vectors, parameterized by bounds $L$ and $M$ on the Lipschitz constants and sup-norms of the test functions:

  • General dependent case (Theorem 1.3):

For any $0<\varepsilon_1<\varepsilon_2$ with $\varepsilon_1+\varepsilon_2<1$ and all sufficiently large $T$,

$$d_D( X_T, \widetilde{X} ) \ll_{K,N} \frac{L}{(\log\log\log T)^{\varepsilon_1}} + M (\log\log\log T)^{N(\varepsilon_1+\varepsilon_2)} \exp\left( -\tfrac12(\log\log\log T)^{\varepsilon_1+\varepsilon_2} \right),$$

so $d_D \to 0$ as $T\to\infty$.

  • Independent case ($\Delta=0$, $\delta=0$) for $N=1,2,3$ (Theorem 1.4):

    • For $N=1,2$:

$$d_D(X_T, \widetilde{X}) \ll \frac{L\,N(\log\log\log T)^2}{\sqrt{\log\log T}} + \frac{M}{(\log\log T)^{1-\varepsilon-\varepsilon_3}}.$$

    • For $N=3$:

$$d_D(X_T, \widetilde{X}) \ll \frac{L\,N(\log\log\log T)^2}{\sqrt{\log\log T}} + \frac{M}{(\log\log T)^{\tfrac12-\varepsilon_4}}.$$

These bounds recover Selberg’s $O((\log\log T)^{-1/2})$ rate in the univariate case and extend it to independent multivariate settings.
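For reference, a common form of the bounded-Lipschitz distance over the test class appearing in these bounds is the following; the exact normalization used by Hsu–Wong may differ:

```latex
% Dudley (bounded-Lipschitz) distance between R^N-valued random vectors X, Y,
% taken over test functions f with Lipschitz constant at most L and
% sup-norm at most M:
d_D(X, Y) \;=\; \sup_{\substack{\operatorname{Lip}(f) \le L \\ \|f\|_\infty \le M}}
\bigl| \mathbb{E}\, f(X) - \mathbb{E}\, f(Y) \bigr|
```

The parameters $L$ and $M$ therefore appear linearly in the rate bounds above: enlarging the test class weakens the guarantee proportionally.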

4. Proof Architecture and Approximation Steps

The convergence analysis employs a seven-step approximation scheme:

  1. Express $X_T$ via $\log|L(\tfrac12+i(U+\alpha_j),\chi_j)|/\sqrt{\log\log T}$.
  2. Shift $s=\tfrac12+i(U+\alpha_j)$ to $\sigma_0+i(U+\alpha_j)$ with $\sigma_0=\tfrac12+W/\log T$ and $W\sim(\log\log\log T)^2$; the Dudley error is $O(LW/\sqrt{\log\log T})$.
  3. Truncate the Euler product to a Dirichlet polynomial $M(s)$ of controlled length $T^{o(1)}$, using zero-density estimates (Radziwiłł–Soundararajan) to manage an error of $O(M(\Delta+\delta)/\sqrt{\log\log T})$.
  4. Approximate $\log M^{-1}$ by $P(s)=\sum_{p\leq X}\chi(p)p^{-s}$ via Mertens-type/log expansions.
  5. Renormalize $P(s)$ to match the covariance structure, yielding $R^1_T$.
  6. Apply cumulant/moment bounds (Roberts’ method) to compare the characteristic functions of $R^1_T(U)$ and the target Gaussian, yielding the core exponential error term.
  7. Use matrix perturbation arguments (Taylor expansion and determinant comparison) to pass from the Gaussian with approximate covariance to $\widetilde{X}\sim N(0,K)$.

Steps 3–4 exploit moment methods and avoid zeros of $L$; step 6 generalizes Roberts’ Stein/characteristic-function technique to multivariate dependencies, with dimensional tracking.
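The Euler-product truncation behind steps 3–4 can be illustrated numerically at a real point $\sigma>1$, where every sum converges absolutely (near the critical line this absolute convergence fails, which is exactly what the zero-density input compensates for). The sketch below takes the principal-character (zeta) case, $\chi\equiv1$, and compares the first-order prime sum $P(\sigma)=\sum_{p\le X}p^{-\sigma}$ with $\log\zeta(\sigma)$; the gap is the prime-power contribution absorbed by the Mertens-type expansion in step 4. The choices $\sigma=1.5$ and $X=10^5$ are illustrative.

```python
from math import log

def primes_up_to(n):
    """Simple sieve of Eratosthenes."""
    sieve = bytearray([1]) * (n + 1)
    sieve[:2] = b"\x00\x00"
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = bytearray(len(sieve[p * p :: p]))
    return [p for p in range(2, n + 1) if sieve[p]]

sigma = 1.5     # illustrative real point with sigma > 1
X = 10 ** 5     # truncation length of the prime sum

# First-order prime sum P(sigma) = sum_{p <= X} p^{-sigma}
P = sum(p ** -sigma for p in primes_up_to(X))

# Direct estimate of log zeta(sigma) from the Dirichlet series of zeta
log_zeta = log(sum(n ** -sigma for n in range(1, 10 ** 6)))

# The difference is the higher prime-power terms sum_{p, k>=2} p^{-k sigma}/k,
# which stay bounded and are handled by the log expansion.
print(P, log_zeta, log_zeta - P)
```

Running this shows $P(1.5)\approx0.85$ against $\log\zeta(1.5)\approx0.96$, a discrepancy of about $0.11$ coming entirely from prime powers; on the critical line the analogous comparison is what forces the full apparatus of steps 2–7.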

5. Influence of Dependence on Rates

The rate in Theorem 1.3 is governed by the number of components $N$ and the dependence structure, as encoded by $k_{i,j}=\delta_{i,j}c_{i,j}$. The bound decays only triply logarithmically in its first term, and exponentially in $(\log\log\log T)^{\varepsilon_1+\varepsilon_2}$ in its second, with the pre-factor $(\log\log\log T)^{N(\varepsilon_1+\varepsilon_2)}$ arising from the multivariate setting and the degree of correlation. In contrast, independence ($\Delta=\delta=0$) leads to much faster rates, polynomial in $(\log\log T)^{-1}$, matching Selberg’s original result in the univariate case and extending it to $N=2,3$. This demonstrates that even mild logarithmic correlations among components can drastically slow multivariate convergence.
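A back-of-envelope computation makes the contrast concrete. The exponent $\varepsilon_1=1/4$ and the omission of all implied constants and $M$-terms are illustrative only; the point is the scale of the leading decay factors.

```python
from math import log, sqrt

# Compare the leading decay factors of the two regimes for astronomically
# large T (constants and M-terms ignored; eps_1 = 0.25 is illustrative).
for exponent in (30, 100, 1000):
    log_T = exponent * log(10)          # T = 10**exponent
    lll_T = log(log(log_T))             # logloglog T
    dependent = lll_T ** -0.25          # ~ (logloglog T)^(-eps_1), Theorem 1.3
    independent = 1 / sqrt(log(log_T))  # ~ (loglog T)^(-1/2), Selberg-type
    print(f"T = 10^{exponent}: dependent ~ {dependent:.3f}, "
          f"independent ~ {independent:.3f}")
```

Even at $T=10^{1000}$ the dependent-case factor is still above $0.8$, while the Selberg-type factor has dropped below $0.4$: the triple-logarithm regime is, for any practical range of $T$, essentially stationary.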

The Bourgade CLT generalizes the classical Selberg result to multivariate, correlated settings and to Dirichlet $L$-functions. The Hsu–Wong results elaborate this theorem with precise metric rates, fully quantifying the impact of dependence and of the number of components. The reliance on recent advances of Radziwiłł–Soundararajan permits effective control of the error via zero-density estimates, while Roberts’ techniques enable detailed moment/cumulant bounds in dependent regimes. A plausible implication is that the structure of arithmetic and geometric correlations in high-dimensional vector-valued analytic number theory profoundly influences the practical speed of Gaussian approximation in central limit phenomena (Hsu et al., 6 Jan 2026).
