RLCT-Aware Correction for Singular Bayesian Models
- The paper introduces RLCT-aware correction that replaces the classical d/2 penalty with an RLCT-based term, ensuring asymptotically unbiased evidence estimation in singular models.
- The approach leverages algebraic geometry to compute the effective model dimension, yielding corrections that are invariant under reparameterizations.
- Empirical validations in linear-Gaussian rank and subspace models demonstrate that the RLCT correction eliminates the systematic over-penalization observed with traditional Laplace approximations.
RLCT-aware correction is a principled modification to Bayesian model selection criteria in the context of singular models, particularly those with overparameterization or rank-deficiency. Standard techniques like the Laplace approximation and Bayesian Information Criterion (BIC) apply a penalty based on the ambient parameter count, which leads to systematic errors in evidence evaluation when the effective model complexity is strictly lower. The RLCT-aware correction replaces the classical penalty with a term involving the real log canonical threshold (RLCT), yielding an evidence estimate that accurately tracks the true marginal likelihood asymptotics, achieves invariance under reparameterization, and rectifies the asymptotic drift observed under traditional approximations (Rao, 3 Jan 2026).
1. Real Log Canonical Threshold and Effective Dimension
In regular parametric models, where the Fisher information matrix is full rank and of dimension $d$, the Laplace approximation and BIC expansion for the marginal likelihood take the form
$$\log Z_n = \log p(D_n \mid \hat{\theta}) - \frac{d}{2}\log n + O(1).$$
The BIC or Laplace penalty is thus $\frac{d}{2}\log n$, treating $d$ as the number of effective 'directions' or parameters. Singular learning theory, however, demonstrates that for singular models, such as low-rank or overparameterized linear-Gaussian regression, the correct effective dimension is not $d$ but $2\lambda$, where the RLCT $\lambda$ is a rational number quantifying the curvature of the likelihood near a Kullback-Leibler minimizer $\theta^*$.
For these models, the precise asymptotic expansion for the marginal likelihood becomes
$$\log Z_n = \log p(D_n \mid \theta^*) - \lambda \log n + (m-1)\log\log n + O_p(1),$$
where $\lambda$ is the RLCT and $m$ is a multiplicity factor ($m = 1$ in simple linear settings). In regular models, $\lambda = d/2$; in singular models, typically $\lambda < d/2$, signifying that only $2\lambda$ directions induce a $\frac{1}{2}\log n$ penalty each.
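As a numeric illustration of the gap between the two expansions, the following sketch evaluates both penalty terms for hypothetical values $d = 10$, $\lambda = 3.5$, $m = 1$ (illustrative choices, not taken from the paper); with $m = 1$ the difference grows exactly like $(d/2 - \lambda)\log n$:

```python
import numpy as np

d, lam, m = 10, 3.5, 1   # hypothetical ambient dimension, RLCT, multiplicity

def regular_penalty(n):
    # classical Laplace/BIC penalty: (d/2) * log n
    return 0.5 * d * np.log(n)

def singular_penalty(n):
    # singular-theory penalty from the expansion: lambda*log n - (m-1)*log log n
    return lam * np.log(n) - (m - 1) * np.log(np.log(n))

for n in (100, 1_000, 10_000):
    gap = regular_penalty(n) - singular_penalty(n)
    print(f"n={n:6d}  over-penalization = {gap:.2f}")
```

The printed gap is the amount by which the regular penalty overshoots the singular one; it increases with $n$, foreshadowing the drift discussed in the next section.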
2. Limitations of the Laplace Approximation and BIC in Singular Models
The Laplace approximation and BIC, under the assumption of regularity, prescribe the penalty $\frac{d}{2}\log n$. When applied to singular models, they impose an excessive penalty, leading to an error in the estimated marginal likelihood given by
$$\log Z_n^{\mathrm{BIC}} - \log Z_n = -\Big(\frac{d}{2} - \lambda\Big)\log n + O(\log\log n).$$
This excess penalty manifests as a drift in the BIC score that grows linearly in $\log n$ when $\lambda < d/2$, causing systematic over-penalization and divergence from the true marginal-likelihood asymptotics as the sample size increases.
3. RLCT-Aware Correction: Formulation and Properties
The RLCT-aware correction directly amends the penalty term in the evidence estimate. The RLCT-corrected log-evidence is defined as
$$\log Z_n^{\mathrm{RLCT}} = \log p(D_n \mid \hat{\theta}) - \lambda \log n.$$
Unlike the classical Laplace/BIC formula, this correction exactly cancels the leading asymptotic slope when compared to the true expansion,
$$\log Z_n^{\mathrm{RLCT}} - \log Z_n = O(\log\log n),$$
ensuring that the RLCT error remains bounded (and $O(1)$ when $m = 1$) as $n$ grows. In effect, the RLCT-aware correction yields an evidence estimate whose leading asymptotics align precisely with the true marginal likelihood, eliminating the asymptotic drift.
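A minimal sketch of the two estimators side by side (function names and the values of $d$ and $\lambda$ are illustrative, not from the paper). For a singular model with $\lambda < d/2$, the Laplace/BIC estimate drifts below the RLCT-corrected one as $n$ grows:

```python
import numpy as np

def log_evidence_laplace(loglik_hat, d, n):
    """Classical Laplace/BIC estimate: penalizes by the ambient count d."""
    return loglik_hat - 0.5 * d * np.log(n)

def log_evidence_rlct(loglik_hat, lam, n):
    """RLCT-aware estimate: penalizes by the RLCT lambda instead of d/2."""
    return loglik_hat - lam * np.log(n)

# Hypothetical singular setting: lam < d/2, so the estimates diverge with n.
d, lam, loglik = 10, 3.5, 0.0
drift = [log_evidence_laplace(loglik, d, n) - log_evidence_rlct(loglik, lam, n)
         for n in (100, 10_000)]
```

The `drift` values equal $-(d/2 - \lambda)\log n$ and become more negative as $n$ increases, reproducing the over-penalization described above.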
4. Invariance under Reparameterization
A robust feature of the RLCT penalty is its invariance under reparameterization. The RLCT, grounded in algebraic geometry, is a birational invariant: it depends solely on the intrinsic structure of the model family and not on how it is parametrized. For instance, in Gaussian dictionary (subspace) models, both minimal and overcomplete representations of the same $k$-dimensional subspace (e.g., $W \in \mathbb{R}^{p \times k}$ versus $W' \in \mathbb{R}^{p \times k'}$ with $k' > k$) share the same RLCT $\lambda$, and their RLCT-corrected evidences agree up to $O(1)$:
$$\log Z_n^{\mathrm{RLCT}}(\mathcal{M}_{\mathrm{min}}) - \log Z_n^{\mathrm{RLCT}}(\mathcal{M}_{\mathrm{over}}) = O(1).$$
By contrast, the BIC approximation would use $d_{\mathrm{min}}/2$ for the minimal parametrization and $d_{\mathrm{over}}/2 > d_{\mathrm{min}}/2$ for the overcomplete one, favoring the smaller ambient dimension despite both parametrizations defining the same model family.
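The contrast can be sketched numerically. All dimensions below, and the assumption that the shared RLCT equals $pk/2$, are hypothetical illustrations rather than values from the paper:

```python
import numpy as np

p, k, k_over = 8, 2, 5   # hypothetical ambient dim, minimal width, overcomplete width
n = 10_000
lam = p * k / 2          # illustrative RLCT, shared by both parametrizations

d_min, d_over = p * k, p * k_over          # ambient parameter counts
bic_gap = 0.5 * (d_over - d_min) * np.log(n)  # BIC penalizes the overcomplete form extra
rlct_gap = (lam - lam) * np.log(n)            # identical RLCT penalty: the gap is zero
```

`bic_gap` grows without bound in $n$, while `rlct_gap` is identically zero, matching the invariance claim.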
5. Empirical Validation in Linear-Gaussian Rank and Dictionary Models
Closed-form analytic marginal likelihoods can be derived for linear-Gaussian rank and subspace models. For rank-$r$ regression $y = X\beta + \varepsilon$ with design matrix $X \in \mathbb{R}^{n \times d}$, noise $\varepsilon \sim \mathcal{N}(0, \sigma^2 I_n)$, and a Gaussian prior $\beta \sim \mathcal{N}(0, \tau^2 I_d)$, the marginal log-likelihood is computable as
$$\log Z_n = \log \mathcal{N}\!\big(y \mid 0,\; \sigma^2 I_n + \tau^2 X X^{\top}\big).$$
Empirical studies with both rank regression and subspace models follow these steps:
- Generate synthetic data for increasing sample sizes.
- Compute the exact log-evidence and two approximations (Laplace and RLCT-corrected).
- Calculate the residual errors $\Delta_n^{\mathrm{BIC}} = \log Z_n^{\mathrm{BIC}} - \log Z_n$ and $\Delta_n^{\mathrm{RLCT}} = \log Z_n^{\mathrm{RLCT}} - \log Z_n$.
- Estimate their slopes via linear regression on $\log n$.
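The exact log-evidence step above has a closed form under the conjugate Gaussian setting, since marginalizing the prior gives $y \sim \mathcal{N}(0, \sigma^2 I_n + \tau^2 X X^{\top})$. A minimal sketch, with illustrative variable names and a synthetic rank-deficient design:

```python
import numpy as np

def exact_log_evidence(X, y, sigma2=1.0, tau2=1.0):
    # Marginalizing beta ~ N(0, tau2*I) out of y = X beta + eps,
    # eps ~ N(0, sigma2*I), gives y ~ N(0, C) with C = sigma2*I + tau2*X X^T.
    n = y.shape[0]
    C = sigma2 * np.eye(n) + tau2 * (X @ X.T)
    _, logdet = np.linalg.slogdet(C)
    quad = y @ np.linalg.solve(C, y)
    return -0.5 * (n * np.log(2 * np.pi) + logdet + quad)

# Synthetic rank-deficient design: X = A @ B has rank r < d.
rng = np.random.default_rng(0)
n, d, r = 200, 6, 2
X = rng.normal(size=(n, r)) @ rng.normal(size=(r, d))
y = X @ rng.normal(size=d) + rng.normal(size=n)
logZ = exact_log_evidence(X, y)
```

For larger $n$ the dense $n \times n$ covariance becomes expensive; the Woodbury identity would reduce the cost, but the direct form keeps the sketch transparent.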
The results demonstrate:
- In singular (rank-deficient) regression ($r < d$), the BIC error slope is empirically close to $-(d/2 - \lambda)$, matching the theoretical prediction.
- The RLCT error slope remains near zero independent of the rank.
- In regular (full-rank) regression ($r = d$, so $\lambda = d/2$), both error slopes vanish.
- In subspace models, different parametrizations of the same subspace yield log-evidence differences bounded by $O(1)$ under RLCT correction, while BIC penalizes the overcomplete representation excessively.
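The slope-estimation step behind these results can be sketched as an ordinary least-squares fit against $\log n$. The function name and the synthetic error sequences below are illustrative:

```python
import numpy as np

def error_slope(ns, errors):
    # Fit errors ~ a*log(n) + b; the slope a estimates -(d/2 - lambda) for the
    # BIC residual and should be near zero for the RLCT-corrected residual.
    a, b = np.polyfit(np.log(np.asarray(ns, float)), np.asarray(errors, float), 1)
    return a

ns = [100, 300, 1_000, 3_000, 10_000]
bic_errors = [-1.5 * np.log(n) + 0.3 for n in ns]   # synthetic drift, slope -1.5
rlct_errors = [0.3 for _ in ns]                     # synthetic bounded error, slope 0
```

Calling `error_slope(ns, bic_errors)` recovers the planted slope $-1.5$, while `error_slope(ns, rlct_errors)` returns approximately zero.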
6. Implications and Practical Significance
The analytic and empirical results establish the necessity of replacing the conventional penalty $\frac{d}{2}\log n$ with $\lambda \log n$ whenever the model exhibits singularities. The RLCT correction ensures that evidence estimation:
- Remains asymptotically unbiased in singular models;
- Reflects the effective model dimension, not the nominal parameter count;
- Is invariant to overcomplete reparameterizations that preserve the intrinsic model.
This analysis holds in settings where the marginal likelihood is tractable, but the conceptual framework is extensible to broader singular learning-theoretic contexts. A plausible implication is the need for RLCT-aware evidence criteria in any Bayesian model class with potential singularities or identifiability defects (Rao, 3 Jan 2026).