Repulsive g-Priors for Regression Mixtures
Abstract: Mixture regression models are powerful tools for capturing heterogeneous covariate-response relationships, yet classical finite mixtures and Bayesian nonparametric alternatives often suffer from instability or overestimation of the number of clusters when component separability is weak. Recent repulsive priors improve parsimony in density mixtures by discouraging nearby components, but their direct extension to regression is nontrivial, since separation must respect the predictive geometry induced by the covariates. We propose a repulsive g-prior for regression mixtures that enforces separation in the Mahalanobis metric, penalizing components that are indistinguishable in the predictive mean space. This construction preserves conjugacy-like updates while introducing geometry-aware interactions, enabling efficient blocked-collapsed Gibbs sampling. Theoretically, we establish tractable bounds on the normalizing constant, posterior contraction rates, and shrinkage of posterior tail mass on the number of components. Simulations under correlated and overlapping designs demonstrate improved clustering and prediction relative to independent, Euclidean-repulsive, and sparsity-inducing baselines.
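To make the core idea concrete, the following is a minimal sketch of a pairwise repulsive potential based on the Mahalanobis distance between component regression coefficients. The functional form (a product of `1 - exp(-d²/2τ²)` terms over component pairs) and the names `mahalanobis_repulsion`, `tau` are illustrative assumptions, not the paper's exact prior; the key point is only that the metric `Sigma_inv` weights separation by the predictive geometry of the design rather than by raw Euclidean distance.

```python
import numpy as np

def mahalanobis_repulsion(betas, Sigma_inv, tau=1.0):
    """Illustrative log repulsive penalty over mixture components.

    betas     : list of coefficient vectors, one per mixture component
    Sigma_inv : inverse scale matrix defining the Mahalanobis metric
                (e.g., proportional to X'X, so distance is measured in
                the predictive mean space induced by the covariates)
    tau       : repulsion range; smaller tau penalizes only very close pairs

    Returns sum over pairs (j, k) of log(1 - exp(-d_jk^2 / (2 tau^2))),
    where d_jk is the Mahalanobis distance between beta_j and beta_k.
    Identical components give -inf (zero prior mass); well-separated
    components give a value near 0 (negligible penalty).
    """
    K = len(betas)
    log_penalty = 0.0
    for j in range(K):
        for k in range(j + 1, K):
            diff = betas[j] - betas[k]
            d2 = float(diff @ Sigma_inv @ diff)  # squared Mahalanobis distance
            log_penalty += np.log1p(-np.exp(-d2 / (2.0 * tau**2)))
    return log_penalty
```

Because the penalty enters the prior multiplicatively, two components with nearly equal predictive means are jointly assigned vanishing prior mass, which is what discourages redundant clusters in weakly separated designs.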