Selecting the best majorizer for SBL

Determine a principled method to select or construct the optimal majorizer of the Sparse Bayesian Learning (SBL) negative log marginal likelihood objective for a given sparse recovery problem and specified performance metric, rather than relying on ad hoc or purely data-driven choices.

Background

The paper shows that several popular SBL update rules can be derived within the majorization–minimization (MM) framework and introduces a new class of majorizers (p-SBL) that unify EM-SBL and MU-SBL. Different majorizers yield different algorithms with distinct convergence and performance characteristics.
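To make the MM connection concrete, below is a minimal sketch of the classical EM-SBL iteration, which is one instance of majorize-then-minimize applied to the SBL negative log marginal likelihood. The function name `em_sbl`, the noise level `sigma2`, the iteration count, and the variance floor are illustrative choices for this sketch, not details taken from the paper.

```python
import numpy as np

def em_sbl(A, y, sigma2=0.01, n_iters=50):
    """EM-SBL sketch: each iteration minimizes the EM majorizer of the
    SBL negative log marginal likelihood in the hyperparameters gamma,
    where gamma_i is the prior variance of coefficient x_i."""
    m, n = A.shape
    gamma = np.ones(n)
    mu = np.zeros(n)
    for _ in range(n_iters):
        # Gaussian posterior over x given the current gamma
        Sigma = np.linalg.inv(A.T @ A / sigma2 + np.diag(1.0 / gamma))
        mu = Sigma @ A.T @ y / sigma2
        # EM (MM) update: gamma_i <- E[x_i^2] under that posterior;
        # a small floor keeps the next inversion numerically stable
        gamma = np.maximum(mu**2 + np.diag(Sigma), 1e-10)
    return mu, gamma
```

Swapping the majorizer (e.g., the one underlying MU-SBL, or a member of the p-SBL family) changes only the `gamma` update line; the open question above is which such choice is best for a given problem and metric.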

Despite this progress, the authors note that there is no clear principle for choosing the "best" majorizer for a given problem. They propose strategies such as generating new majorizers and learning convex combinations from data, but emphasize that a principled selection method remains unclear.

References

However, it is unclear how to find the "best" majorizer for a given problem.

Sparse Bayesian Learning Algorithms Revisited: From Learning Majorizers to Structured Algorithmic Learning using Neural Networks (2604.02513 - Balaji et al., 2 Apr 2026) in Section 4, Learning the Majorizer via Data (first paragraph)