Non-Bayesian Learning in Misspecified Models
Published 23 Mar 2025 in econ.TH, math.ST, and stat.TH | arXiv:2503.18024v2
Abstract: Deviations from Bayesian updating are traditionally categorized as biases, errors, or fallacies, thus implying their inherent "sub-optimality." We offer a more nuanced view. We demonstrate that, in learning problems with misspecified models, non-Bayesian updating can outperform Bayesian updating.
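To make the setting concrete, here is a minimal illustrative sketch (not taken from the paper): an agent observes flips of a coin with true bias 0.7, but its model only puts mass on biases {0.2, 0.5}, so the model is misspecified. A "tempered" rule that raises the likelihood to a power alpha is one simple non-Bayesian family; alpha = 1 recovers Bayesian updating, while alpha != 1 over- or under-reacts to data. The parameter values, the tempering rule, and all function names here are assumptions chosen for illustration, not the paper's construction.

```python
import random

def tempered_update(posterior, thetas, obs, alpha=1.0):
    """One updating step: new_posterior[i] is proportional to
    posterior[i] * likelihood(obs | thetas[i]) ** alpha.
    alpha = 1.0 is exact Bayesian updating; other values are non-Bayesian."""
    liks = [(t if obs == 1 else 1.0 - t) ** alpha for t in thetas]
    weights = [p * l for p, l in zip(posterior, liks)]
    total = sum(weights)
    return [w / total for w in weights]

def run(alpha, n=500, seed=0):
    rng = random.Random(seed)
    thetas = [0.2, 0.5]        # supported parameter values; the truth (0.7) is absent
    posterior = [0.5, 0.5]     # uniform prior over the misspecified support
    for _ in range(n):
        obs = 1 if rng.random() < 0.7 else 0   # data generated by true bias 0.7
        posterior = tempered_update(posterior, thetas, obs, alpha)
    return posterior

bayes = run(alpha=1.0)       # Bayesian updating
overreact = run(alpha=2.0)   # one non-Bayesian alternative (over-reaction)
print("Bayes posterior on theta=0.5:    %.4f" % bayes[1])
print("Tempered posterior on theta=0.5: %.4f" % overreact[1])
```

In this toy run both rules concentrate on theta = 0.5, the KL-divergence minimizer within the misspecified support; which rule does better in general, and by what criterion, is exactly the kind of question the paper analyzes.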