What's in a ball? Constructing and characterizing uncertainty sets
Abstract: In the presence of model risk, it is standard practice to replace classical expected values by worst-case expectations taken over all models within a fixed radius of a given reference model. This is the "robustness" approach. We show that previous methods for measuring this radius, e.g., relative entropy or polynomial divergences, are inadequate for moderately heavy-tailed reference models such as lognormal models: worst cases are either infinitely pessimistic, or they rule out fat-tailed "power law" models as plausible alternatives. We introduce a new family of divergence measures that captures intermediate levels of pessimism.
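The tension the abstract describes can be sketched numerically. The following example (not from the paper; the Pareto tail index alpha = 3 and lognormal sigma = 1 are illustrative choices) compares divergence integrands for a power-law alternative P against a lognormal reference Q: the relative-entropy integrand decays, so P lies at finite KL distance from Q, while the chi-squared (polynomial) divergence integrand blows up in the tail, placing the power-law model at infinite polynomial divergence from the reference.

```python
import math

ALPHA = 3.0  # illustrative Pareto tail index; density alpha * x^-(alpha+1) on [1, inf)

def pareto_pdf(x):
    """Power-law alternative P: Pareto density with tail index ALPHA."""
    return ALPHA * x ** (-(ALPHA + 1)) if x >= 1 else 0.0

def lognormal_pdf(x):
    """Lognormal reference Q with mu = 0, sigma = 1."""
    return math.exp(-0.5 * math.log(x) ** 2) / (x * math.sqrt(2 * math.pi))

def kl_integrand(x):
    # Integrand of KL(P || Q) = integral of p * log(p/q).  In the tail this
    # behaves like a power of x times (log x)^2, so it vanishes rapidly:
    # the power-law model sits at FINITE relative entropy from Q.
    p, q = pareto_pdf(x), lognormal_pdf(x)
    return p * math.log(p / q)

def chi2_integrand(x):
    # Integrand of the chi-squared divergence, integral of p^2 / q - 1.
    # The lognormal density in the denominator decays like exp(-(log x)^2 / 2),
    # faster than any power, so this integrand diverges: the power-law model
    # is at INFINITE polynomial divergence from Q, i.e. outside every ball.
    return pareto_pdf(x) ** 2 / lognormal_pdf(x)

for x in (1e2, 1e4, 1e6, 1e8):
    print(f"x={x:>8.0e}  KL integrand={kl_integrand(x):.3e}  "
          f"chi2 integrand={chi2_integrand(x):.3e}")
```

Running this shows the KL integrand shrinking toward zero while the chi-squared integrand eventually grows without bound, which is one face of the dilemma the paper addresses: polynomial-divergence balls around a lognormal reference exclude power-law alternatives entirely.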