Generalizing the Fano inequality further
Abstract: Interactive statistical decision making (ISDM) features algorithm-dependent data generated through interaction. Existing information-theoretic lower bounds in ISDM largely target expected risk, while bounds for tail-sensitive objectives are less developed. We generalize the interactive Fano framework of Chen et al. by replacing the hard success event with a randomized one-bit statistic representing an arbitrary bounded transform of the loss. This yields a Bernoulli f-divergence inequality, which we invert to obtain a two-sided interval for the mean of the transform, recovering the previous hard-event result as a special case. Instantiating the transform with a bounded hinge and using the Rockafellar-Uryasev representation, we derive lower bounds on the prior-predictive (Bayesian) conditional value-at-risk (CVaR) of bounded losses. For the KL divergence with the mixture reference distribution, the bound becomes explicit in terms of mutual information via Pinsker's inequality.
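The inversion step admits a simple numerical illustration. Below is a minimal sketch (not from the paper), assuming the KL instance: the data-processing inequality for the one-bit statistic gives kl(p || q) <= c, where p is the transform's mean under the interaction distribution, q is its mean under a reference distribution, and c is the divergence budget (mutual information when the reference is the mixture). Since kl(. || q) is convex and vanishes at p = q, the feasible set {p : kl(p || q) <= c} is an interval whose endpoints can be found by bisection; Pinsker's inequality relaxes it to the explicit interval q +/- sqrt(c/2). The function names kl_bernoulli and kl_interval are illustrative, not the paper's.

```python
import math


def kl_bernoulli(p: float, q: float) -> float:
    """Binary KL divergence kl(p || q) in nats, clamped away from {0, 1}."""
    eps = 1e-12
    p = min(max(p, eps), 1 - eps)
    q = min(max(q, eps), 1 - eps)
    return p * math.log(p / q) + (1 - p) * math.log((1 - p) / (1 - q))


def kl_interval(q: float, c: float, iters: int = 100) -> tuple[float, float]:
    """Invert the Bernoulli KL inequality: return (p_lo, p_hi) such that
    kl(p || q) <= c iff p_lo <= p <= p_hi.

    Relies on kl(. || q) being convex, zero at p = q, and monotone on each
    side of q, so each endpoint is located by bisection on one branch.
    """
    # Upper endpoint: largest p in [q, 1] with kl(p || q) <= c.
    if kl_bernoulli(1.0, q) <= c:
        p_hi = 1.0
    else:
        lo, hi = q, 1.0  # invariant: kl(lo) <= c < kl(hi)
        for _ in range(iters):
            mid = 0.5 * (lo + hi)
            lo, hi = (mid, hi) if kl_bernoulli(mid, q) <= c else (lo, mid)
        p_hi = lo
    # Lower endpoint: smallest p in [0, q] with kl(p || q) <= c.
    if kl_bernoulli(0.0, q) <= c:
        p_lo = 0.0
    else:
        lo, hi = 0.0, q  # invariant: kl(hi) <= c < kl(lo)
        for _ in range(iters):
            mid = 0.5 * (lo + hi)
            lo, hi = (lo, mid) if kl_bernoulli(mid, q) <= c else (mid, hi)
        p_lo = hi
    return p_lo, p_hi


if __name__ == "__main__":
    # Reference mean q and divergence budget c (e.g. mutual information, in nats).
    q, c = 0.5, 0.1
    lo, hi = kl_interval(q, c)
    print(f"exact KL interval:  [{lo:.4f}, {hi:.4f}]")
    # Pinsker's relaxation: the looser explicit interval q +/- sqrt(c / 2).
    r = math.sqrt(c / 2)
    print(f"Pinsker relaxation: [{q - r:.4f}, {q + r:.4f}]")
```

For the tail-risk application, the abstract indicates the same interval would be applied with the bounded hinge transform inside the Rockafellar-Uryasev representation CVaR_alpha(L) = inf_t { t + (1/alpha) E[(L - t)_+] }, then optimized over t to yield the prior-predictive CVaR lower bound; the exact KL interval is always at least as tight as its Pinsker relaxation, as the printed numbers show.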