Weighted Chernoff information and optimal loss exponent in context-sensitive hypothesis testing
Abstract: We consider context-sensitive (binary) hypothesis testing for i.i.d. observations under a multiplicative weight function. We establish the logarithmic asymptotics, as the sample size grows, of the optimal total loss (the sum of type-I and type-II losses) and express the corresponding error exponent through a weighted Chernoff information between the competing distributions. Our approach embeds weighted geometric mixtures into an exponential family and identifies the exponent as the maximizer of its log-normalizer. We also provide concentration bounds for a tilted weighted log-likelihood and derive explicit expressions for Gaussian and Poisson models, as well as further parametric examples.
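As a point of reference, the sketch below numerically recovers the classical (unweighted, i.e. weight w ≡ 1) Chernoff information for two equal-variance Gaussians by maximizing the negative log-normalizer of the geometric mixture over the tilting parameter; the paper's weighted variant generalizes this by inserting a multiplicative weight into the mixture. The closed-form normalizer used here is the standard Gaussian one, and all function names are illustrative, not taken from the paper.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def log_mixture_normalizer(lam, mu0, mu1, sigma):
    """log of ∫ p0(x)^lam * p1(x)^(1-lam) dx for N(mu0, sigma^2), N(mu1, sigma^2).

    For equal-variance Gaussians this has the closed form
    -lam * (1 - lam) * (mu0 - mu1)^2 / (2 * sigma^2).
    """
    return -lam * (1.0 - lam) * (mu0 - mu1) ** 2 / (2.0 * sigma ** 2)

def chernoff_information(mu0, mu1, sigma):
    # Chernoff information = max over lam in [0, 1] of the negative
    # log-normalizer, i.e. -min_lam log ∫ p0^lam p1^(1-lam) dx.
    res = minimize_scalar(
        lambda lam: log_mixture_normalizer(lam, mu0, mu1, sigma),
        bounds=(0.0, 1.0),
        method="bounded",
    )
    return -res.fun, res.x

C, lam_star = chernoff_information(mu0=0.0, mu1=1.0, sigma=1.0)
print(C, lam_star)  # ≈ 0.125 = (mu0 - mu1)^2 / (8 sigma^2), attained at lam* = 1/2
```

In this symmetric Gaussian case the maximizer is lam* = 1/2 and the exponent equals (mu0 - mu1)^2 / (8 sigma^2), matching the textbook formula; a nonconstant weight function would shift both the optimal tilt and the resulting exponent.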