On Hypothesis Testing via a Tunable Loss
Published 28 Aug 2022 in cs.IT and math.IT | (2208.13152v1)
Abstract: We consider a problem of simple hypothesis testing using a randomized test via the tunable loss function proposed by Liao et al. In this setting, we derive results that correspond to the Neyman--Pearson lemma, the Chernoff--Stein lemma, and the Chernoff information in the classical hypothesis testing problem. Specifically, we prove that the optimal error exponent of our problem in the Neyman--Pearson setting coincides with the classical result. Moreover, we provide lower bounds on the optimal Bayesian error exponent.
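The classical Chernoff information mentioned in the abstract characterizes the optimal Bayesian error exponent in simple hypothesis testing. A minimal sketch of its standard definition is below; note this illustrates only the classical quantity, not the paper's tunable-loss generalization, and the grid search over the exponent is an assumption made for simplicity.

```python
import math

def chernoff_information(p, q, grid=10001):
    """Classical Chernoff information between two discrete distributions:
    C(P, Q) = -min_{0 <= lam <= 1} log sum_x P(x)^lam * Q(x)^(1 - lam).
    Computed here by a simple grid search over lam (an illustrative choice,
    not an optimized method)."""
    best = float("inf")
    for i in range(grid):
        lam = i / (grid - 1)
        s = sum((px ** lam) * (qx ** (1 - lam)) for px, qx in zip(p, q))
        best = min(best, math.log(s))
    return -best

# Example: two hypothetical Bernoulli distributions
p = [0.2, 0.8]
q = [0.7, 0.3]
c = chernoff_information(p, q)
```

The value is nonnegative, symmetric in its arguments, and zero exactly when the two distributions coincide, matching its role as an error exponent.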