Information Theoretic Limits for Linear Prediction with Graph-Structured Sparsity
Published 26 Jan 2017 in cs.LG, cs.IT, math.IT, and stat.ML | (1701.07895v2)
Abstract: We analyze the number of samples necessary for sparse vector recovery in a noisy linear prediction setup. This model includes problems such as linear regression and classification. We focus on structured graph models. In particular, we prove that the sufficient number of samples for the weighted graph model proposed by Hegde and others is also necessary. Our main tool for establishing information-theoretic lower bounds is Fano's inequality applied to carefully constructed ensembles.
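As background (not stated explicitly in the abstract), the standard form of Fano's inequality used in such lower-bound arguments is the following: if a parameter $V$ is drawn uniformly from an ensemble of $M$ hypotheses and an estimator observes samples $X$, then the probability of identifying the wrong hypothesis is bounded below in terms of the mutual information $I(X; V)$:

```latex
% Fano's inequality in its minimax lower-bound form.
% V is uniform over M well-separated hypotheses; X denotes the observed samples.
\[
  \mathbb{P}\bigl(\hat{V}(X) \neq V\bigr)
  \;\geq\; 1 - \frac{I(X; V) + \log 2}{\log M}.
\]
% A sample-complexity lower bound follows by upper-bounding I(X;V)
% (which grows with the number of samples n) and requiring the
% right-hand side to stay bounded away from zero.
```

Constructing an ensemble where $M$ is large (many distinguishable sparse vectors consistent with the graph structure) while $I(X;V)$ grows slowly with the sample size is what drives the necessity result described in the abstract.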