
Robust Sparse Recovery with Sparse Bernoulli matrices via Expanders

Published 28 Dec 2021 in cs.IT, math.IT, math.PR, math.ST, and stat.TH (arXiv:2112.14148v3)

Abstract: Sparse binary matrices are of great interest in the fields of sparse recovery, nonnegative compressed sensing, network statistics, and theoretical computer science. This class of matrices makes it possible to perform signal recovery with lower storage costs and faster decoding algorithms. In particular, Bernoulli$(p)$ matrices formed by independent identically distributed (i.i.d.) Bernoulli$(p)$ random variables are of practical relevance in the context of noise-blind recovery in nonnegative compressed sensing. In this work, we investigate the robust nullspace property of Bernoulli$(p)$ matrices. Previous results in the literature establish that such matrices can accurately recover $n$-dimensional $s$-sparse vectors with $m=O\left(\frac{s}{c(p)}\log\frac{en}{s}\right)$ measurements, where $c(p) \le p$ is a constant depending only on the parameter $p$. These results suggest that in the sparse regime, as $p$ approaches zero, the (sparse) Bernoulli$(p)$ matrix requires significantly more measurements than the minimum necessary, as achieved by standard isotropic subgaussian designs. However, we show that this is not the case. Our main result characterizes, for a wide range of sparsity levels $s$, the smallest $p$ for which sparse recovery can be achieved with the minimal number of measurements. We also provide matching lower bounds to establish the optimality of our results, and we explore connections with the theory of invertibility of discrete random matrices and integer compressed sensing.
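The recovery setting the abstract describes can be illustrated with a small numerical sketch: draw an i.i.d. Bernoulli$(p)$ 0/1 measurement matrix, measure an $s$-sparse vector, and recover it by basis pursuit ($\ell_1$ minimization), here solved as a linear program. This is not the paper's algorithm or its constants — the dimensions `n, s, m` and the parameter `p` below are illustrative choices only.

```python
# Sketch (illustrative, not from the paper): sparse recovery with a
# Bernoulli(p) measurement matrix via basis pursuit (L1 minimization).
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)

# Illustrative sizes: ambient dimension, sparsity, measurements, Bernoulli parameter.
n, s, m, p = 50, 3, 25, 0.5

# i.i.d. Bernoulli(p) matrix with entries in {0, 1}.
A = (rng.random((m, n)) < p).astype(float)

# s-sparse ground-truth vector with a random support.
x = np.zeros(n)
support = rng.choice(n, size=s, replace=False)
x[support] = rng.normal(size=s)

# Noiseless measurements.
y = A @ x

# Basis pursuit: min ||x||_1 subject to Ax = y,
# via the standard LP split x = u - v with u, v >= 0
# (linprog's default bounds are already nonnegative).
c = np.ones(2 * n)
A_eq = np.hstack([A, -A])
res = linprog(c, A_eq=A_eq, b_eq=y)
x_hat = res.x[:n] - res.x[n:]

err = np.linalg.norm(x_hat - x)
print(f"recovery error: {err:.2e}")
```

With $m = 25$ measurements for an $s = 3$-sparse vector in dimension $n = 50$, we are well above the $O\left(\frac{s}{c(p)}\log\frac{en}{s}\right)$ threshold discussed in the abstract, so basis pursuit recovers the signal essentially exactly; shrinking $m$ or $p$ makes recovery fail, which is precisely the regime the paper's results quantify.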


Authors (1)
