Regularized Gradient Descent: A Nonconvex Recipe for Fast Joint Blind Deconvolution and Demixing

Published 25 Mar 2017 in cs.IT and math.IT (arXiv:1703.08642v2)

Abstract: We study the question of extracting a sequence of functions $\{\boldsymbol{f}_i, \boldsymbol{g}_i\}_{i=1}^{s}$ from observing only the sum of their convolutions, i.e., from $\boldsymbol{y} = \sum_{i=1}^{s} \boldsymbol{f}_i \ast \boldsymbol{g}_i$. While convex optimization techniques are able to solve this joint blind deconvolution-demixing problem provably and robustly under certain conditions, for medium-size or large-size problems we need computationally faster methods without sacrificing the benefits of mathematical rigor that come with convex methods. In this paper, we present a non-convex algorithm which guarantees exact recovery under conditions that are competitive with convex optimization methods, with the additional advantage of being computationally much more efficient. Our two-step algorithm converges to the global minimum linearly and is also robust in the presence of additive noise. While the derived performance bounds are suboptimal in terms of the information-theoretic limit, numerical simulations show remarkable performance even if the number of measurements is close to the number of degrees of freedom. We discuss an application of the proposed framework in wireless communications in connection with the Internet-of-Things.
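As a minimal illustration of the measurement model $\boldsymbol{y} = \sum_{i=1}^{s} \boldsymbol{f}_i \ast \boldsymbol{g}_i$, the sketch below generates the observed sum of circular convolutions. The signal length `L`, number of pairs `s`, and Gaussian signals are illustrative assumptions, not the paper's setup (there, the $\boldsymbol{f}_i$ and $\boldsymbol{g}_i$ are constrained to known low-dimensional subspaces, which is what makes recovery possible); this only shows the forward observation, not the recovery algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

L = 64  # signal length (assumed for illustration)
s = 3   # number of convolution pairs to demix (assumed)

# Hypothetical signals; in the paper each f_i, g_i lies in a known
# low-dimensional subspace, which this sketch omits.
f = rng.standard_normal((s, L))
g = rng.standard_normal((s, L))

def circ_conv(a, b):
    """Circular convolution of two length-L vectors via the FFT
    (convolution theorem: FFT turns circular convolution into
    pointwise multiplication)."""
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

# Observed measurement: only the sum of the s convolutions is seen,
# y = sum_i f_i * g_i; the recovery problem is to extract all pairs from y.
y = sum(circ_conv(f[i], g[i]) for i in range(s))

print(y.shape)  # (64,)
```

Note that the unknowns total $2sL$ values while $\boldsymbol{y}$ provides only $L$ measurements, which is why the subspace assumptions (and bilinear structure) are essential to the recovery guarantees discussed in the paper.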
