
Compressed Dictionary Learning

Published 2 May 2018 in stat.ML and cs.LG (arXiv:1805.00692v2)

Abstract: In this paper we show that the computational complexity of the Iterative Thresholding and K-residual-Means (ITKrM) algorithm for dictionary learning can be significantly reduced by using dimensionality-reduction techniques based on the Johnson-Lindenstrauss lemma. The dimensionality reduction is efficiently carried out with the fast Fourier transform. We introduce the Iterative compressed-Thresholding and K-Means (IcTKM) algorithm for fast dictionary learning and study its convergence properties. We show that IcTKM can locally recover an incoherent, overcomplete generating dictionary of $K$ atoms from training signals of sparsity level $S$ with high probability. Fast dictionary learning is achieved by embedding the training data and the dictionary into $m < d$ dimensions, and recovery is shown to be locally stable with an embedding dimension which scales as low as $m = O(S \log^4 S \log^3 K)$. The compression effectively shatters the data dimension bottleneck in the computational cost of ITKrM, reducing it by a factor $O(m/d)$. Our theoretical results are complemented with numerical simulations which demonstrate that IcTKM is a powerful, low-cost algorithm for learning dictionaries from high-dimensional data sets.
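The speed-up described in the abstract rests on an FFT-based Johnson-Lindenstrauss embedding of the $d$-dimensional data into $m < d$ dimensions. Below is a minimal NumPy sketch of one standard construction of such an embedding (a subsampled randomized Fourier transform: random sign flips, an orthonormal FFT mixing step, and random coordinate subsampling). The function name `fjlt_embed` and the exact construction are illustrative assumptions for this summary, not the paper's implementation of IcTKM.

```python
import numpy as np

def fjlt_embed(Y, m, rng=None):
    """Sketch of an FFT-based Johnson-Lindenstrauss embedding.

    Maps the columns of Y (shape d x N) into m < d dimensions via
    random sign flips, an orthonormal FFT, and random row subsampling.
    Illustrative only; the paper specifies its own embedding for IcTKM.
    """
    rng = np.random.default_rng(rng)
    d, _ = Y.shape
    signs = rng.choice([-1.0, 1.0], size=d)          # random diagonal sign matrix D
    mixed = np.fft.fft(signs[:, None] * Y, axis=0)   # fast mixing in O(d log d) per column
    mixed /= np.sqrt(d)                              # make the FFT orthonormal
    rows = rng.choice(d, size=m, replace=False)      # random coordinate subsampling
    return np.sqrt(d / m) * mixed[rows]              # rescale to preserve norms in expectation

# Example: embed N = 5000 signals from d = 1024 down to m = 128 dimensions.
Y = np.random.randn(1024, 5000)
Z = fjlt_embed(Y, m=128, rng=0)
print(Z.shape)  # (128, 5000)
```

Applying such an embedding to both the training signals and the current dictionary estimate replaces length-$d$ inner products with length-$m$ ones after a one-off $O(d \log d)$ mixing cost per signal, which is the source of the $O(m/d)$ per-iteration reduction quoted in the abstract.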
