
Binary autoencoder with random binary weights

Published 30 Apr 2020 in cs.LG and stat.ML | arXiv:2004.14717v1

Abstract: This paper presents an analysis of an autoencoder with binary $\{0, 1\}$ activations and random binary $\{0, 1\}$ weights. Such a setup places the model at the intersection of several fields: neuroscience, information theory, sparse coding, and machine learning. It is shown that sparse activation of the hidden layer arises naturally in order to preserve information between layers. Furthermore, with a large enough hidden layer, zero reconstruction error can be achieved for any input just by varying the thresholds of the neurons. The model preserves the similarity of inputs at the hidden layer, and this similarity preservation is maximal for dense hidden-layer activation. An analysis of the mutual information between layers shows that the difference between sparse and dense representations is related to a memory-computation trade-off. The model resembles the olfactory system of the fruit fly, and the theoretical results presented here offer useful insights toward understanding more complex neural networks.
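The core ingredients the abstract describes — fixed random binary weights, binary threshold neurons, and a hidden threshold that controls sparsity — can be sketched in a few lines. This is only an illustrative toy, not the paper's exact construction: the weight density `p_w`, the input statistics, and the decoding rule (thresholded readout through the transposed weights) are all assumptions, since the abstract does not specify them.

```python
import random

random.seed(0)
n_visible, n_hidden = 50, 2000   # large hidden layer, as the abstract assumes
p_w = 0.1                        # assumed weight density (not stated in the abstract)

# Fixed random binary {0, 1} weight matrix, n_hidden x n_visible (not learned)
W = [[1 if random.random() < p_w else 0 for _ in range(n_visible)]
     for _ in range(n_hidden)]

def encode(x, theta):
    """Binary hidden code: neuron i fires iff its summed input reaches theta."""
    return [1 if sum(w * xi for w, xi in zip(row, x)) >= theta else 0 for row in W]

def decode(h, theta_out):
    """Reconstruction through the transposed weights with an output threshold
    (an assumed decoding rule; the abstract does not specify one)."""
    return [1 if sum(W[i][j] * h[i] for i in range(n_hidden)) >= theta_out else 0
            for j in range(n_visible)]

# A sparse binary input pattern
x = [1 if random.random() < 0.2 else 0 for _ in range(n_visible)]

# Raising the shared hidden threshold makes the hidden code sparser,
# illustrating how threshold tuning trades density for sparsity
sparsity = {theta: sum(encode(x, theta)) / n_hidden for theta in (1, 2, 3)}
```

Sweeping `theta` shows the sparsity mechanism the abstract points to: a higher firing threshold leaves only the hidden neurons whose random receptive fields overlap the input most strongly, so the hidden code becomes sparser as `theta` grows.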

Citations (3)


Authors (1)
