Negative Feedback System as Optimizer for Machine Learning Systems

Published 25 Mar 2021 in cs.LG and cs.AI (arXiv:2103.14115v2)

Abstract: With high forward gain, a negative feedback system can compute the inverse of a linear or non-linear function placed in its feedback path. This property has been widely used in analog electronic circuits to construct precise closed-loop functions. This paper describes how the function-inverting behavior of a negative feedback system serves as a physical analogy for the optimization process in machine learning. We show that this process can learn some non-differentiable functions in cases where gradient descent-based methods fail. We also show that the optimization process reduces to gradient descent under the constraint of squared-error minimization. We derive the backpropagation technique and other known optimization techniques for deep networks from the properties of negative feedback systems, independently of the gradient descent method. This analysis provides a novel view of neural network optimization and may offer new insights into open problems.
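The function-inverting property the abstract refers to can be illustrated with a minimal sketch (not taken from the paper; the function name, gain, and step count are illustrative assumptions): a discrete-time loop integrates the error between a target and the output of a function f in the feedback path, so the loop output settles at a point where f(y) equals the target, i.e. at f^{-1}(target).

```python
# Minimal sketch of a discrete-time negative feedback loop that inverts
# the function f placed in its feedback path. The forward path simply
# accumulates the error signal; at equilibrium, error = 0 and f(y) = target.
# (Names and parameter values here are illustrative assumptions, not the
# paper's notation.)

def feedback_invert(f, target, gain=0.01, steps=10000, y0=0.0):
    """Iterate y += gain * (target - f(y)); converges when f(y) = target."""
    y = y0
    for _ in range(steps):
        error = target - f(y)  # error signal driven toward zero
        y += gain * error      # forward path: integrate the error
    return y

# Example: place f(x) = x**3 in the feedback path to compute a cube root.
root = feedback_invert(lambda x: x ** 3, 8.0, gain=0.01, y0=1.0)
print(round(root, 4))  # ≈ 2.0
```

Note that nothing here requires f to be differentiable; the loop only evaluates f, which is the intuition behind the abstract's claim about non-differentiable functions. Stability does depend on the gain being small enough relative to the local slope of f, mirroring the step-size condition of gradient descent.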
