
Measuring and regularizing networks in function space

Published 21 May 2018 in cs.NE, cs.LG, and stat.ML | arXiv:1805.08289v3

Abstract: To optimize a neural network one often thinks of optimizing its parameters, but it is ultimately a matter of optimizing the function that maps inputs to outputs. Since a change in the parameters might serve as a poor proxy for the change in the function, it is of some concern that primacy is given to parameters but that the correspondence has not been tested. Here, we show that it is simple and computationally feasible to calculate distances between functions in an $L^2$ Hilbert space. We examine how typical networks behave in this space, and how parameter $\ell^2$ distances compare to function $L^2$ distances between various points of an optimization trajectory. We find that the two distances are nontrivially related. In particular, the $L^2/\ell^2$ ratio decreases throughout optimization, reaching a steady value around the time test error plateaus. We then investigate how the $L^2$ distance could be applied directly to optimization. First, we propose that in multitask learning, one can avoid catastrophic forgetting by directly limiting how much the input/output function changes between tasks. Second, we propose a new learning rule that constrains the distance a network can travel through $L^2$-space in any one update. This allows new examples to be learned in a way that minimally interferes with what has previously been learned. These applications demonstrate how one can measure and regularize function distances directly, without relying on parameters or local approximations like loss curvature.
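The abstract's central quantity, the $L^2$ distance between two networks viewed as functions, can be estimated by Monte Carlo over input samples. The following is a minimal sketch (not the authors' code; the toy two-layer network and all names are illustrative) that computes this function-space distance alongside the parameter-space $\ell^2$ distance, giving the $L^2/\ell^2$ ratio the abstract discusses:

```python
# Hedged sketch: Monte Carlo estimate of the function-space distance
#   d(f, g) = sqrt( E_x || f(x) - g(x) ||^2 ),
# compared with the Euclidean (l^2) distance between parameter vectors.
import numpy as np

rng = np.random.default_rng(0)

def make_net(w1, w2):
    """Toy two-layer network; stands in for any parametric function."""
    return lambda x: np.tanh(x @ w1) @ w2

def function_l2_distance(f, g, xs):
    """Estimate the L^2 distance between f and g over input samples xs."""
    diff = f(xs) - g(xs)                      # shape (n_samples, out_dim)
    return np.sqrt(np.mean(np.sum(diff**2, axis=1)))

def parameter_l2_distance(params_a, params_b):
    """Euclidean (l^2) distance between flattened parameter vectors."""
    flat_a = np.concatenate([p.ravel() for p in params_a])
    flat_b = np.concatenate([p.ravel() for p in params_b])
    return np.linalg.norm(flat_a - flat_b)

# Two nearby parameter settings of the same architecture.
w1 = rng.normal(size=(4, 8)); w2 = rng.normal(size=(8, 2))
w1b = w1 + 0.01 * rng.normal(size=w1.shape)
w2b = w2 + 0.01 * rng.normal(size=w2.shape)

xs = rng.normal(size=(1000, 4))               # samples from the input distribution
d_fn = function_l2_distance(make_net(w1, w2), make_net(w1b, w2b), xs)
d_par = parameter_l2_distance([w1, w2], [w1b, w2b])
print(d_fn, d_par, d_fn / d_par)              # the L2/l2 ratio
```

Tracking `d_fn / d_par` across checkpoints of a training run would reproduce, in spirit, the ratio-versus-time measurement described in the abstract.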

Citations (126)
