
Empirical analysis of non-linear activation functions for Deep Neural Networks in classification tasks

Published 30 Oct 2017 in cs.LG and stat.ML (arXiv:1710.11272v1)

Abstract: We provide an overview of several non-linear activation functions in a neural network architecture that have proven successful in many machine learning applications. We conduct an empirical analysis of the effectiveness of these functions on the MNIST classification task, with the aim of clarifying which functions produce the best results overall. Building on this first set of results, we examine the effects of constructing deeper architectures with an increasing number of hidden layers. We also survey the impact, on the same task, of different initialisation schemes for the weights of our neural network. Using these sets of experiments as a base, we conclude by providing an optimal neural network architecture that yields high accuracy on the MNIST classification task.
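The abstract does not list the specific activation functions surveyed; as an illustrative sketch only, the following NumPy snippet shows three non-linear activations commonly compared in studies of this kind (sigmoid, tanh, ReLU — assumed here, not confirmed as the paper's exact set):

```python
import numpy as np

# Illustrative set of non-linear activation functions; the paper's
# actual list is in its full text, not the abstract.

def sigmoid(x):
    """Logistic sigmoid: squashes inputs into (0, 1)."""
    return 1.0 / (1.0 + np.exp(-x))

def tanh(x):
    """Hyperbolic tangent: zero-centred, range (-1, 1)."""
    return np.tanh(x)

def relu(x):
    """Rectified linear unit: identity for positive inputs, zero otherwise."""
    return np.maximum(0.0, x)

x = np.array([-2.0, 0.0, 3.0])
print(relu(x))       # [0. 0. 3.]
print(sigmoid(0.0))  # 0.5
```

In an empirical comparison such as this one, each candidate function would be swapped into the hidden layers of an otherwise fixed architecture and the resulting test accuracies compared.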

Citations (16)

Authors (1)
