Rethinking Generalisation
Abstract: In this paper, a new approach to computing generalisation performance is presented that assumes the distribution of risks, $\rho(r)$, for a learning scenario is known. From this, the expected error of a learning machine using empirical risk minimisation is computed for both classification and regression problems. A critical quantity in determining the generalisation performance is the power-law behaviour of $\rho(r)$ around its minimum value---a quantity we call attunement. The distribution $\rho(r)$ is computed for the case of all Boolean functions and for the perceptron used in two different problem settings. Initially, a simplified analysis is presented in which the losses are assumed independent. A more accurate analysis is then carried out that takes into account chance correlations in the training set. This leads to corrections to the typical behaviour that is observed.
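The setting described in the abstract can be sketched numerically. The snippet below is a minimal Monte Carlo illustration, not the paper's method: it assumes a hypothetical power-law risk density $\rho(r) \propto (r - r_{\min})^\theta$ near its minimum (the "attunement" exponent $\theta$), draws the risks of a pool of hypotheses from it, and, under the simplified independence assumption, models each hypothesis's training errors as i.i.d. Bernoulli trials. Empirical risk minimisation then selects the hypothesis with the lowest training error, and we estimate its expected true risk. All parameter values (`r_min`, `r_max`, `theta`, pool size) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_risks(n, r_min=0.1, r_max=0.5, theta=1.0):
    """Draw n risks from an assumed power-law density
    rho(r) ∝ (r - r_min)**theta on [r_min, r_max],
    via inverse-transform sampling."""
    u = rng.random(n)
    return r_min + (r_max - r_min) * u ** (1.0 / (theta + 1.0))

def erm_expected_risk(m, n_hyp=200, n_trials=2000):
    """Monte Carlo estimate of the true risk of the empirical risk
    minimiser trained on m examples, under the independence
    assumption: each hypothesis's m losses are i.i.d. Bernoulli(r)."""
    total = 0.0
    for _ in range(n_trials):
        r = sample_risks(n_hyp)                # true risks of the pool
        emp = rng.binomial(m, r) / m           # empirical (training) risks
        total += r[np.argmin(emp)]             # true risk of the ERM choice
    return total / n_trials

# As m grows, the ERM risk should approach the best risk in the pool,
# which itself concentrates near r_min as the pool size grows.
for m in (10, 100, 1000):
    print(m, round(erm_expected_risk(m), 3))
```

The rate at which the selected risk approaches $r_{\min}$ as $m$ grows depends on how fast $\rho(r)$ vanishes near $r_{\min}$, which is why the exponent $\theta$ controls the generalisation curve in this toy model.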