
Meta Learning as Bayes Risk Minimization

Published 2 Jun 2020 in stat.ML and cs.LG | arXiv:2006.01488v1

Abstract: Meta-learning is a family of methods that use a set of interrelated tasks to learn a model that can quickly adapt to a new query task from a possibly small contextual dataset. In this study, we use a probabilistic framework to formalize what it means for two tasks to be related and recast the meta-learning problem as Bayes risk minimization (BRM). In our formulation, the BRM-optimal solution is the predictive distribution computed from the posterior distribution of the task-specific latent variable conditioned on the contextual dataset, which justifies the philosophy of the Neural Process. However, the posterior distribution used in the Neural Process does not change with the contextual dataset in the way a true posterior distribution must. To address this problem, we present a novel Gaussian approximation of the posterior distribution that generalizes the posterior of the linear Gaussian model. Unlike that of the Neural Process, our approximate posterior converges to the maximum likelihood estimate at the same rate as the true posterior distribution. We also demonstrate the competitiveness of our approach on benchmark datasets.
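To make the abstract's formulation concrete, here is a short sketch; the notation is ours, not the paper's: write $z$ for the task-specific latent variable, $D = \{(x_i, y_i)\}_{i=1}^{n}$ for the contextual dataset, and $(x^*, y^*)$ for a query point. The BRM-optimal solution described above is then the posterior predictive distribution

\[
p(y^* \mid x^*, D) = \int p(y^* \mid x^*, z)\, p(z \mid D)\, dz .
\]

For intuition on the convergence claim, consider the standard Bayesian linear regression case the abstract points to: a linear Gaussian model $y_i = \Phi(x_i)^\top z + \varepsilon_i$ with prior $z \sim \mathcal{N}(0, \Sigma_0)$ and noise $\varepsilon_i \sim \mathcal{N}(0, \sigma^2)$. Its posterior over $z$ is Gaussian with precision

\[
\Sigma_0^{-1} + \sigma^{-2} \sum_{i=1}^{n} \Phi(x_i)\, \Phi(x_i)^\top ,
\]

which grows with $n$, so the posterior mean tends to the maximum likelihood estimate and the posterior contracts as the contextual dataset grows. The paper's Gaussian approximation is designed to generalize this behavior, whereas, per the abstract's criticism, the Neural Process posterior need not contract this way.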
