
Randomized Learning of the Second-Moment Matrix of a Smooth Function

Published 19 Dec 2016 in cs.IT and math.IT | arXiv:1612.06339v6

Abstract: Consider an open set $\mathbb{D}\subseteq\mathbb{R}^n$, equipped with a probability measure $\mu$. An important characteristic of a smooth function $f:\mathbb{D}\rightarrow\mathbb{R}$ is its \emph{second-moment matrix} $\Sigma_{\mu}:=\int \nabla f(x) \nabla f(x)^* \, \mu(dx) \in\mathbb{R}^{n\times n}$, where $\nabla f(x)\in\mathbb{R}^n$ is the gradient of $f(\cdot)$ at $x\in\mathbb{D}$ and $^*$ stands for transpose. For instance, the span of the leading $r$ eigenvectors of $\Sigma_{\mu}$ forms an \emph{active subspace} of $f(\cdot)$, which contains the directions along which $f(\cdot)$ changes the most and is of particular interest in \emph{ridge approximation}. In this work, we propose a simple algorithm for estimating $\Sigma_{\mu}$ from random point evaluations of $f(\cdot)$ \emph{without} imposing any structural assumptions on $\Sigma_{\mu}$. Theoretical guarantees for this algorithm are established with the aid of the same technical tools that have proved valuable in the context of covariance matrix estimation from partial measurements.
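To make the object of study concrete, here is a minimal numerical sketch of the second-moment matrix and its active subspace. This is an illustration, not the paper's algorithm: it assumes $\mu$ is a standard Gaussian, approximates gradients by forward finite differences from point evaluations, and averages the outer products $\nabla f(x)\nabla f(x)^*$ over Monte Carlo samples. The function names and the example $f$ are hypothetical choices for this sketch.

```python
import numpy as np

def estimate_second_moment(f, n, num_samples=200, h=1e-5, seed=None):
    """Monte Carlo estimate of Sigma = E[grad f(x) grad f(x)^T],
    with x ~ N(0, I_n) standing in for the measure mu.

    Gradients are approximated by forward finite differences,
    so only point evaluations of f are required.
    """
    rng = np.random.default_rng(seed)
    sigma = np.zeros((n, n))
    for _ in range(num_samples):
        x = rng.standard_normal(n)                      # draw x ~ mu
        fx = f(x)
        # forward-difference approximation of the gradient at x
        g = np.array([(f(x + h * e) - fx) / h for e in np.eye(n)])
        sigma += np.outer(g, g)                         # grad f grad f^T
    return sigma / num_samples

def active_subspace(sigma, r):
    """Columns spanning the leading-r eigenspace of the symmetric estimate."""
    eigvals, eigvecs = np.linalg.eigh(sigma)            # ascending eigenvalues
    return eigvecs[:, ::-1][:, :r]                      # top-r eigenvectors

# Example: a ridge function f(x) = sin(a.x) varies only along a,
# so Sigma is rank one and the active subspace (r = 1) is span{a}.
a = np.array([1.0, 2.0, 0.0])
f = lambda x: np.sin(a @ x)
sigma_hat = estimate_second_moment(f, n=3, num_samples=500, seed=0)
u = active_subspace(sigma_hat, r=1)[:, 0]               # aligns with a/||a||
```

For the ridge example, every sampled gradient is proportional to $a$, so the estimate is rank one (up to finite-difference error) and its leading eigenvector recovers the ridge direction.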

Citations (3)
