
Analysis of Regularized Learning in Banach Spaces for Linear-functional Data

Published 7 Sep 2021 in cs.LG, cs.NA, math.FA, math.NA, and math.OC | arXiv:2109.03159v7

Abstract: This article studies the theory of regularized learning in Banach spaces for linear-functional data, covering representer theorems, pseudo-approximation theorems, and convergence theorems. Regularized learning minimizes regularized empirical risks over a Banach space, where the empirical risks are computed from training data and multi-loss functions. The input training data consist of linear functionals in a predual space of the Banach space, which capture discrete local information from multimodal data and multiscale models. Regularized learning thereby yields global approximations of the exact solution to an unidentified or uncertain original problem. The convergence theorems establish the convergence of the approximate solutions to the exact solution in the weak* topology of the Banach space. The theorems of regularized learning are then applied to interpret classical machine-learning methods such as support vector machines and artificial neural networks.
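As a finite-dimensional illustration (a sketch not taken from the paper), consider the special case where the linear functionals are point evaluations f ↦ f(x_i) and the Banach space is a reproducing kernel Hilbert space. With squared loss and a quadratic regularizer, minimizing the regularized empirical risk min_f Σᵢ (λᵢ(f) − yᵢ)² + μ‖f‖² reduces, by a representer theorem, to kernel ridge regression: f = Σⱼ cⱼ k(·, xⱼ) with coefficients solving (K + μI)c = y. The kernel choice, data, and parameter names below are all illustrative assumptions:

```python
import numpy as np

# Hypothetical sketch: regularized empirical risk minimization where the
# training functionals lambda_i are point evaluations f -> f(x_i) and the
# hypothesis space is an RKHS with a Gaussian kernel. By a representer
# theorem, the minimizer is f = sum_j c_j k(., x_j), with coefficients
# solving the linear system (K + mu * I) c = y.

def gaussian_kernel(X1, X2, sigma=1.0):
    # Pairwise squared distances, then the Gaussian (RBF) kernel matrix.
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma**2))

def fit_krr(X, y, mu=1e-2, sigma=1.0):
    # Solve the regularized normal equations (K + mu * I) c = y.
    K = gaussian_kernel(X, X, sigma)
    return np.linalg.solve(K + mu * np.eye(len(X)), y)

def predict_krr(c, X_train, X_new, sigma=1.0):
    # Evaluate f(x) = sum_j c_j k(x, x_j) at the new points.
    return gaussian_kernel(X_new, X_train, sigma) @ c

rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(40, 1))
y = np.sin(3.0 * X[:, 0]) + 0.05 * rng.standard_normal(40)

c = fit_krr(X, y)
y_hat = predict_krr(c, X, X)
print(np.mean((y_hat - y) ** 2))  # small training error
```

The paper's setting is far more general (Banach spaces, weak* convergence, multi-loss functions, arbitrary linear functionals in a predual space); this sketch only shows the familiar Hilbert-space corner of that framework.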


Authors (1)

  1. Qi Ye 
