
Quantification of the Leakage in Federated Learning

Published 12 Oct 2019 in cs.CR (arXiv:1910.05467v2)

Abstract: With the growing emphasis on users' privacy, federated learning has become increasingly popular. Many architectures have been proposed to improve its security, most of which rest on the assumption that the gradients computed on the data cannot leak information. Recently, however, several works have shown that shared gradients can leak the training data. In this paper, we analyze this leakage for a federated approximated logistic regression model and show that the gradients can leak the complete training data when every element of the input is either 0 or 1.
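The kind of leakage the abstract describes can be illustrated with a minimal sketch (not the paper's exact construction; the model, data sizes, and variable names below are illustrative assumptions). For plain logistic regression, a single sample's cross-entropy gradient with respect to the weights is (p - y)·x and with respect to the bias is (p - y), so dividing the two recovers x directly; with binary inputs, the recovered entries are exactly the 0/1 features:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical single-sample update a client might share (illustrative setup).
rng = np.random.default_rng(0)
x = rng.integers(0, 2, size=10).astype(float)  # binary input, as in the paper's setting
y = 1.0                                        # label
w = rng.normal(size=10)                        # current model weights
b = 0.0                                        # current bias

# Gradients of the cross-entropy loss for this one sample.
p = sigmoid(w @ x + b)
grad_w = (p - y) * x   # gradient w.r.t. the weights
grad_b = p - y         # gradient w.r.t. the bias (nonzero since 0 < p < 1)

# An observer of the gradients reconstructs the input exactly.
x_recovered = grad_w / grad_b
assert np.allclose(x_recovered, x)
```

This sketch uses exact (non-approximated) logistic regression and a single sample per update; the paper's contribution is showing that the leakage persists under a federated approximated logistic regression model.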

Citations (21)
