
Byzantine-Robust Federated Learning: An Overview With Focus on Developing Sybil-based Attacks to Backdoor Augmented Secure Aggregation Protocols

Published 30 Oct 2024 in cs.LG and cs.CR (arXiv:2410.22680v1)

Abstract: Federated Learning (FL) paradigms enable large numbers of clients to collaboratively train Machine Learning models on private data. However, due to their multi-party nature, traditional FL schemes are left vulnerable to Byzantine attacks that attempt to hurt model performance by injecting malicious backdoors. A wide variety of prevention methods have been proposed to protect frameworks from such attacks. This paper provides an exhaustive and updated taxonomy of existing methods and frameworks, before zooming in and conducting an in-depth analysis of the strengths and weaknesses of the Robustness of Federated Learning (RoFL) protocol. From there, we propose two novel Sybil-based attacks that take advantage of vulnerabilities in RoFL. Finally, we conclude with comprehensive proposals for future testing, detail implementations of the proposed attacks, and offer directions for improvement, both for the RoFL protocol and for Byzantine-robust frameworks as a whole.
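To make the threat model in the abstract concrete, here is a minimal sketch (not the paper's RoFL protocol, and not any of its proposed attacks) contrasting plain federated averaging with a simple Byzantine-robust baseline, coordinate-wise median aggregation, under a Sybil-style attack in which colluding clones all submit the same malicious update. All client values and update shapes are illustrative assumptions.

```python
# Sketch: Sybil clones skew FedAvg (the mean) far more than a
# coordinate-wise median, as long as honest clients hold a majority.
from statistics import mean, median

def fedavg(updates):
    """Unweighted FedAvg: coordinate-wise mean of client updates."""
    return [mean(coord) for coord in zip(*updates)]

def median_aggregate(updates):
    """Robust baseline: coordinate-wise median of client updates."""
    return [median(coord) for coord in zip(*updates)]

# Seven honest clients whose 2-d updates cluster around [1.0, -1.0].
honest = [[1.0 + 0.01 * i, -1.0 - 0.01 * i] for i in range(7)]
# Three Sybil clones injecting an identical backdoor direction.
sybils = [[50.0, 50.0]] * 3

updates = honest + sybils
print(fedavg(updates))            # mean is dragged toward the backdoor
print(median_aggregate(updates))  # median stays near the honest cluster
```

The catch, and the motivation for protocols like RoFL, is that secure aggregation hides individual updates from the server, so such per-coordinate inspection is no longer directly available; RoFL instead constrains client updates via cryptographic norm-bound checks.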

Authors (1)
