Summary of A Survey on Secure Decentralized Optimization and Learning, by Changxin Liu et al.
A Survey on Secure Decentralized Optimization and Learning
by Changxin Liu, Nicola Bastianello, Wei Huo, Yang Shi, Karl H. Johansson
First submitted to arXiv on: 16 Aug 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: Optimization and Control (math.OC)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. The summaries below all cover the same AI paper and are written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary |
---|---|---|
High | Paper authors | Read the original abstract here |
Medium | GrooveSquid.com (original content) | Decentralized optimization has become a crucial paradigm for large-scale decision-making and for training machine learning models without centralizing data. However, this approach introduces privacy and security risks, as malicious agents can infer private data or impair model accuracy. This survey provides a comprehensive overview of advances in secure decentralized optimization and learning frameworks and algorithms. It begins with the fundamentals of decentralized optimization and learning, highlighting centralized aggregation and distributed consensus as the key modules exposed to security risks in federated and distributed optimization, respectively. The survey then focuses on privacy-preserving algorithms, detailing three cryptographic tools and their integration into decentralized optimization and learning systems. It also examines resilient algorithms, exploring the design and analysis of the resilient aggregation and consensus protocols that support these systems (a minimal illustrative sketch of both modules follows this table). Finally, it discusses current trends and potential future directions. |
Low | GrooveSquid.com (original content) | This paper is about a way to solve big problems and train big computer models without putting all the data in one place. Keeping the data spread out helps protect people's privacy, but it also creates new risks: bad actors may try to steal information or tamper with the results. The authors give a detailed overview of how this all works and what researchers have learned so far. They start by explaining the basics of decentralized optimization and learning, then move on to how to keep things safe and private. They also look at ways to make the system reliable and strong against attacks. |
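To make the two recurring modules concrete, here is a minimal NumPy sketch of decentralized optimization with a consensus (mixing) step, together with a coordinate-wise trimmed mean as one classic example of a resilient aggregation rule. This is not code from the survey: the quadratic loss, ring topology, step size, and trim level are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_agents, dim, n_samples = 5, 3, 20

# Synthetic local least-squares problems, one dataset per agent (illustrative).
theta_true = rng.normal(size=dim)
data = []
for _ in range(n_agents):
    X = rng.normal(size=(n_samples, dim))
    y = X @ theta_true + 0.1 * rng.normal(size=n_samples)
    data.append((X, y))

def local_gradient(theta, X, y):
    """Gradient of one agent's local least-squares loss."""
    return X.T @ (X @ theta - y) / len(y)

# Doubly stochastic mixing matrix for a 5-agent ring (hypothetical topology):
# each agent averages itself with its two ring neighbors.
I = np.eye(n_agents)
W = 0.5 * I + 0.25 * (np.roll(I, 1, axis=1) + np.roll(I, -1, axis=1))

def trimmed_mean(vectors, trim=1):
    """Coordinate-wise trimmed mean: a resilient aggregation rule that drops
    the `trim` smallest and largest values per coordinate before averaging,
    bounding the influence any single (possibly Byzantine) agent can exert."""
    s = np.sort(vectors, axis=0)
    return s[trim:len(vectors) - trim].mean(axis=0)

# Decentralized gradient descent: mix neighbors' models (distributed
# consensus), then take a local gradient step.
thetas = np.zeros((n_agents, dim))
step = 0.1
for _ in range(300):
    grads = np.stack([local_gradient(thetas[i], *data[i])
                      for i in range(n_agents)])
    thetas = W @ thetas - step * grads

print("distance to true model:", np.linalg.norm(thetas.mean(axis=0) - theta_true))

# A robust alternative to plain averaging when some agents may misbehave:
robust_model = trimmed_mean(thetas, trim=1)
print("trimmed-mean aggregate:", robust_model)
```

In this toy run every agent is honest, so the plain consensus average and the trimmed mean nearly coincide; the resilient protocols surveyed in the paper address the harder case where some agents send arbitrary (Byzantine) values.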
Keywords
» Artificial intelligence » Machine learning » Optimization