Federated Learning

Responsable / In charge of : Chuan Xu (Chuan.XU@univ-cotedazur.fr)

Résumé / Abstract :

 In this course, students will explore federated learning and its applications, focusing on how devices such as mobile phones and IoT sensors can collaboratively train a machine learning model while keeping their data local. The course also covers methods for enhancing user privacy in federated learning and for safeguarding the system against malicious devices.

Prérequis / Prerequisite :

  • Background in machine learning 
  • Python programming
 

Objectifs / Objectives :

  • Understand the application scenarios of federated learning
  • Understand how the learning algorithms work and what guarantees they provide
  • Understand how to protect users' privacy and ensure the security of the system
  • Develop a federated learning framework in Python using the Flower package (see the sketch after this list)
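
For the last objective, the sketch below shows the general shape of a Flower (flwr) client following the NumPyClient interface (get_parameters / fit / evaluate). The model, the data handling, and the commented-out launch call are placeholders, and exact signatures and startup helpers differ between Flower versions, so treat this as an illustration rather than a reference implementation.

    import flwr as fl
    import numpy as np

    class SimpleClient(fl.client.NumPyClient):
        """A federated client: the Flower server calls these methods each round."""

        def __init__(self):
            self.weights = np.zeros(10)            # placeholder model parameters

        def get_parameters(self, config):
            # Return the current local model as a list of NumPy arrays
            return [self.weights]

        def fit(self, parameters, config):
            # Receive the global model, train locally on private data, send back the update
            self.weights = parameters[0]
            # ... local training on this device's data would go here ...
            return [self.weights], 100, {}         # (updated params, #local examples, metrics)

        def evaluate(self, parameters, config):
            # Evaluate the received global model on local private data
            loss = 0.0
            return float(loss), 100, {}

    # Connecting to a running Flower server (helper names vary across flwr versions):
    # fl.client.start_numpy_client(server_address="127.0.0.1:8080", client=SimpleClient())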
 

Contenu / Contents :

  • Federated learning frameworks
  • Learning algorithms (FedAvg, FedProx); a minimal FedAvg sketch follows this list
  • Privacy attacks and protection (differential privacy); see the noised aggregation variant in the same sketch
  • Robustness of the system (Byzantine-resilient algorithms)
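
To make the FedAvg and differential-privacy items concrete, here is a minimal NumPy sketch (not the course's reference implementation): each client takes a few gradient steps on its private data, the server averages the returned models weighted by local dataset size, and a noised variant clips each update and adds Gaussian noise before aggregation. The linear-regression task and all parameter values are illustrative.

    import numpy as np

    def local_update(w, X, y, lr=0.1, epochs=5):
        """One client's local training: a few gradient steps on its private data."""
        w = w.copy()
        for _ in range(epochs):
            grad = 2 * X.T @ (X @ w - y) / len(y)      # gradient of the mean squared error
            w -= lr * grad
        return w

    def fedavg_round(w_global, clients):
        """Server step of FedAvg: average client models, weighted by local dataset size."""
        total = sum(len(y) for _, y in clients)
        w_new = np.zeros_like(w_global)
        for X, y in clients:
            w_new += (len(y) / total) * local_update(w_global, X, y)
        return w_new

    def dp_fedavg_round(w_global, clients, clip=1.0, noise_std=0.5):
        """Noised variant: clip each client's update, average, then add Gaussian noise."""
        rng = np.random.default_rng()
        agg = np.zeros_like(w_global)
        for X, y in clients:
            delta = local_update(w_global, X, y) - w_global
            delta *= min(1.0, clip / (np.linalg.norm(delta) + 1e-12))   # norm clipping
            agg += delta / len(clients)
        agg += rng.normal(0.0, noise_std * clip / len(clients), size=agg.shape)  # Gaussian mechanism
        return w_global + agg

    # Toy simulation: 3 clients holding differently distributed private data
    rng = np.random.default_rng(42)
    true_w = np.array([2.0, -1.0])
    clients = []
    for shift in (0.0, 1.0, 2.0):
        X = rng.normal(shift, 1.0, size=(50, 2))
        clients.append((X, X @ true_w + rng.normal(0, 0.1, size=50)))

    w = np.zeros(2)
    for _ in range(20):
        w = fedavg_round(w, clients)
    print("FedAvg estimate of the true weights", true_w, "->", w)

Replacing fedavg_round with dp_fedavg_round in the loop trades some accuracy for the noise required by differential privacy.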
 

Références / References :

  • McMahan et al., Communication-Efficient Learning of Deep Networks from Decentralized Data, AISTATS 2017
  • Li et al., Federated Learning: Challenges, Methods, and Future Directions, IEEE Signal Processing Magazine, pp. 50-60, 2020
  • Hard et al., Federated Learning for Mobile Keyboard Prediction, arXiv:1811.03604, 2019
  • Kairouz et al., Advances and Open Problems in Federated Learning, Now Foundations and Trends, 2021
  • Geiping et al., Inverting Gradients - How Easy Is It to Break Privacy in Federated Learning?, NeurIPS 2020
  • Yin et al., See Through Gradients: Image Batch Recovery via GradInversion, CVPR 2021
  • Blanchard et al., Machine Learning with Adversaries: Byzantine Tolerant Gradient Descent, NeurIPS 2017
  • Guerraoui et al., Byzantine Machine Learning: A Primer, ACM Computing Surveys, August 2023


Acquis / Knowledge :

  • Knowledge of federated learning algorithms and frameworks
  • Ability to design a privacy-preserving learning algorithm


Evaluation / Assessment :

  • Exam (50%)
  • Lab work (TP) (30%)
  • Multiple-choice quiz (QCM) (20%)