In federated learning (FL), edge nodes collaboratively build learning models from locally generated data. FL introduces several challenges beyond traditional learning, including (i) the need for privacy guarantees on locally residing data, (ii) communication efficiency from edge devices, (iii) robustness to malicious or malfunctioning nodes, and (iv) the need for personalization given heterogeneity in data and resources. In this talk we focus on privacy and personalization.
We will first describe some of our recent work on trade-offs between privacy and learning performance for federated learning in the context of the shuffled privacy model. Our goals include accounting for (client) sampling, obtaining better composition bounds (using Rényi DP), and ensuring communication efficiency. We will briefly present our theoretical results along with numerical evaluations.
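As background for the shuffled privacy model mentioned above, the following minimal sketch (an illustration of the general idea, not the talk's actual mechanism) shows the three-party pipeline for a single binary statistic: each client applies a local randomizer (here, randomized response with local parameter `epsilon0`), an intermediary shuffles the anonymized reports so the server cannot link a report to a client, and the server computes a debiased mean. The function names and parameters are hypothetical.

```python
import math
import random

def randomized_response(bit, epsilon0):
    """Local randomizer: keep the bit with probability
    e^{epsilon0} / (1 + e^{epsilon0}), otherwise flip it."""
    p_keep = math.exp(epsilon0) / (1 + math.exp(epsilon0))
    return bit if random.random() < p_keep else 1 - bit

def shuffle_reports(reports):
    """Shuffler: uniformly permute the reports, breaking the link
    between each report and the client that sent it."""
    shuffled = list(reports)
    random.shuffle(shuffled)
    return shuffled

def estimate_mean(shuffled, epsilon0):
    """Server: debias the average of the randomized reports.
    Since E[report] = (2p - 1) * bit + (1 - p), invert affinely."""
    p = math.exp(epsilon0) / (1 + math.exp(epsilon0))
    raw = sum(shuffled) / len(shuffled)
    return (raw - (1 - p)) / (2 * p - 1)
```

Privacy amplification by shuffling (the phenomenon the talk's bounds sharpen) says that after the shuffle, the central privacy guarantee observed by the server is much stronger than each client's local `epsilon0` alone would suggest.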
Statistical heterogeneity of data in FL has motivated the design of personalized learning, where individual (personalized) models are trained through collaboration. In the second part of the talk we present a statistical framework that unifies several different personalized FL algorithms and suggests new ones. We develop novel private personalized estimation under this framework. We then use the framework to propose new personalized learning algorithms, including AdaPeD, which is based on information-geometry regularization and numerically outperforms several known algorithms.
Parts of this talk are joint work with Kaan Ozkara, Antonious Girgis, Deepesh Data, Peter Kairouz, and Theertha Suresh, and have appeared in AISTATS, NeurIPS, ACM CCS, and ICLR, among other venues.
Biography:
Suhas Diggavi is currently a Professor of Electrical and Computer Engineering at UCLA. He received his undergraduate degree from IIT Delhi and his PhD from Stanford University. He has worked as a principal member of research staff at AT&T Shannon Laboratories and directed the Laboratory for Information and Communication Systems (LICOS) at EPFL. At UCLA, he directs the Information Theory and Systems Laboratory.
His research interests include information theory and its applications to several areas including machine learning, security & privacy, wireless networks, data compression, cyber-physical systems, bioinformatics and neuroscience; more information can be found at http://licos.ee.ucla.edu.
He has received several recognitions for his research from IEEE and ACM, including the 2013 IEEE Information Theory Society & Communications Society Joint Paper Award, the 2021 ACM Conference on Computer and Communications Security (CCS) Best Paper Award, the 2013 ACM International Symposium on Mobile Ad Hoc Networking and Computing (MobiHoc) Best Paper Award, and the 2006 IEEE Donald G. Fink Prize Paper Award, among others. He was selected as a Guggenheim Fellow for Natural Sciences in 2021. He also received the 2019 Google Faculty Research Award, the 2020 Amazon Faculty Research Award, and the 2021 Facebook/Meta Faculty Research Award. He served as an IEEE Distinguished Lecturer and on the Board of Governors of the IEEE Information Theory Society (2016-2021). He is a Fellow of the IEEE.
He is the Editor-in-Chief of the IEEE BITS Information Theory Magazine, has been an associate editor for the IEEE Transactions on Information Theory, the ACM/IEEE Transactions on Networking, and other journals and special issues, and has served on the program committees of several IEEE conferences. He has also helped organize IEEE and ACM conferences, including serving as Technical Program Co-Chair for the 2012 IEEE Information Theory Workshop (ITW), Technical Program Co-Chair for the 2015 IEEE International Symposium on Information Theory (ISIT), and General Co-Chair for ACM MobiHoc 2018. He has 8 issued patents.