ESCAI: Efficient and Secure Collaborative AI

Project Description

Collaborative (federated) machine learning has recently emerged as a promising approach for building machine learning models using distributed training data held by many parties. In this setting, the training algorithm itself is distributed, and participants repeatedly exchange information derived from their data for the AI task at hand, through one or more aggregator servers. The objective of such an algorithm is to enable all participants to converge to a global model while their data remain local. This approach is therefore very attractive to parties that own sensitive data and agree to perform a common AI task, yet are unwilling to pool their data for centralized training of a model. A minimal sketch of this aggregation loop is given below.
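
For illustration only, the following sketch shows one common way such a training loop can look: a federated-averaging round over a toy linear model in Python. The function names, update rule, and aggregation weights are illustrative assumptions, not ESCAI's actual algorithm.

    import numpy as np

    def local_update(w, X, y, lr=0.1):
        # One gradient step on a participant's private data; the raw data never leave the participant.
        grad = X.T @ (X @ w - y) / len(y)
        return w - lr * grad

    def aggregate(updates, sizes):
        # The aggregator combines the participants' model updates, weighted by local dataset size.
        return np.average(np.stack(updates), axis=0, weights=sizes)

    # Toy setup: three participants, each holding a private regression dataset.
    rng = np.random.default_rng(0)
    true_w = np.array([1.0, -2.0, 0.5])
    parties = []
    for _ in range(3):
        X = rng.normal(size=(100, 3))
        parties.append((X, X @ true_w + 0.01 * rng.normal(size=100)))

    # Repeated rounds: each party trains locally, the aggregator averages the updates.
    w = np.zeros(3)
    for _ in range(50):
        updates = [local_update(w, X, y) for X, y in parties]
        w = aggregate(updates, [len(y) for _, y in parties])

    print(w)  # approaches true_w, although no party shared its raw data

Only model updates are exchanged in this pattern; as noted below, however, even these updates can leak sensitive information, which is one of the problems this project addresses.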

There are severe obstacles limiting the widespread deployment of secure, efficient, and truly privacy-preserving collaborative AI. Recent research shows that existing collaborative AI algorithms can leak a significant amount of sensitive information about local datasets, are not robust to noisy and heterogeneous data, are susceptible to adversarial interventions, and impose substantial communication and computation costs on the participants.

In this project, we propose Efficient and Secure Collaborative Artificial Intelligence (ESCAI), a framework for large-scale distributed machine learning with low computation and communication overhead, provable data privacy, and strong robustness guarantees against adversarial entities.

Research Technical Areas

  • Data Privacy in Federated Learning
  • Robustness in Federated Learning
  • Low-power Machine Learning

Benefits to Society

Privacy is an enabler for data science. Different organizations can benefit from each other's data when a secure and privacy-preserving platform enables them to share their non-sensitive information without leaking their sensitive data. Our project provides an efficient collaborative platform for building large-scale AI algorithms over distributed data. This is of great interest to the finance sector, government agencies, small businesses, and mobile/IoT applications, which collect sensitive data and benefit from collaborating with other parties to build smart systems with collective intelligence.

Team’s Principal Investigator

Assistant Professor Reza Shokri

National University of Singapore

Reza Shokri is a NUS Presidential Young Professor of Computer Science. His research focuses on data privacy and trustworthy machine learning. He has worked on the theoretical analysis of privacy and security, as well as the design and implementation of secure and privacy-preserving technologies, for applications in mobile networks, databases, machine learning as a service, and federated learning. He received the Caspar Bowden Award for Outstanding Research in Privacy Enhancing Technologies in 2018 for his work on analyzing the privacy risks of machine learning, and was the runner-up for the same award in 2012 for his work on developing a framework for quantifying location privacy. He received his PhD from EPFL.

Recent Notable Awards

  • NUS Presidential Young Professorship, 2019
  • The Caspar Bowden Award for Outstanding Research in Privacy Enhancing Technologies, 2018
  • Runner-up for the PET Award for Outstanding Research in Privacy Enhancing Technologies, 2012

The Team

Co-Principal Investigator

Professor Li Shiuan Peh – National University of Singapore
Research Areas: Low-power hardware architectures, Networked chips and networked systems

Collaborators

Assistant Professor Jonathan Scarlett – National University of Singapore
Research Interests: Machine learning, information theory, high-dimensional statistics


Associate Professor Daniel Seng – National University of Singapore
Research Interests: Information technology law, privacy and data protection law, machine learning techniques with particular application in the law