CS SEMINAR

Surviving Poisoning and Brokering Agreements in Distributed ML

Speaker
Associate Professor Ivan Beschastnikh, University of British Columbia

10 Mar 2020 Tuesday, 10:00 AM to 11:30 AM

Executive Classroom, COM2-04-02

Abstract:
The decentralization of ML procedures is driven by growing security concerns and scalability challenges. For example, federated learning is a state-of-the-art approach that has been adopted in production at Google. However, such decentralization opens the door for malicious clients to participate in training.

In this talk, I will discuss two projects from our group in this space.

The first project, FoolsGold, is designed to work alongside existing systems: it minimally augments the SGD algorithm in the context of federated learning to provide protection against sybil-based poisoning attacks. In the second project, we propose a brokering system that mediates the interaction between clients (that have data) and a service (that wants the model). This new organization of distributed learning enables better guarantees for all the participants.
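To give a flavor of the FoolsGold idea, the sketch below illustrates one core intuition from the paper: sybil clients pushing the same poisoning objective tend to submit unusually similar updates, so clients whose updates closely resemble another client's can be down-weighted during aggregation. This is a minimal illustrative sketch, not the authors' implementation; the function name and the exact weighting formula are assumptions for illustration.

```python
import numpy as np

def similarity_based_weights(update_histories):
    """Illustrative sketch (not the actual FoolsGold algorithm):
    assign each client an aggregation weight that shrinks as its
    accumulated update vector becomes more similar to another
    client's, on the intuition that colluding sybils look alike."""
    # Normalize each client's accumulated update to unit length.
    H = np.array([h / (np.linalg.norm(h) + 1e-12) for h in update_histories])
    # Pairwise cosine similarity between clients.
    sim = H @ H.T
    np.fill_diagonal(sim, 0.0)          # ignore self-similarity
    max_sim = sim.max(axis=1)           # each client's closest match
    # Highly similar to someone else -> near-zero weight.
    alpha = np.clip(1.0 - max_sim, 0.0, 1.0)
    # Rescale so the most distinctive (likely honest) client keeps weight 1.
    return alpha / (alpha.max() + 1e-12)
```

For example, two sybils submitting identical updates receive near-zero weight, while an honest client with a distinct update keeps full weight in the aggregate.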

The two projects have corresponding papers that are available online:
https://arxiv.org/pdf/1808.04866
https://www.cs.ubc.ca/~bestchai/papers/apsys19-brokering-extended.pdf


Biodata:
Ivan Beschastnikh is an Associate Professor in the Department of Computer Science at the University of British Columbia. He finished his PhD at the University of Washington in 2013 and received his formative training at the University of Chicago. He has broad research interests that touch on systems, formal methods, privacy, and security.
Visit his homepage to learn more: http://www.cs.ubc.ca/~bestchai/