Systems | Information | Learning | Optimization
Model Inversion and Other Threats in Machine Learning

I’m going to talk about some of our recent and ongoing work on topics that touch on machine learning and optimization. I’ll focus mainly on our work on model inversion attacks. Consider a machine learning model f that takes features x_1,…,x_t and produces from them a prediction y. In many contexts some features are sensitive; I’ll discuss pharmacogenetics as one such setting, in which x_t represents a person’s genetic markers. We show that an attacker who obtains access to f, some subset of the other features x_1,…,x_{t-1}, and a value related to y can infer x_t (hence “inverting” the model). I will talk about such attacks in the case of pharmacogenetics as well as machine-learning-as-a-service settings.
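To make the inversion step concrete, here is a minimal sketch of how such an attack can proceed when the sensitive feature ranges over a small finite set: try each candidate value of x_t, query f, and keep the candidate whose prediction best explains the observed y, weighted by a marginal prior. The function name `invert_model`, the Gaussian error model, and the `prior`/`sigma` parameters are illustrative assumptions, not the exact algorithm from the work being presented.

```python
import math

def invert_model(model, known_features, candidates, prior, y, sigma=1.0):
    """Return a MAP-style estimate of the sensitive feature x_t.

    model          -- callable: full feature vector -> prediction (the model f)
    known_features -- list of the non-sensitive features x_1..x_{t-1}
    candidates     -- finite set of possible values for x_t
    prior          -- dict mapping each candidate to its marginal probability
    y              -- observed value related to the model's output
    sigma          -- assumed std. dev. of the error between y and f's output
    """
    best, best_score = None, -math.inf
    for x_t in candidates:
        y_hat = model(known_features + [x_t])
        # Log-likelihood of y under an assumed Gaussian error model, plus log prior.
        score = -((y - y_hat) ** 2) / (2 * sigma ** 2) + math.log(prior[x_t])
        if score > best_score:
            best, best_score = x_t, score
    return best

# Toy usage: a linear model whose last feature is "sensitive".
f = lambda xs: 2.0 * xs[0] + 1.0 * xs[1] + 3.0 * xs[2]
guess = invert_model(f, [0.5, 1.0], candidates=[0, 1],
                     prior={0: 0.7, 1: 0.3}, y=5.0)
print(guess)  # -> 1, since f([0.5, 1.0, 1]) = 5.0 matches y exactly
```

The point the sketch illustrates is that the attacker never needs the training data: black-box query access to f, partial feature knowledge, and an output-related value suffice.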

Time allowing, I’ll briefly mention our work on sensing in adversarial settings.

This talk will cover joint work with Matthew Fredrikson, Eric Lantz, Somesh Jha, Simon Lin, and David Page.

February 18, 2015
12:30 pm (1h)

Discovery Building, Orchard View Room

Tom Ristenpart