Systems | Information | Learning | Optimization
 

How to Poison Linear Regression

I will use linear regression as a guinea pig to illustrate data poisoning attacks in adversarial machine learning. An adversary attempts to fool linear regression into learning wrong regression coefficients: perhaps customers are more satisfied the longer they sit in your waiting room, or maybe Wisconsin isn’t warming. The adversary does so by modifying — poisoning — part of the training data. I will show feasible — and sometimes optimal — poisoning attacks on the guinea pig and its friends. Such research is laden with open problems for our optimization and control theory colleagues. My talk may make you ponder the poisons and antidotes for bigger animals such as deep nets. No Animals Were Harmed in the Making of This Talk.
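As a minimal sketch of the idea in the abstract (not taken from the talk itself), the snippet below fits ordinary least squares on synthetic data and then adds a small fraction of adversarially placed points at high leverage with flipped responses, which drags the learned slope far from the true one. The data values, the 10% poison budget, and the poison placement are all hypothetical choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Clean training data: y depends positively on x (true slope +2).
n = 100
x = rng.uniform(0, 10, size=n)
y = 2.0 * x + 1.0 + rng.normal(scale=1.0, size=n)

def fit_slope(x, y):
    """Ordinary least squares fit of y = w*x + b; returns (w, b)."""
    X = np.column_stack([x, np.ones_like(x)])
    w, b = np.linalg.lstsq(X, y, rcond=None)[0]
    return w, b

print("clean fit:", fit_slope(x, y))   # slope close to +2

# Poisoning: the adversary controls a small fraction of the training set
# and places those points at high leverage (large x) with responses chosen
# to pull the learned slope toward a negative target.
n_poison = 10                      # 10% of the training set (hypothetical budget)
x_p = np.full(n_poison, 30.0)      # far outside the clean x range -> high leverage
y_p = -2.0 * x_p                   # flipped responses

x_all = np.concatenate([x, x_p])
y_all = np.concatenate([y, y_p])
print("poisoned fit:", fit_slope(x_all, y_all))  # learned slope is driven negative
```

The sketch only greedily places poison points by hand; finding the poison set that is optimal under a budget is the kind of optimization problem the abstract alludes to.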
September 5 @ 12:30 pm (1h)

Discovery Building, Orchard View Room

Jerry Zhu