The large-scale gathering and storage of personal data is raising new questions about the regulation of privacy. On the technology side, there has been a flurry of recent work on new models for privacy risk and protection. One such model is differential privacy, which quantifies the privacy risk to an individual whose data is included in a database. Differentially private algorithms introduce noise into their computations to limit this risk, allowing the output to be released publicly. I will describe new algorithms for differentially private machine learning tasks such as learning a classifier and principal components analysis (PCA). I will also discuss how guaranteeing privacy affects the performance of these algorithms, present results on real data sets, and point to some exciting future directions.
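As a flavor of how differentially private algorithms introduce noise, here is a minimal sketch of the standard Laplace mechanism (a basic differential-privacy primitive, not the specific algorithms in this talk): to release a statistic with sensitivity s under epsilon-differential privacy, one adds Laplace noise with scale s/epsilon. The function name and parameters below are illustrative.

```python
import numpy as np

def laplace_mechanism(true_value, sensitivity, eps, seed=None):
    """Release true_value with eps-differential privacy by adding
    Laplace noise of scale sensitivity / eps (the Laplace mechanism)."""
    rng = np.random.default_rng(seed)
    noise = rng.laplace(loc=0.0, scale=sensitivity / eps)
    return true_value + noise

# Example: privately release a count query. A count changes by at most
# 1 when one person's record is added or removed, so sensitivity = 1.
true_count = 42
private_count = laplace_mechanism(true_count, sensitivity=1.0, eps=0.5)
```

Smaller eps means stronger privacy but more noise; this trade-off between privacy and utility is the central theme of the algorithms above.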
Parts of this work are joint with Kamalika Chaudhuri, Claire Monteleoni, Kaushik Sinha, Staal Vinterbo, and Aziz Boxwala.
November 7 @ 12:30 pm (1h)
Discovery Building, Orchard View Room