Speaker: Kassem Fawaz
Adversarial Face Obfuscation: Effectiveness and Fairness Properties
Advances in deep learning have made face recognition technologies pervasive. While useful to social media platforms and users, this technology carries significant privacy threats. Coupled with the abundant information they have about users, service providers can associate users with social interactions, visited places, activities, and preferences, some of which the user may not want to share. Additionally, facial recognition models used by various agencies are trained on data scraped from social media platforms. This talk discusses recent advances in adversarial machine learning-based approaches to mitigate these privacy risks. First, we introduce Face-Off, a privacy-preserving framework that applies strategic perturbations to the user’s face to prevent it from being correctly recognized. Face-Off overcomes a set of challenges related to the black-box nature of commercial face recognition services and the scarcity of literature on adversarial examples for metric networks. Second, the talk discusses the fairness issues associated with face obfuscation systems more generally. We show that metric embedding networks are demographically aware: they cluster faces in the embedding space based on their demographic attributes. We observe that this effect carries through to face obfuscation systems: faces belonging to minority groups incur reduced utility compared to those from majority groups. For example, the disparity in average obfuscation success rate on the online Face++ API can reach up to 20 percentage points. We present an intuitive analytical model to provide insights into these phenomena.
February 2 @ 12:30 pm (1h)
Orchard View Room, Virtual