Abstract: This talk has two parts. In the first half, I’ll discuss ways that adversarial machine learning can be used to protect or infringe upon the privacy of users. This includes methods for disincentivizing data scraping by creating “unlearnable” data that cannot be used for model training. In the second half, I’ll present my recent work on “thinking systems” for symbolic reasoning. One important reasoning capability is logical extrapolation, in which models trained only on small, simple reasoning problems synthesize algorithms that scale up to large, complex problems at test time. We consider inference processes in which a logical reasoning problem is represented in memory and then iteratively manipulated and simplified until a solution is found. When recurrent models are trained only on “easy” problem instances, they synthesize and internally represent scalable algorithms. These neural algorithms can then solve “hard” problem instances without ever having seen one, provided the model is allowed to “think” for longer at test time.
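To make the extrapolation idea concrete, here is a minimal toy sketch (not the speaker’s actual neural model): a single fixed “thinking” step is applied repeatedly to a problem state held in memory. Each iteration propagates information one position along the input, so longer (harder) instances are solved by running the very same step for more iterations at test time. The task, functions, and instance sizes below are illustrative choices, not taken from the talk.

```python
def think_step(state, bits):
    """One recurrent update on the stored state: s[i] <- s[i-1] XOR bits[i].

    Each application extends the range of correct prefix parities by
    one position, so deeper iteration handles longer inputs.
    """
    return [bits[0]] + [state[i - 1] ^ bits[i] for i in range(1, len(bits))]


def solve(bits, iterations):
    """Iterate the identical step on the problem representation.

    After at least len(bits) - 1 iterations the state converges to
    the prefix parity of the input.
    """
    state = list(bits)  # the problem, loaded into "memory"
    for _ in range(iterations):
        state = think_step(state, bits)
    return state


def prefix_parity(bits):
    """Ground truth: running XOR of all bits up to each position."""
    out, acc = [], 0
    for b in bits:
        acc ^= b
        out.append(acc)
    return out


easy = [1, 0, 1, 1]                            # "easy" training-scale instance
hard = [1, 1, 0, 1, 0, 0, 1, 1, 1, 0, 1, 0]    # longer "hard" test instance

assert solve(easy, len(easy)) == prefix_parity(easy)
# The same unchanged step solves the harder instance, given more "thinking" time:
assert solve(hard, len(hard)) == prefix_parity(hard)
```

The design point the sketch illustrates: nothing about the update rule changes between the easy and hard instances; only the number of iterations grows with problem size, mirroring the test-time scaling described in the abstract.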
Bio: Tom Goldstein is the Perotto Associate Professor of Computer Science at the University of Maryland. His research lies at the intersection of machine learning and optimization, and targets applications in computer vision and signal processing. Before joining the faculty at Maryland, Tom completed his PhD in Mathematics at UCLA, and was a research scientist at Rice University and Stanford University. Professor Goldstein has been the recipient of several awards, including SIAM’s DiPrima Prize, a DARPA Young Faculty Award, a JP Morgan Faculty Award, and a Sloan Fellowship.