From a theoretical standpoint, an elegant approach to designing statistically consistent learning algorithms is through convex calibrated surrogate losses. From a practical standpoint, an approach that is often favored is output coding, which reduces multiclass learning to a set of simpler binary classification problems. In this talk, I will discuss recent progress in bringing these seemingly disparate approaches together under a unifying lens to develop statistically consistent and computationally efficient learning algorithms for a wide range of problems, in some cases recovering existing state-of-the-art algorithms and in other cases providing new ones. Our algorithms require learning at most r real-valued scoring functions, where r is the rank of the target loss matrix, and come with corresponding principled decoding schemes. I will also discuss connections with the field of property elicitation, as well as new tools for deriving quantitative regret transfer bounds via strongly proper losses.
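The structural fact highlighted in the abstract (a rank-r target loss matrix requires learning at most r real-valued scoring functions, followed by a decoding step) can be illustrated with a small sketch. The following is a hedged illustration only, not the speaker's algorithm: the toy loss matrix, the conditional label distribution p, and the SVD-based factorization are assumptions chosen purely for the example.

```python
# Minimal sketch (illustrative only): why a rank-r loss matrix needs at most
# r scoring functions, plus a simple expected-loss decoding step.
import numpy as np

# Toy target loss matrix L[y, t] over 4 labels; this particular matrix has rank 3.
L = np.array([[0., 1., 1., 2.],
              [1., 0., 2., 1.],
              [1., 2., 0., 1.],
              [2., 1., 1., 0.]])
r = np.linalg.matrix_rank(L)          # rank of the target loss matrix

# Factor L = A @ B with A of shape (n_labels, r) and B of shape (r, n_labels),
# here via a truncated SVD (one possible factorization, chosen for illustration).
U, s, Vt = np.linalg.svd(L)
A = U[:, :r] * s[:r]                  # left factors, scaled by singular values
B = Vt[:r, :]

# Suppose a learner provides, for an instance x, the r scores
# f_k(x) = sum_y P(y | x) * A[y, k]; with the true conditional they are exact.
p = np.array([0.1, 0.6, 0.2, 0.1])    # hypothetical conditional label distribution
f = p @ A                             # r real-valued scores for this instance

# Decoding: predict the label minimizing the estimated expected loss.
expected_loss = f @ B                 # equals p @ L up to factorization error
pred = int(np.argmin(expected_loss))
assert np.allclose(expected_loss, p @ L)
print(r, pred)
```

In this sketch the r scores fully determine the expected loss of every candidate prediction, so the decoder never needs more than r learned quantities regardless of the number of labels.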
Bio: Shivani Agarwal is the Rachleff Family Associate Professor of Computer and Information Science at the University of Pennsylvania, where she also directs the NSF-sponsored Penn Institute for Foundations of Data Science (PIFODS) and co-directs the Penn Research in Machine Learning (PRiML) center. She is currently an Action Editor for the Journal of Machine Learning Research and an Associate Editor for the Harvard Data Science Review, and she served as Program Co-chair for COLT 2020. Her research interests include the computational, mathematical, and statistical foundations of machine learning and data science; applications of machine learning in the life sciences and beyond; and connections between machine learning and other disciplines such as economics, operations research, and psychology. Her group’s research has been selected four times for spotlight presentations at the NeurIPS conference.
Orchard View Room, Virtual
Shivani Agarwal