Are our familiar Euclidean space and its linear structure always the right setting for machine learning? A wave of exciting recent research argues otherwise: Euclidean structure is not always needed, and is sometimes even harmful. Starting with hyperbolic representations for hierarchical data, a major push has produced new representations in non-Euclidean spaces, new algorithms and models that operate on non-Euclidean data, and new perspectives on the underlying functionality of non-Euclidean ML. In this talk, we review some of these ideas. We focus on the fundamental tradeoffs involved in producing hyperbolic embeddings, in particular the interplay between faithfulness, graph properties, embedding dimension, and numerical precision. We propose new representation spaces built from product manifolds, and apply these ideas to practical tasks in knowledge base embedding. Finally, we consider some key challenges in making all types of models work in non-Euclidean spaces.
Joint work with Albert Gu, Ines Chami, Chris Ré (Stanford) and Chris De Sa (Cornell).
November 18 @ 12:30 pm (1h)