Systems | Information | Learning | Optimization

SILO: Generalized Tensor Decompositions: Algorithms and Applications

Abstract:

Tensor decompositions generalize matrix decompositions from matrix data (i.e., 2-D arrays) to tensor data (i.e., N-D arrays) and are a fundamental technique for uncovering low-dimensional structure in high-dimensional datasets, with applications across all of science and engineering. Conventional tensor decompositions seek low-rank tensors that best fit the data with respect to the least squares loss. However, other choices of loss function can sometimes be more appropriate. For example, one may have non-Gaussian data such as count data (for which a Poisson loss may be more appropriate), or one could have outliers in the data (for which a Huber loss could be more appropriate). This talk will present work on generalized tensor decompositions that allow the user (e.g., a data scientist) to select a general loss function. We will describe work on efficient algorithms, and we will see some illustrative applications (where choosing different losses gives us a way to view the data through different “lenses”).
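To make the idea concrete, here is a minimal sketch (not the speaker's actual method or software) of a generalized CP decomposition for a 3-way tensor: the user supplies only the elementwise derivative of their chosen loss, and plain gradient descent updates the factor matrices. The function names (`cp_model`, `gcp_decompose`) and all parameter choices are illustrative assumptions.

```python
import numpy as np

def cp_model(factors):
    # Reconstruct the full 3-way tensor as a sum of rank-1 terms.
    A, B, C = factors
    return np.einsum('ir,jr,kr->ijk', A, B, C)

def gcp_decompose(X, rank, loss_grad, steps=5000, lr=0.01, seed=0):
    """Generalized CP decomposition by plain gradient descent (illustrative).

    loss_grad(x, m) returns the elementwise derivative dL/dm of the
    user-chosen loss L(x, m) with respect to the model entry m.
    """
    rng = np.random.default_rng(seed)
    factors = [rng.uniform(0.1, 1.0, (n, rank)) for n in X.shape]
    for _ in range(steps):
        M = cp_model(factors)
        G = loss_grad(X, M)  # elementwise gradient tensor dL/dM
        A, B, C = factors
        # Chain rule: the gradient w.r.t. each factor contracts G
        # against the other two factors.
        gA = np.einsum('ijk,jr,kr->ir', G, B, C)
        gB = np.einsum('ijk,ir,kr->jr', G, A, C)
        gC = np.einsum('ijk,ir,jr->kr', G, A, B)
        factors = [A - lr * gA, B - lr * gB, C - lr * gC]
    return factors

# Two example losses from the abstract: least squares (Gaussian data)
# and Poisson (count data); Huber could be plugged in the same way.
ls_grad = lambda x, m: 2.0 * (m - x)                       # L = (m - x)^2
poisson_grad = lambda x, m: 1.0 - x / np.maximum(m, 1e-10)  # L = m - x*log(m)

# Usage sketch: fit a small synthetic low-rank tensor with least squares.
rng = np.random.default_rng(1)
true_factors = [rng.uniform(0.2, 1.0, (4, 2)) for _ in range(3)]
X = cp_model(true_factors)
factors = gcp_decompose(X, rank=2, loss_grad=ls_grad)
rel_err = np.linalg.norm(cp_model(factors) - X) / np.linalg.norm(X)
```

Swapping `ls_grad` for `poisson_grad` changes only the loss derivative, which is the point of the generalized framework: the same algorithmic machinery serves many "lenses" on the data.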


Bio: 

David Hong is an Assistant Professor in the Department of Electrical and Computer Engineering at the University of Delaware. Previously, he was an NSF Postdoctoral Research Fellow in the Department of Statistics and Data Science at the University of Pennsylvania. He completed his PhD in the Department of Electrical Engineering and Computer Science at the University of Michigan, where he was an NSF Graduate Research Fellow. He also spent a summer as a Data Science Graduate Intern at Sandia National Labs. His research interests include theoretical foundations of low-dimensional modeling for modern high-dimensional and heterogeneous data, along with its many applications. Website: https://dahong.gitlab.io

April 24 @ 12:30 pm (1h)

Discovery Building, Orchard View Room

David Hong, UDel