Systems | Information | Learning | Optimization
 

Hardware Optimizations for Optimization | Mirror Descent for Metric Learning

Victor’s Abstract:
How we used systems programming techniques to tune convex optimization solvers. I demonstrate how understanding the hardware can influence the runtime of Nonnegative Matrix Factorization (NMF).
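As a rough illustration of the kind of solver the talk tunes (a sketch of my own, not Victor's implementation), the inner loop of NMF is dominated by dense matrix products, so keeping operands contiguous and letting a tuned BLAS do the multiplications is where hardware awareness shows up in the runtime. The update rule below is the standard Lee-Seung multiplicative update; variable names, tolerances, and iteration counts are illustrative assumptions.

    import numpy as np

    def nmf_multiplicative(V, rank, iters=200, eps=1e-9, seed=0):
        """Lee-Seung multiplicative updates for V ~ W @ H.

        Hardware-conscious details: W and H stay C-contiguous so the
        matrix products hit the BLAS fast path, and intermediates such
        as W.T @ W and H @ H.T are formed once per update instead of
        being recomputed inside elementwise expressions.
        """
        rng = np.random.default_rng(seed)
        m, n = V.shape
        W = rng.random((m, rank))
        H = rng.random((rank, n))
        for _ in range(iters):
            # H <- H * (W^T V) / (W^T W H)
            H *= (W.T @ V) / ((W.T @ W) @ H + eps)
            # W <- W * (V H^T) / (W H H^T)
            W *= (V @ H.T) / (W @ (H @ H.T) + eps)
        return W, H

On a machine with a threaded BLAS, the two rank-k products per iteration dominate the cost, so memory layout and the library's cache blocking largely determine how fast this runs.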

————————————-
Gautam’s Abstract:
Most metric learning methods are characterized by diverse loss functions and projection methods, which naturally raises the question: is there a wider framework that generalizes many of these methods? Other persistent issues are scalability to large data sets and kernelizability. We propose a unified approach to Mahalanobis metric learning: an online regularized metric learning algorithm based on composite objective mirror descent (COMID). The metric learning problem is formulated as a regularized positive semidefinite matrix learning problem, whose update rules can be derived within the COMID framework. The approach aims to be scalable and kernelizable, and it admits many different Bregman divergences and loss functions, which allows several different classes of algorithms to be tailored from it. The most novel contribution is the use of the trace norm as the regularizer, which yields a metric that is sparse in its eigenspectrum and thus performs feature selection simultaneously with metric learning.
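For context, the COMID update the abstract refers to has the following generic shape; the instantiation below (step size \eta, per-round loss \ell_t, Bregman divergence B_\psi, trace-norm weight \lambda) is a sketch of the standard composite mirror descent step applied to a PSD metric M, not necessarily the authors' exact formulation:

    M_{t+1} = \arg\min_{M \succeq 0} \; \eta \, \langle \nabla \ell_t(M_t),\, M \rangle \; + \; B_\psi(M, M_t) \; + \; \eta \, \lambda \, \| M \|_*

The composite term \lambda \| M \|_* is kept intact rather than linearized; with a Frobenius Bregman divergence, for example, the minimization soft-thresholds the eigenvalues of the intermediate iterate, which is what produces the sparse eigenspectrum, and hence the implicit feature selection, described above.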

February 5 @ 12:30 pm (1h)

Discovery Building, Orchard View Room

Gautam Kunapuli, Victor Bittorf