Systems | Information | Learning | Optimization
 

Large Deviations and Extensions for Learning Tree Models

Learning the structure of graphical models from data is a fundamental task in many scientific domains. I will describe theoretical analyses and applications of learning graphical models. We analyze the theoretical properties of a well-known structure learning algorithm, the Chow-Liu algorithm (1968), which learns the maximum-likelihood (ML) tree structure from independently drawn observations of a multivariate discrete tree distribution. Using the theory of large deviations, we analyze the error exponent, i.e., the exponential rate of decay of the probability that the ML estimate of the Markov tree structure differs from the true tree. By exploiting the tree structure, we establish that the error exponent is equal to the exponential rate of decay of a single dominant crossover event. Using ideas from Euclidean information theory, we then analyze ML estimation in the very-noisy learning regime and show that the error exponent can be approximated by the signal-to-noise ratio (SNR) for learning tree distributions. Extensions to Gaussian graphical models also allow us to quantify the relative ease of learning certain classes of trees. More specifically, we are able to conclude that star graphs are the “hardest” to learn, while Markov chains are the “easiest” to learn.
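The Chow-Liu procedure referred to above reduces ML tree-structure learning to a maximum-weight spanning tree problem over empirical pairwise mutual informations. The following is a minimal Python sketch of that reduction, not the speaker's implementation; it assumes numpy and networkx are available, and the helper names are illustrative.

```python
import itertools
import numpy as np
import networkx as nx

def empirical_mutual_information(x, y):
    """Plug-in estimate of I(X;Y) in nats from paired discrete samples."""
    n = len(x)
    joint = {}
    for a, b in zip(x, y):
        joint[(a, b)] = joint.get((a, b), 0) + 1
    px = {a: np.mean(x == a) for a in set(x)}
    py = {b: np.mean(y == b) for b in set(y)}
    mi = 0.0
    for (a, b), count in joint.items():
        p_ab = count / n
        mi += p_ab * np.log(p_ab / (px[a] * py[b]))
    return mi

def chow_liu_tree(samples):
    """samples: (n, d) array of discrete observations; returns the edge list
    of the maximum-likelihood tree structure (Chow-Liu, 1968)."""
    n, d = samples.shape
    g = nx.Graph()
    g.add_nodes_from(range(d))
    # Weight every pair of variables by its empirical mutual information.
    for i, j in itertools.combinations(range(d), 2):
        w = empirical_mutual_information(samples[:, i], samples[:, j])
        g.add_edge(i, j, weight=w)
    # The ML tree is the maximum-weight spanning tree of these weights.
    mst = nx.maximum_spanning_tree(g, weight="weight")
    return sorted(mst.edges())

# Example usage on synthetic data (hypothetical):
# X = np.random.randint(0, 2, size=(1000, 5))
# print(chow_liu_tree(X))
```

The error exponent analyzed in the talk quantifies how quickly the probability that this estimated edge set differs from the true tree decays as the number of samples grows.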

I will also discuss extensions to learning latent tree models, i.e., models in which only a subset of the variables is observed. We derive algorithms that are consistent and have low computational and sample complexity. If time permits, I will talk about some applications in medical analysis.

Joint work with Alan Willsky, Anima Anandkumar, Myung Jin Choi, and collaborators at Microsoft Research Cambridge and the University of Manchester.

October 14 @ 12:30 pm (1h)

Discovery Building, Orchard View Room

Vincent Tan