Systems | Information | Learning | Optimization
 

Have you heard the good news about L^p graphons?

We talk a lot in this seminar about random graphs, and by this we often mean random graphs in the Erdős–Rényi sense. In recent years, the theory of graph limits, or graphons, has been developed; it gives us a very nice analytic way of describing various Erdős–Rényi-like graphs. You might think of a graphon as the limit of the stochastic block model as the number of blocks k goes to infinity.
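To make the connection concrete, here is a toy sketch (my own illustration, not from the paper) of how one samples a W-random graph from a graphon: give each vertex a uniform label in [0,1], then connect pairs independently with probability given by a symmetric function W. A stochastic block model is just the special case where W is piecewise constant.

```python
import random

def sample_graphon_graph(n, W, seed=0):
    """Sample an n-vertex W-random graph: assign each vertex a
    uniform label x_i in [0,1], then connect i and j independently
    with probability W(x_i, x_j)."""
    rng = random.Random(seed)
    x = [rng.random() for _ in range(n)]
    edges = set()
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < W(x[i], x[j]):
                edges.add((i, j))
    return edges

# A 2-block stochastic block model written as a graphon:
# W is constant on the four blocks of [0,1]^2.
def sbm_graphon(x, y, p_in=0.8, p_out=0.1):
    same_block = (x < 0.5) == (y < 0.5)
    return p_in if same_block else p_out

edges = sample_graphon_graph(200, sbm_graphon, seed=1)
```

Refining the partition of [0,1] into more and more blocks is exactly the "k goes to infinity" picture: a general graphon is what such step functions converge to.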

But there is a problem. All these models are DENSE: that is, the expected number of edges grows like a constant times N^2, where N is the number of vertices. What is more, the vertex degrees concentrate around a constant multiple of N (or a finite set of such constants). The real networks we encounter are not like this. They are typically sparse (the number of edges is o(N^2), or even O(N)), and their degrees vary widely. There are various ad hoc ways of generating graphs with these properties, but no general theory of random graph models, descriptive statistics, and so on for such graphs.
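The dense/sparse contrast is easy to see empirically. In the sketch below (my own illustration), a dense Erdős–Rényi graph G(N, p) with constant p has degrees tightly concentrated around pN, while a sparse one with p = c/N has constant average degree and much larger relative spread:

```python
import random

def er_degrees(n, p, seed=0):
    """Degree sequence of one G(n, p) Erdős–Rényi sample."""
    rng = random.Random(seed)
    deg = [0] * n
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                deg[i] += 1
                deg[j] += 1
    return deg

n = 1000
dense = er_degrees(n, 0.1)     # dense regime: degrees near 0.1 * n = 100
sparse = er_degrees(n, 5 / n)  # sparse regime: degrees near 5, Poisson-like
```

Comparing the standard deviation divided by the mean for the two degree sequences shows the sparse graph's degrees fluctuating far more, relatively speaking; and real networks are more extreme still, with heavy-tailed degree distributions no G(N, p) model reproduces.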

Then last week Henry Cohn told me about the theory of L^p graphons, developed by him, Christian Borgs, Jennifer Chayes, and Yufei Zhao at Microsoft Research: a more flexible model which seems very well positioned to capture the features of real-world large graphs.

I will give an intro to their paper and present a bunch of questions which I will bet nobody has really worked on yet and which I think are important.

February 4 @ 12:30 pm (1h)

Discovery Building, Orchard View Room

Jordan Ellenberg