But there is a problem. All these models are DENSE: the expected number of edges is a constant times N^2, where N is the number of vertices. What's more, the vertex degrees concentrate around a constant or a finite set of constants. The real networks we encounter are not like this. They are typically sparse (the number of edges is o(N^2), or even O(N)) and their degrees vary widely. There are various ad hoc ways of generating graphs with these properties, but no general theory of random graph models for them, no descriptive statistics for such graphs, etc.
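To make the dense/sparse contrast concrete, here is a small sketch (the helper `gnp_degrees` is hypothetical, written just for this illustration) using the classical Erdős–Rényi model G(n, p): with p held constant the edge count grows like N^2 and every degree lands near p·N, while taking p = c/N gives only O(N) edges and mean degree near c.

```python
import random

def gnp_degrees(n, p, seed=0):
    # Sample an Erdos-Renyi G(n, p) graph: each of the n*(n-1)/2
    # possible edges appears independently with probability p.
    # Returns the degree sequence.
    rng = random.Random(seed)
    deg = [0] * n
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                deg[i] += 1
                deg[j] += 1
    return deg

n = 500

# Dense regime: p constant, so the expected number of edges is
# p * n * (n - 1) / 2, a constant times N^2; degrees cluster near p * n.
dense = gnp_degrees(n, 0.2)
print("dense:", sum(dense) // 2, "edges, mean degree", sum(dense) / n)

# Sparse regime: p = c / n, so the expected number of edges is about
# c * n / 2 = O(N), and the mean degree is about c.
sparse = gnp_degrees(n, 5 / n)
print("sparse:", sum(sparse) // 2, "edges, mean degree", sum(sparse) / n)
```

Note that even the sparse G(n, c/n) has Poisson-concentrated degrees, so it still misses the heavy-tailed degree variation seen in real networks; that is exactly the gap the models discussed below aim to fill.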

Then last week Henry Cohn told me about the development by him, Christian Borgs, Jennifer Chayes, and Yufei Zhao at Microsoft Research of the theory of L^p graphons, a more flexible model which seems very well positioned to capture the features of real-world large graphs.

I will give an intro to their paper and present a bunch of questions which I'll bet nobody has really worked on yet, and which I think are important.

Discovery Building, Orchard View Room

Jordan Ellenberg