Abstract:
This talk examines the theoretical foundations of Graph Neural Networks (GNNs), focusing on polynomial GNNs (Poly-GNNs). We begin with empirical evidence challenging the need for complex GNN architectures in semi-supervised node classification, showing that simpler methods often perform comparably to complex ones.
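As a concrete illustration (an assumption for exposition, not the talk's method), a minimal sketch of one such simple baseline: smooth the node features over the graph, then fit a linear classifier on the labeled nodes.

import numpy as np

def propagate(A, X, k):
    """Apply k rounds of neighbor averaging (with self-loops) to features X."""
    A_hat = A + np.eye(A.shape[0])                 # add self-loops
    P = A_hat / A_hat.sum(axis=1, keepdims=True)   # row-normalize: averaging operator
    H = X
    for _ in range(k):
        H = P @ H                                  # one round of feature smoothing
    return H

# Usage sketch (labels and index sets are placeholders):
# from sklearn.linear_model import LogisticRegression
# H = propagate(A, X, k=2)
# clf = LogisticRegression(max_iter=1000).fit(H[train_idx], y[train_idx])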
We then analyze Poly-GNNs within a contextual stochastic block model, addressing a key question: Does increasing GNN depth improve class separation in node representations? Our results show that, for large graphs, the rate of class separation remains constant regardless of network depth. We demonstrate how “graph noise” can overpower the other signals in deeper networks, negating the benefits of additional feature aggregation.
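For concreteness, here is a hedged sketch of the setup (the notation is illustrative, not necessarily the talk's exact definitions). In a contextual stochastic block model the features decompose into community signal plus noise, and a depth-$k$ Poly-GNN aggregates features through powers of the adjacency matrix:

\[
X = Z\mu^{\top} + E,
\qquad
H_k = \Big(\sum_{j=0}^{k} \alpha_j A^{j}\Big) X,
\]

where $Z$ encodes community memberships, $\mu$ the class-mean features, $E$ the feature noise, and $A$ the random adjacency matrix. Class separation compares the community-wise means of $H_k$ against its fluctuations, the “graph noise.”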
Our analysis employs techniques from random matrix theory, leading to the combinatorics of walks and walk sequences on graphs. To obtain sharp bounds on the rate of separation, we characterize the leading term of the graph noise; these insights pave the way for deriving more precise limit theorems for Poly-GNNs.
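The link to walks is the classical one (a standard fact, not specific to the talk): entries of powers of the adjacency matrix count walks,

\[
(A^{k})_{uv} = \#\{\text{walks of length } k \text{ from } u \text{ to } v\},
\]

so moments of the noise part of the aggregated representations expand into sums over walks and sequences of walks, which is where the combinatorics enters.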
Bio:
Arash Amini is an Associate Professor of Statistics at UCLA. His research interests include high-dimensional statistics, machine learning, and optimization, work for which he received an NSF CAREER award in 2020.