Systems | Information | Learning | Optimization
 

Backward Feature Correction: How can Deep Learning perform Deep Learning

How does a 110-layer ResNet learn a high-complexity classifier from relatively few training examples in a short training time? We present a theory toward explaining this learning process in terms of hierarchical learning. By hierarchical learning we mean that the learner represents a complicated target function by decomposing it into a sequence of simpler functions, reducing sample and time complexity. This work formally analyzes how multi-layer neural networks can perform such hierarchical learning efficiently and automatically, simply by applying stochastic gradient descent (SGD) to the training objective.
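To make the notion of a hierarchical target concrete, here is a minimal sketch (not taken from the paper; the quadratic building block, depth, and dimensions are illustrative assumptions): each level applies a simple degree-2 function to the previous level's output, so the composition quickly becomes a high-degree polynomial even though every individual piece is simple.

    import numpy as np

    # Hypothetical hierarchical target: each level is a simple quadratic function
    # of the previous level's output, so a depth-L composition has degree 2**L.
    def simple_block(h):
        return h * np.roll(h, 1)      # element-wise quadratic mixing of coordinates

    def hierarchical_target(x, depth=4):
        h = x
        for _ in range(depth):        # four quadratic levels -> a degree-16 polynomial
            h = simple_block(h)
        return h.sum()                # scalar label built from simple pieces

    x = np.random.default_rng(0).normal(size=8)
    y = hierarchical_target(x)        # high-complexity overall, simple level by level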
We present, to the best of our knowledge, the first theoretical result indicating how very deep neural networks can be sample- and time-efficient on certain hierarchical learning tasks, even when no known non-hierarchical algorithm (such as kernel methods, linear regression over feature mappings, tensor decomposition, sparse coding, or their simple combinations) is efficient. We establish a new principle, called "backward feature correction," showing how the features in the lower-level layers of the network can also be improved by training the higher-level layers, which we believe is key to understanding the deep learning process in multi-layer neural networks.
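The sketch below illustrates the mechanism in the simplest possible setting (a hypothetical two-layer ReLU network with squared loss, not the paper's construction): when the network is trained end-to-end with SGD, the loss gradient passes through the higher layer's weights, so each update also sends a correction signal backward that adjusts the lower-layer features.

    import numpy as np

    rng = np.random.default_rng(0)
    d, h = 8, 16                               # illustrative dimensions
    W1 = rng.normal(size=(h, d)) / np.sqrt(d)  # lower-layer weights (the "features")
    W2 = rng.normal(size=(1, h)) / np.sqrt(h)  # higher-layer weights

    def sgd_step(x, y, lr=1e-2):
        """One end-to-end SGD step on the squared loss 0.5 * (y_hat - y)**2."""
        global W1, W2
        z = W1 @ x                             # lower-layer pre-activations
        a = np.maximum(z, 0.0)                 # lower-layer features (ReLU)
        err = (W2 @ a).item() - y              # prediction residual

        grad_W2 = err * a[None, :]             # gradient for the higher layer
        grad_a = err * W2.ravel()              # signal flowing backward through W2
        grad_W1 = (grad_a * (z > 0))[:, None] @ x[None, :]  # corrects the lower layer

        W2 -= lr * grad_W2
        W1 -= lr * grad_W1                     # lower layer is updated, not frozen

In a greedy layer-wise scheme the W1 update would be dropped once the lower layer is pre-trained; the end-to-end update above is the kind of backward signal the abstract describes as correcting lower-level features.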

 

Paper: https://arxiv.org/abs/2001.04413

May 6 @ 12:30 pm (1h)

Yuanzhi Li