Brain Networks Laboratory (Choe Lab)

ScreenerNet: Learning Self-Paced Curriculum for Deep Neural Networks

Jan 5, 2018


Abstract: We propose to learn a curriculum, or syllabus, for supervised learning and deep reinforcement learning with deep neural networks using an attachable deep neural network called ScreenerNet. Specifically, we learn a weight for each sample by jointly training the ScreenerNet and the main network in an end-to-end, self-paced fashion. The ScreenerNet has no sampling bias and does not need to remember the past learning history. We show that networks augmented with ScreenerNet converge earlier and reach better accuracy than state-of-the-art curriculum learning methods in extensive experiments on three popular vision datasets (MNIST, CIFAR10, and Pascal VOC2012) and on a cart-pole task using deep Q-learning. Moreover, ScreenerNet can extend other curriculum learning methods such as Prioritized Experience Replay (PER) for further accuracy improvement.

https://arxiv.org/abs/1801.00904
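
The abstract describes the core mechanism only at a high level: an attachable network assigns each training sample a weight, and the two networks are trained jointly end to end. Below is a minimal PyTorch-style sketch of that idea, not the paper's exact formulation; the network shapes, the margin value `M`, and the specific screener objective are illustrative assumptions (see the arXiv paper above for the actual loss).

```python
# Hypothetical sketch of jointly training a main classifier and an attachable
# "screener" that predicts a per-sample weight in [0, 1]. The architectures,
# the margin M, and the screener objective below are assumptions for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F

main_net = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 128), nn.ReLU(), nn.Linear(128, 10))
screener = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 64), nn.ReLU(), nn.Linear(64, 1), nn.Sigmoid())

opt = torch.optim.Adam(list(main_net.parameters()) + list(screener.parameters()), lr=1e-3)
M = 1.0  # margin hyperparameter (assumed value)

def train_step(x, y):
    opt.zero_grad()
    per_sample_err = F.cross_entropy(main_net(x), y, reduction="none")  # e_i, one error per sample
    w = screener(x).squeeze(1)                                          # w_i in [0, 1], one weight per sample
    # Main network: weight each sample's loss by the screener's current importance estimate.
    main_loss = (w.detach() * per_sample_err).mean()
    # Screener: push w_i up when the sample's error is large and down when the error
    # falls below the margin (one way to realize self-paced weighting; the paper
    # defines its own objective).
    err = per_sample_err.detach()
    scr_loss = ((1 - w) ** 2 * err + w ** 2 * torch.clamp(M - err, min=0)).mean()
    (main_loss + scr_loss).backward()
    opt.step()

# Example call with dummy MNIST-shaped data:
# train_step(torch.randn(32, 1, 28, 28), torch.randint(0, 10, (32,)))
```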
