Brain Networks Laboratory (Choe Lab)

Adversarial Network Compression

Apr 27, 2018

Abstract: Neural network compression has recently received much attention due to the computational requirements of modern deep models. In this work, our objective is to transfer knowledge from a deep and accurate model to a smaller one. Our contributions are threefold: (i) we propose an adversarial network compression approach that trains the small student network to mimic the large teacher, without the need for labels during training; (ii) we introduce a regularization scheme to prevent a trivially strong discriminator without reducing the network capacity; and (iii) our approach generalizes across different teacher-student models. In an extensive evaluation on five standard datasets, we show that our student suffers only a small accuracy drop, achieves better performance than other knowledge transfer approaches, and surpasses the performance of the same network trained with labels. In addition, we demonstrate state-of-the-art results compared to other compression strategies.

https://arxiv.org/abs/1803.10750v1
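The setup described in the abstract pairs a frozen teacher, a small student, and a discriminator that tries to tell teacher features from student features while the student learns to fool it, so no ground-truth labels are needed. Below is a minimal PyTorch sketch of that adversarial knowledge-transfer idea, not the authors' exact training code: the MLP stand-in networks, the feature dimension feat_dim, the added L2 mimic term, and the equal loss weighting are all illustrative assumptions.

# Minimal sketch of adversarial knowledge transfer, not the paper's exact
# training code. Architectures, feature size, and loss weights are assumed.
import torch
import torch.nn as nn

feat_dim = 128  # assumed size of the shared teacher/student feature space

# Stand-in networks; the paper compresses large CNNs into smaller ones.
teacher = nn.Sequential(nn.Linear(784, 512), nn.ReLU(), nn.Linear(512, feat_dim))
student = nn.Sequential(nn.Linear(784, 64), nn.ReLU(), nn.Linear(64, feat_dim))
discriminator = nn.Sequential(nn.Linear(feat_dim, 64), nn.ReLU(), nn.Linear(64, 1))

for p in teacher.parameters():      # the teacher stays frozen
    p.requires_grad_(False)

bce = nn.BCEWithLogitsLoss()
opt_s = torch.optim.Adam(student.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-4)

def train_step(x):
    """One unlabeled batch: no ground-truth labels are used anywhere."""
    with torch.no_grad():
        f_t = teacher(x)            # "real" features from the teacher
    f_s = student(x)                # "fake" features from the student

    # Discriminator update: separate teacher features from student features.
    d_loss = bce(discriminator(f_t), torch.ones(x.size(0), 1)) \
           + bce(discriminator(f_s.detach()), torch.zeros(x.size(0), 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Student update: fool the discriminator and mimic the teacher features
    # (the L2 mimic term is an assumed companion loss for this sketch).
    adv = bce(discriminator(f_s), torch.ones(x.size(0), 1))
    mimic = nn.functional.mse_loss(f_s, f_t)
    s_loss = adv + mimic
    opt_s.zero_grad(); s_loss.backward(); opt_s.step()
    return d_loss.item(), s_loss.item()

# Usage: one step on a random unlabeled batch of flattened 28x28 inputs.
x = torch.randn(32, 784)
print(train_step(x))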

