[DeepMind] Understanding deep learning through neuron deletion
Mar 23, 2018
DeepMind’s analysis of neural networks, which measures the performance impact of damaging a network by deleting individual neurons.
Quote from the blog post:
We measured the performance impact of damaging the network by deleting individual neurons as well as groups of neurons. Our experiments led to two surprising findings:
- Although many previous studies have focused on understanding easily interpretable individual neurons (e.g. “cat neurons”, or neurons in the hidden layers of deep networks which are only active in response to images of cats), we found that these interpretable neurons are no more important than confusing neurons with difficult-to-interpret activity.
- Networks which correctly classify unseen images are more resilient to neuron deletion than networks which can only classify images they have seen before. In other words, networks which generalise well are much less reliant on single directions than those which memorise.
https://deepmind.com/blog/article/understanding-deep-learning-through-neuron-deletion
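To make the technique concrete, here is a minimal sketch of single-neuron ablation on a toy one-hidden-layer network. It is not DeepMind's actual experimental setup (they ablate units in large image classifiers); the random weights and data below are placeholders standing in for a trained model and a held-out evaluation set.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setup: a "trained" one-hidden-layer network and an
# evaluation set. W1, b1, W2, b2, X, y are placeholders for a real model.
n_in, n_hidden, n_out, n_samples = 20, 64, 5, 1000
W1, b1 = rng.normal(size=(n_in, n_hidden)), np.zeros(n_hidden)
W2, b2 = rng.normal(size=(n_hidden, n_out)), np.zeros(n_out)
X = rng.normal(size=(n_samples, n_in))
y = rng.integers(0, n_out, size=n_samples)

def accuracy(mask):
    """Forward pass with the masked hidden units clamped to zero (ablated)."""
    h = np.maximum(X @ W1 + b1, 0) * mask   # ReLU, then delete neurons
    logits = h @ W2 + b2
    return np.mean(logits.argmax(axis=1) == y)

baseline = accuracy(np.ones(n_hidden))

# Delete each hidden neuron in turn and record the accuracy drop.
drops = []
for i in range(n_hidden):
    mask = np.ones(n_hidden)
    mask[i] = 0.0
    drops.append(baseline - accuracy(mask))

print(f"mean drop: {np.mean(drops):.4f}, max drop: {np.max(drops):.4f}")
```

The same masking trick extends to group deletion: zero out a random subset of units and track how accuracy degrades as the subset grows, which is how the resilience comparison between generalising and memorising networks is framed.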