L-Shapley and C-Shapley: Efficient Model Interpretation for Structured Data
Aug 16, 2018
- Jianbo Chen
- Le Song
- Martin J. Wainwright
- Michael I. Jordan
Abstract: We study instancewise feature importance scoring as a method for model interpretation. Any such method yields, for each predicted instance, a vector of importance scores associated with the feature vector. Methods based on the Shapley score have been proposed as a fair way of computing feature attributions of this kind, but they incur exponential complexity in the number of features. This combinatorial explosion arises from the definition of the Shapley value and prevents these methods from scaling to large data sets and complex models. We focus on settings in which the data have a graph structure and the contribution of features to the target variable is well approximated by a graph-structured factorization. In such settings, we develop two algorithms with linear complexity for instancewise feature importance scoring. We establish the relationship of our methods to the Shapley value and to a closely related concept known as the Myerson value from cooperative game theory. We demonstrate on both language and image data that our algorithms compare favorably with other methods for model interpretation.
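The core idea, restricting the Shapley coalitions for each feature to that feature's graph neighborhood, can be illustrated with a minimal sketch. The snippet below is not the authors' implementation; it assumes a simple chain graph over features and a caller-supplied value function `value(S)` that scores a subset of kept features, and it computes the Shapley value of each feature within its k-hop neighborhood, in the spirit of L-Shapley:

```python
from itertools import combinations
from math import comb

def l_shapley_chain(value, n, k):
    """Sketch of neighborhood-restricted Shapley scoring (L-Shapley style).

    Features 0..n-1 are assumed to lie on a chain graph 0-1-...-(n-1).
    For each feature i, the Shapley value is computed exactly, but only
    over coalitions drawn from i's k-hop neighborhood, so the cost is
    exponential in the (small) neighborhood size rather than in n.
    `value(S)` must accept a frozenset of feature indices.
    """
    scores = []
    for i in range(n):
        # k-hop neighborhood of i on the chain, including i itself
        nbhd = list(range(max(0, i - k), min(n, i + k + 1)))
        others = [j for j in nbhd if j != i]
        m = len(nbhd)
        phi = 0.0
        for size in range(len(others) + 1):
            for subset in combinations(others, size):
                S = frozenset(subset)
                # Shapley weight within the m-player neighborhood game
                weight = 1.0 / (m * comb(m - 1, size))
                phi += weight * (value(S | {i}) - value(S))
        scores.append(phi)
    return scores

# Toy additive value function (hypothetical, for illustration only):
# for additive models the marginal contribution of feature i is constant,
# so the neighborhood-restricted score recovers each feature's weight.
w = [1.0, 2.0, 3.0, 4.0]
value = lambda S: sum(w[j] for j in S)
print(l_shapley_chain(value, n=4, k=1))
```

For non-additive models the scores are an approximation whose quality depends on how well the model's behavior factorizes over the graph, which is exactly the structural assumption the paper formalizes.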
https://arxiv.org/abs/1808.02610v1