Shapley Explainability on the Data Manifold.

ICLR 2021

Harvard University

Abstract
Explainability in AI is crucial for model development, compliance with regulation, and providing operational nuance to predictions. The Shapley framework for explainability attributes a model's predictions to its input features in a mathematically principled and model-agnostic way. However, general implementations of Shapley explainability make an untenable assumption: that the model's features are uncorrelated. In this work, we demonstrate unambiguous drawbacks of this assumption and develop two solutions to Shapley explainability that respect the data manifold. One solution, based on generative modelling, provides flexible access to data imputations; the other directly learns the Shapley value function, providing performance and stability at the cost of flexibility. While "off-manifold" Shapley values can (i) give rise to incorrect explanations, (ii) hide implicit model dependence on sensitive attributes, and (iii) lead to unintelligible explanations in higher-dimensional data, on-manifold explainability overcomes these problems.
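The sketch below illustrates the distinction the abstract draws, not the paper's actual method: it estimates the Shapley value function v(S) for a toy model in two ways, once with "off-manifold" marginal imputation of the missing features (the independent-features assumption) and once with a crude "on-manifold" conditional imputation. The nearest-neighbour conditional imputer, the toy linear model, and all variable names are assumptions made for illustration; the paper itself uses a learned generative model or a directly learned value function.

```python
# A minimal sketch contrasting off-manifold (marginal) and on-manifold
# (conditional) Shapley value functions. Assumes numpy only; the
# nearest-neighbour imputer is a stand-in for a generative conditional model.
from itertools import combinations
from math import factorial

import numpy as np

rng = np.random.default_rng(0)

# Toy data with correlated features: x2 is nearly a copy of x1.
n, d = 2000, 3
x1 = rng.normal(size=n)
x2 = x1 + 0.1 * rng.normal(size=n)
x3 = rng.normal(size=n)
X = np.stack([x1, x2, x3], axis=1)

def model(X):
    # The model reads only feature 0, but feature 1 is highly correlated with it.
    return 2.0 * X[:, 0]

def value_off_manifold(x, S, X, model, m=500):
    # v(S): features outside S drawn from the *marginal* distribution.
    # Ignores correlations, so imputed points can lie off the data manifold.
    idx = rng.integers(0, len(X), size=m)
    X_imp = X[idx].copy()
    X_imp[:, S] = x[S]  # clamp the observed coalition S
    return model(X_imp).mean()

def value_on_manifold(x, S, X, model, m=50):
    # v(S): features outside S drawn (approximately) from the *conditional*
    # distribution given x_S, here via the m nearest neighbours in the
    # S-coordinates instead of a learned generative model.
    if len(S) == 0:
        return model(X).mean()
    dists = np.linalg.norm(X[:, S] - x[S], axis=1)
    nn = np.argsort(dists)[:m]
    X_imp = X[nn].copy()
    X_imp[:, S] = x[S]
    return model(X_imp).mean()

def shapley(x, X, model, value_fn):
    # Exact Shapley attributions by enumerating all coalitions (fine for d=3).
    phis = np.zeros(d)
    for i in range(d):
        others = [j for j in range(d) if j != i]
        for k in range(d):
            for S in combinations(others, k):
                S = list(S)
                w = factorial(len(S)) * factorial(d - len(S) - 1) / factorial(d)
                phis[i] += w * (value_fn(x, S + [i], X, model)
                                - value_fn(x, S, X, model))
    return phis

x = X[0]
print("off-manifold Shapley:", shapley(x, X, model, value_off_manifold))
print("on-manifold  Shapley:", shapley(x, X, model, value_on_manifold))
```

Under these toy assumptions, the off-manifold attributions assign all credit to feature 0, while the on-manifold attributions split credit between the correlated features 0 and 1, which is the kind of implicit dependence the abstract says marginal imputation can hide.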
Key words
Model Interpretability, Interpretable Models, Machine Learning Interpretability, Responsibility in AI, Visual Explanations