The Computational Complexity of Understanding Binary Classifier Decisions

Abstract

For a $d$-ary Boolean function $\Phi\colon\{0,1\}^d\to\{0,1\}$ and an assignment to its variables $\mathbf{x}=(x_1, x_2, \dots, x_d)$ we consider the problem of finding those subsets of the variables that are sufficient to determine the function value with a given probability $\delta$. This is motivated by the task of interpreting predictions of binary classifiers described as Boolean circuits, which can be seen as special cases of neural networks. We show that the problem of deciding whether such subsets of relevant variables of limited size $k \leq d$ exist is complete for the complexity class $\mathsf{NP}^\mathsf{PP}$ and thus, generally, infeasible to solve. We then introduce a variant in which it suffices to check whether a subset determines the function value with probability at least $\delta$ or at most $\delta-\gamma$ for $0<\gamma<\delta$. This promise of a probability gap reduces the complexity to the class $\mathsf{NP}^\mathsf{BPP}$. Finally, we show that finding the minimal set of relevant variables cannot be reasonably approximated, i.e., with an approximation factor $d^{1-\alpha}$ for $\alpha > 0$, by a polynomial-time algorithm unless $\mathsf{P}=\mathsf{NP}$. This holds even with the promise of a probability gap.
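The quantity at the heart of the abstract can be illustrated concretely: for a fixed assignment $\mathbf{x}$ and a subset $S$ of variables, one asks how likely $\Phi$ keeps the value $\Phi(\mathbf{x})$ when the variables outside $S$ are resampled uniformly. The following is a minimal brute-force sketch of that computation (the function name `determination_prob` and the majority-function example are illustrative choices, not from the paper); it runs in time exponential in $d - |S|$, consistent with the hardness results stated above.

```python
from itertools import product

def determination_prob(phi, x, S):
    """Probability, over uniform random completions of the variables
    outside S, that phi still takes the value phi(x) when the
    variables in S are fixed to their values in x."""
    d = len(x)
    free = [i for i in range(d) if i not in S]
    target = phi(x)
    agree = 0
    for bits in product((0, 1), repeat=len(free)):
        y = list(x)
        for i, b in zip(free, bits):
            y[i] = b
        agree += (phi(tuple(y)) == target)
    return agree / 2 ** len(free)

# Example: 3-ary majority function with x = (1, 1, 0).
maj = lambda v: int(sum(v) >= 2)
print(determination_prob(maj, (1, 1, 0), {0, 1}))  # 1.0: fixing x1 = x2 = 1 forces majority
print(determination_prob(maj, (1, 1, 0), {0}))     # 0.75: x1 = 1 alone leaves uncertainty
```

A set $S$ is then "$\delta$-relevant" when this probability is at least $\delta$; the paper's decision problem asks whether such a set of size at most $k$ exists.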

Publication
Journal of Artificial Intelligence Research
Jan Macdonald

My research is at the interface of applied and computational mathematics and scientific machine learning. I am interested in inverse problems, signal and image recovery, and robust and interpretable deep learning.