Abstract:
|
Artificial neural networks (ANNs) are powerful predictive models, but they provide little insight into the nature of the relationships between predictors and outcomes. Many methods have been proposed to quantify and characterize the relative contributions of input features in ANNs, generally through feature importance rankings, sensitivity analyses, and input perturbation. Despite these contributions, statistical inference and hypothesis testing for feature associations remain largely unexplored. We propose a permutation-based approach to testing built on partial derivatives of the network output with respect to specific inputs. We develop two tests: one to assess the significance of input features in a feed-forward ANN, and one to test whether significant features are linearly associated with the network output. These tests enhance the explanatory power of ANNs and, combined with their strong predictive capability, extend the applicability of these models.
|