Statistics and Analytical Sciences

Document Type

Conference Proceeding

Submission Date



Abstract

In modern society, epistemic uncertainty limits trust in financial relationships, necessitating transparency and accountability mechanisms for both consumers and lenders. One upshot is that credit risk assessments must be explainable to the consumer. In the United States regulatory milieu, this entails both identifying the key factors in a decision and providing consistent actions that would improve standing. The traditionally accepted approach to explainable credit risk modeling involves generating scores with Generalized Linear Models (GLMs), usually logistic regression; calculating the contribution of each predictor to the total points lost from the theoretical maximum; and deriving reason codes from the four or five most impactful predictors. This industry-standard approach is not directly applicable to a more expressive and flexible class of nonlinear models: neural networks. This paper demonstrates that an eXplainable AI (XAI) variable attribution technique known as Integrated Gradients (IG) is a natural generalization of the industry standard to neural networks. We also discuss the unique semantics surrounding implementation details in this nonlinear context. While the primary purpose of this paper is to introduce IG to the credit industry and argue for its establishment as an industry standard, a secondary goal is to familiarize academia with the legislative constraints, including their historical and philosophical roots, and to sketch the standard approach in the credit industry, since there is a dearth of literature on the topic.
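The correspondence the abstract describes can be illustrated with a minimal numerical sketch. The logistic weights, baseline, and step count below are illustrative assumptions, not values from the paper. The sketch approximates IG with a midpoint Riemann sum along the straight-line path from a baseline input and checks the completeness property: the per-predictor attributions sum to the change in model output, which is what lets IG play the role that per-predictor points lost plays in a GLM scorecard.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical logistic model standing in for a scorecard; the weights
# and intercept are made up for illustration.
w = np.array([1.2, -0.8, 0.5])
b = -0.3

def model(x):
    return sigmoid(x @ w + b)

def grad(x):
    # Analytic gradient of sigmoid(w.x + b) with respect to x.
    p = model(x)
    return p * (1.0 - p) * w

def integrated_gradients(x, baseline, steps=200):
    # Midpoint Riemann-sum approximation of the IG path integral.
    alphas = (np.arange(steps) + 0.5) / steps
    total = np.zeros_like(x)
    for a in alphas:
        total += grad(baseline + a * (x - baseline))
    return (x - baseline) * total / steps

x = np.array([0.9, 1.5, -0.4])
baseline = np.zeros(3)
attrs = integrated_gradients(x, baseline)

# Completeness: attributions sum to f(x) - f(baseline), mirroring the
# scorecard decomposition of points lost from a reference score.
gap = model(x) - model(baseline)

# Reason codes: rank predictors by most adverse (most negative) attribution.
reason_order = np.argsort(attrs)
```

For a GLM the path integral can be evaluated in closed form, but the numerical approximation above is what carries over unchanged to a neural network, where `grad` would come from automatic differentiation.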

Included in

Data Science Commons