Semester of Graduation

Summer 2025

Degree Type

Dissertation

Degree Name

Doctor of Philosophy in Data Science and Analytics

Department

Data Science and Analytics

Committee Chair/First Advisor

Sherry Ni

Second Advisor

Linh Le

Third Advisor

Jonathan Boardman

Fourth Advisor

Xinyan Zhang

Abstract

As deep neural networks grow increasingly powerful, concerns about their opacity and lack of interpretability escalate, undermining their trustworthiness in high-stakes scenarios. eXplainable AI (XAI) methods have emerged to enhance transparency and accountability in neural networks. These methods pursue interpretability through global feature significance, which quantifies feature importance at the dataset level (i.e., across the range of possible model inputs), and local feature significance, which quantifies feature importance at the datapoint level (i.e., for an individual prediction). A critical yet overlooked aspect of local feature significance research, and of local explainability methods more broadly, is the selection of appropriate baselines for attribution methods. This dissertation addresses these gaps by proposing rigorous methodologies: (1) a permutation-based testing framework for global feature significance that uniquely permutes the target variable, robustly handling nonlinear relationships and multicollinearity without restrictive assumptions; (2) statistical significance tests and confidence intervals for local feature attribution methods, including Integrated Gradients, DeepLIFT, SHAP, and LIME, providing robust validation of individual feature contributions; and (3) a generative contrastive baseline approach that enables more precise and actionable explanations. Together, these methodologies significantly advance XAI, integrating statistical rigor with practical applicability to promote transparency, accountability, and the responsible use of AI models.
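To make contribution (1) concrete, the following is a minimal sketch of a target-permutation significance test, assuming a scikit-learn-style workflow. The function names, the random-forest impurity importance used as a stand-in test statistic, and the synthetic data are illustrative assumptions, not the dissertation's actual implementation.

import numpy as np
from sklearn.ensemble import RandomForestRegressor

def fit_importance(X, y, seed=0):
    """Fit a model and return one importance score per feature."""
    model = RandomForestRegressor(n_estimators=100, random_state=seed)
    model.fit(X, y)
    return model.feature_importances_

def target_permutation_test(X, y, n_perm=200, seed=0):
    """Permute the target y to build a null distribution of importances."""
    rng = np.random.default_rng(seed)
    observed = fit_importance(X, y)
    null = np.empty((n_perm, X.shape[1]))
    for b in range(n_perm):
        # Permuting the target severs every feature-target relationship
        # at once while leaving the correlation structure among the
        # features intact, which is why multicollinearity is handled.
        null[b] = fit_importance(X, rng.permutation(y))
    # One-sided empirical p-value with the standard +1 correction.
    p_values = (1 + (null >= observed).sum(axis=0)) / (n_perm + 1)
    return observed, p_values

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(300, 5))
    # Only the first two features matter; one effect is nonlinear.
    y = 2 * X[:, 0] + np.sin(X[:, 1]) + rng.normal(scale=0.1, size=300)
    obs, p = target_permutation_test(X, y, n_perm=100)
    print("importance:", np.round(obs, 3))
    print("p-values:  ", np.round(p, 3))

Under these assumptions, informative features receive small p-values because their observed importance rarely appears in the permutation null, while uninformative features do not.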

Available for download on Tuesday, July 27, 2027
