Presenter Information

Location

https://www.kennesaw.edu/ccse/events/computing-showcase/sp26-cday-program.php

Document Type

Event

Start Date

22-4-2026 4:00 PM

Description

Modern neural networks are typically considered black-box systems: while they achieve state-of-the-art performance in many domains, it is difficult to elicit the reasons behind their decisions. To fill this gap, a sub-field of artificial intelligence called eXplainable Artificial Intelligence (XAI) has emerged. One approach to XAI symbolically compiles a neural network's behavior into a logical formula; however, such approaches are limited in scalability due to the fundamental difficulty of the problem. This research instead proposes an incremental, anytime approach to explaining the behavior of a neural network for image recognition. Our approach builds on a recently published result by a KSU undergraduate, which proposed an incremental, anytime approach to explaining the behavior of an individual (threshold) neuron. To visualize the behavior of an individual neuron, we propose enumerating prototypical examples of images that activate it. We then propose visualizing the behavior of the full network by appropriately aggregating the prototypical examples of its neurons. Preliminary results suggest that such aggregate visualizations exhibit interpretable patterns that can reveal the reasoning behind a neural network's decisions. Further, our approach can provide such insights in a more efficient, incremental fashion than prior compilation-based methods, which are by nature exhaustive.
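The core idea can be sketched with a toy example. The sketch below is ours, not the authors' implementation: it enumerates all binary inputs that activate a single threshold neuron (exhaustively, for illustration — the cited work does this incrementally and in an anytime fashion) and aggregates the resulting prototypes into a per-position activation-frequency "heatmap"; the neuron's weights and bias are made up.

```python
from itertools import product

def activates(weights, bias, x):
    """A threshold neuron fires when its weighted input sum meets the bias."""
    return sum(w * xi for w, xi in zip(weights, x)) >= bias

def prototypes(weights, bias, n):
    """Enumerate all binary inputs of length n that activate the neuron.
    Exhaustive here for illustration only; an incremental, anytime method
    would stream these out and could be stopped early."""
    return [x for x in product([0, 1], repeat=n)
            if activates(weights, bias, x)]

def aggregate(protos, n):
    """Per-position activation frequency across the prototypes: a crude
    visualization of which input positions the neuron is sensitive to."""
    if not protos:
        return [0.0] * n
    return [sum(p[i] for p in protos) / len(protos) for i in range(n)]

# Hypothetical 4-input threshold neuron.
w, b = [2, 1, -1, 0], 2
ps = prototypes(w, b, 4)
heat = aggregate(ps, 4)
```

Here `heat[0] == 1.0` (every prototype sets the strongly weighted first input), while the negatively weighted third input appears in only a third of prototypes — the kind of interpretable pattern the abstract describes, aggregated over one neuron rather than a whole network.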

UR-084-219 Towards Bounding the Behavior of Neural Networks
