Date of Award

Summer 7-23-2024

Degree Type

Dissertation/Thesis

Degree Name

Master of Science in Computer Science

Department

College of Computing and Software Engineering

Committee Chair/First Advisor

Dr. Kazi Aminul Islam

Second Advisor

Dr. Md Abdullah Al Hafiz Khan

Third Advisor

Dr. Rifatul Islam

Abstract

Remote sensing-based sensors are important in diagnosing remote objects for security and sensitive government installations, including the Department of Defense (DOD), the Department of Homeland Security (DHS), and the Environmental Protection Agency (EPA). In the past, a human operator was required to apply their judgment to map and monitor objects via remote sensing sensors, including multi-spectral images, which required careful and tedious prior selection. With deep learning, however, these features in multi-spectral, multi-temporal, and multi-modality remote sensing data sets are learned automatically. Despite their benefits, these deep learning models are often termed black-box models due to their opacity in decision-making. This lack of transparency can hinder user trust and compromise the security of these models. Choosing a better explainable AI (XAI) approach is crucial, as it provides insights into model decisions, thereby improving the trust and security of deep learning models in critical applications. A well-chosen XAI method enhances the understanding of how models make decisions, identifying potential biases and vulnerabilities that could be exploited in adversarial attacks. Explainability helps in diagnosing errors, ensuring compliance with regulatory standards, and facilitating human oversight. Furthermore, interpretable models can enhance collaboration between AI systems and human experts, leading to better decision-making processes. In high-stakes environments, understanding how and why a model makes certain decisions can prevent critical errors and improve response strategies.

Our research aims to improve the interpretability and reliability of the algorithms in remote sensing applications. We conducted a comparative analysis of two explainable AI models, Grad-CAM and Score-CAM, to interpret decisions in multi-spectral datasets such as UC-Merced and EuroSAT, and established an automated approach to determine which explainable AI method performs better. This automated approach evaluates the explainable AI methods using specific metrics: ROAD, SIC, and infidelity. Our empirical analysis demonstrated that, on average, Score-CAM outperformed Grad-CAM, as evidenced by these metrics. By systematically applying these evaluation metrics, we developed a method for reliably identifying the superior explainable AI approach, giving researchers a clear choice for enhancing model interpretability and trustworthiness.
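For concreteness, the sketch below, written from standard descriptions of Grad-CAM and perturbation-based evaluation rather than from the thesis code, shows how a Grad-CAM map can be computed for a classifier and scored by how much the class confidence drops when the most relevant pixels are removed. The model, target layer, and zero-masking used here are illustrative assumptions; the ROAD metric itself uses noisy linear imputation rather than simple zeroing.

```python
# Minimal Grad-CAM sketch with a deletion-style faithfulness score.
# Assumptions (not from the thesis): ResNet-18 backbone, last conv block as
# the CAM target, inputs in [0, 1], and zero-masking instead of ROAD's imputation.
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(weights=None).eval()   # stand-in; a trained classifier would be used
target_layer = model.layer4[-1]                # assumed CAM target layer

activations, gradients = {}, {}
target_layer.register_forward_hook(lambda m, i, o: activations.update(a=o))
target_layer.register_full_backward_hook(lambda m, gi, go: gradients.update(g=go[0]))

def grad_cam(x, class_idx=None):
    """Return a (H, W) relevance map in [0, 1] for the target (or predicted) class."""
    logits = model(x)
    if class_idx is None:
        class_idx = int(logits.argmax(dim=1))
    model.zero_grad()
    logits[0, class_idx].backward()
    weights = gradients["g"].mean(dim=(2, 3), keepdim=True)   # average gradients per channel
    cam = F.relu((weights * activations["a"]).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=x.shape[-2:], mode="bilinear", align_corners=False)
    cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
    return cam[0, 0].detach(), class_idx

def deletion_score(x, cam, class_idx, fraction=0.2):
    """Confidence drop after removing the top `fraction` most relevant pixels."""
    thresh = torch.quantile(cam.flatten(), 1 - fraction)
    mask = (cam < thresh).float()                              # keep only low-relevance pixels
    with torch.no_grad():
        p_full = F.softmax(model(x), dim=1)[0, class_idx]
        p_masked = F.softmax(model(x * mask), dim=1)[0, class_idx]
    return (p_full - p_masked).item()          # larger drop => more faithful explanation

x = torch.randn(1, 3, 224, 224)                # placeholder for a real image patch
cam, cls = grad_cam(x)
print("deletion-style faithfulness score:", deletion_score(x, cam, cls))
```

The same scoring loop can be run for several explanation methods on the same images, so that the method with the consistently larger confidence drop is selected automatically.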

In addition, we addressed one of the most important threats in AI: adversarial examples, which are designed to alter benign data inputs in a way that deceives AI algorithms and compromises the security of remote sensing systems. To address these threats, we devised an adversarial robustness approach that allows the AI model to make correct predictions even when adversarial perturbations have been added. We incorporated explainable-AI-guided features and data augmentation methods to build a robust AI model against adversarial attacks. Our results show that the proposed approach was more resistant to adversarial attacks, especially Projected Gradient Descent (PGD), on the EuroSAT and AID datasets. This integrated effort advances not only the interpretability but also the robustness of AI models in the context of remote sensing, opening the field up to improved safety and efficiency.
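As a rough illustration, the sketch below implements a standard L-infinity PGD attack and a single adversarial-training step, the common baseline for the robustness setting described above. The epsilon, step size, and number of steps are illustrative assumptions, and the explainable-AI-guided feature and augmentation components of the proposed approach are not reproduced here.

```python
# Minimal PGD attack and adversarial-training step (baseline sketch, not the
# thesis pipeline). Assumes inputs are scaled to [0, 1].
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """L-infinity PGD: repeat a signed gradient step, projecting back into the eps-ball."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()       # ascend the loss
        x_adv = x + (x_adv - x).clamp(-eps, eps)           # project back around the clean input
        x_adv = x_adv.clamp(0, 1).detach()
    return x_adv

def adversarial_training_step(model, optimizer, x, y):
    """One minibatch update on PGD examples (standard adversarial training)."""
    model.eval()                        # craft the attack against the current weights
    x_adv = pgd_attack(model, x, y)
    model.train()
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```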

Available for download on Friday, July 23, 2027
