Semester of Graduation
Fall 2025
Degree Type
Dissertation/Thesis
Degree Name
Master of Science in Computer Science
Department
College of Computing and Software Engineering
Committee Chair/First Advisor
Dr. Kazi Aminul Islam
Second Advisor
Dr. Md Abdullah Al Hafiz Khan
Third Advisor
Dr. Sahidul Islam
Abstract
Remote sensing is the science of extracting meaningful information from satellite imagery and plays a crucial role in environmental monitoring, land-use analysis, and disaster response. Traditional manual interpretation is slow, labor-intensive, and error-prone, whereas deep learning classifiers now achieve state-of-the-art accuracy and have greatly improved the scalability and reliability of remote-sensing analysis. Despite these advances, such models remain vulnerable to adversarial attacks: deliberately crafted perturbations designed to mislead predictions. Adversarial patch attacks pose an even greater threat because they are physically realizable; a printed patch can be placed directly on objects in the scene.
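For illustration, a minimal sketch of an untargeted adversarial patch attack in PyTorch: a square patch is pasted at a fixed location and optimized to maximize the victim model's loss. The names `model` and `loader` and all hyperparameters are assumptions rather than the thesis's actual attack configuration, and realistic attacks additionally randomize patch placement and appearance.

```python
import torch
import torch.nn.functional as F

def apply_patch(images, patch, loc=(0, 0)):
    """Paste a square patch onto a batch of images at top-left corner `loc`."""
    r, c = loc
    s = patch.shape[-1]
    patched = images.clone()
    patched[:, :, r:r + s, c:c + s] = patch  # gradient flows into `patch`
    return patched

def optimize_patch(model, loader, patch_size=32, steps=100, lr=0.05, device="cpu"):
    """Untargeted attack: optimize the patch to maximize classification loss."""
    patch = torch.rand(3, patch_size, patch_size, device=device, requires_grad=True)
    opt = torch.optim.Adam([patch], lr=lr)
    for _ in range(steps):
        for x, y in loader:
            x, y = x.to(device), y.to(device)
            loss = -F.cross_entropy(model(apply_patch(x, patch)), y)
            opt.zero_grad()
            loss.backward()
            opt.step()
            patch.data.clamp_(0, 1)  # keep the patch a valid image
    return patch.detach()
```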
Adversarial training is a common defense in which models are trained on adversarial examples to improve robustness. The process is computationally expensive, however, because adversarial examples must be generated throughout training. These challenges motivate robustness-transfer strategies that reduce training cost while maintaining or improving resistance to adversarial patch attacks.
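To make the cost concrete, here is a minimal sketch of one adversarial-training epoch in PyTorch, using PGD as a representative attack (a patch optimizer like the one above could be substituted). The names and budgets are illustrative assumptions, not the thesis's configuration; the point is that every batch pays for extra forward/backward passes just to craft its training inputs.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Craft PGD adversarial examples for one batch."""
    x_adv = (x.clone().detach() + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1)
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = (x.detach() + (x_adv - x).clamp(-eps, eps)).clamp(0, 1)
    return x_adv.detach()

def adversarial_training_epoch(model, loader, optimizer, device):
    model.train()
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        # The expensive part: `steps` extra forward/backward passes
        # per batch before the actual parameter update.
        x_adv = pgd_attack(model, x, y)
        optimizer.zero_grad()
        F.cross_entropy(model(x_adv), y).backward()
        optimizer.step()
```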
This study investigates robustness transfer as a defense strategy, aiming to strengthen the resilience of remote-sensing classifiers against adversarial patch attacks. To address this challenge, we propose two complementary approaches. The first is a transfer learning–based method that leverages adversarially robust pretrained models, reusing their robustness without the heavy computational cost of adversarial training. This approach significantly reduces training time while still delivering meaningful robustness gains.
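A hedged sketch of this transfer-learning route, assuming a torchvision ResNet-50 and a hypothetical checkpoint file `robust_resnet50.pth` holding adversarially pretrained weights; the actual backbones, checkpoints, and class counts are specified in the thesis body.

```python
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 45  # dataset-specific, e.g., 45 for NWPU-RESISC45

# Load an adversarially pretrained backbone. The checkpoint name is a
# placeholder; publicly released robust ImageNet weights would serve here.
backbone = models.resnet50()
backbone.load_state_dict(torch.load("robust_resnet50.pth", map_location="cpu"))

# Swap the classification head for the remote-sensing label set.
backbone.fc = nn.Linear(backbone.fc.in_features, NUM_CLASSES)

# Freeze the robust feature extractor and train only the new head;
# alternatively, fine-tune everything at a small learning rate.
for p in backbone.parameters():
    p.requires_grad = False
for p in backbone.fc.parameters():
    p.requires_grad = True

optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)
```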
The second contribution is a novel Multi-Teacher Feature Matching (MTFM) framework, designed to align the feature representations of a student model with those of both clean and adversarially robust teacher models. By jointly distilling knowledge from multiple teachers, the MTFM framework encourages the student model to learn feature spaces that balance discriminative power and robustness. This results in an improved trade-off between clean accuracy and defense performance against adversarial patch attacks.
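Since the abstract only names the framework, the following is one plausible PyTorch reading of the MTFM objective rather than the thesis's exact loss: a cross-entropy term plus feature-matching penalties toward a clean teacher and a robust teacher, with assumed weights `lambda_clean` and `lambda_robust`.

```python
import torch
import torch.nn.functional as F

def mtfm_loss(student_feats, student_logits, labels,
              clean_feats, robust_feats,
              lambda_clean=1.0, lambda_robust=1.0):
    """Assumed MTFM objective: task loss plus L2 feature matching
    to a clean teacher and an adversarially robust teacher."""
    ce = F.cross_entropy(student_logits, labels)
    # Teachers are frozen; detach so no gradient reaches them.
    match_clean = F.mse_loss(student_feats, clean_feats.detach())
    match_robust = F.mse_loss(student_feats, robust_feats.detach())
    return ce + lambda_clean * match_clean + lambda_robust * match_robust
```

In a training step, the same batch would be passed through the student and both frozen teachers, with a projection layer inserted if the feature dimensions differ.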
Across diverse datasets and model architectures, the MTFM method consistently outperforms standard, non-robust baselines and closely matches, and in several cases surpasses, existing defense strategies. Notably, these gains are achieved with substantially lower training effort than conventional adversarial defenses. Overall, the findings underscore the promise of robustness-aware knowledge transfer as a scalable, efficient, and practical pathway toward building resilient geospatial AI systems.