Location
https://www.kennesaw.edu/ccse/events/computing-showcase/fa25-cday-program.php
Document Type
Event
Start Date
24-11-2025 4:00 PM
Description
Remote sensing is the science of acquiring information about the Earth's surface using satellite-mounted imaging sensors. In the past, this data had to be interpreted manually, which was slow, tedious, and error-prone. With the rise of deep learning, image classification models have greatly accelerated and improved remote sensing tasks such as land-use analysis and environmental monitoring. However, despite their strong performance, these models remain vulnerable to adversarial patch attacks: physically realizable patterns that, when placed on an object, can force the model to make incorrect predictions. This creates serious risks for practical geospatial applications. Traditional defenses like Projected Gradient Descent Adversarial Training (PGD-AT) can improve robustness but require long training times, heavy computation, and powerful hardware. This study presents a more efficient defense framework that offers stronger protection against adversarial patches while using only a fraction of the resources. Our method achieves higher robustness than PGD-AT and reduces training time by nearly an order of magnitude, making it highly suitable for real-world, resource-constrained remote sensing systems.
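For readers unfamiliar with the PGD-AT baseline mentioned above, the following is a minimal, illustrative NumPy sketch of the inner PGD step (the attack whose repeated computation makes adversarial training expensive), shown on a toy logistic-regression model. This is not the authors' implementation; the function name `pgd_attack` and the parameters `eps`, `alpha`, and `steps` are illustrative choices.

```python
import numpy as np

def pgd_attack(x, y, w, b, eps=0.1, alpha=0.02, steps=10):
    """Projected Gradient Descent attack on a toy logistic-regression model.

    Iteratively perturbs the input x to increase the loss for the true
    label y, projecting back into the L-infinity ball of radius eps
    around the original input after every step.
    """
    x_adv = x.copy()
    for _ in range(steps):
        # Forward pass: sigmoid probability of the positive class.
        p = 1.0 / (1.0 + np.exp(-(x_adv @ w + b)))
        # Gradient of the binary cross-entropy loss w.r.t. the input.
        grad = (p - y) * w
        # Loss-ascent step along the gradient sign (L-infinity PGD).
        x_adv = x_adv + alpha * np.sign(grad)
        # Project back into the eps-ball around the clean input.
        x_adv = np.clip(x_adv, x - eps, x + eps)
    return x_adv
```

In PGD-AT, every training batch is first pushed through a loop like this before the model is updated on the perturbed inputs, which is why the defense multiplies training cost; the efficiency gains described in the abstract come from avoiding that per-batch inner loop via transfer learning.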
Included in
GRM-0237 Efficient Defense Against Adversarial Patch Attacks in Remote Sensing Using Transfer Learning