The Impact of Adversarial Attacks on Remote Sensing Applications

Abstract

Remote sensing systems are increasingly used in automated surveillance and detection applications. However, the robustness of these systems is challenged by adversarial attacks: deliberate manipulations of input data that cause machine learning models to misclassify or fail to detect critical objects. This research investigates the vulnerability of remote sensing systems to adversarial perturbations, focusing on datasets designed for object classification in remote sensing contexts. We will implement adversarial techniques, such as the Fast Gradient Sign Method (FGSM), to systematically degrade detection accuracy. In parallel, we will develop and evaluate defense mechanisms, including adversarial training, to enhance system resilience. The findings will quantify the impact of adversarial attacks on model performance and provide recommendations for strengthening the security and reliability of remote sensing technologies. This work aims to contribute to the development of more secure machine learning systems, with implications for both public and private safety infrastructures.
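
For context on the two techniques named above, the sketch below illustrates FGSM and a single adversarial training step in Python. This is a minimal illustration, not the project's implementation: it assumes a PyTorch image classifier with pixel values normalized to [0, 1], and the function names (fgsm_attack, adversarial_training_step) and the epsilon parameter are hypothetical choices for this example.

    import torch
    import torch.nn.functional as F

    def fgsm_attack(model, images, labels, epsilon):
        """Fast Gradient Sign Method: perturb each input in the direction
        of the sign of the loss gradient, scaled by epsilon."""
        images = images.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(images), labels)
        loss.backward()
        # Single-step perturbation; clamp to keep pixels in [0, 1].
        adv_images = images + epsilon * images.grad.sign()
        return adv_images.clamp(0.0, 1.0).detach()

    def adversarial_training_step(model, optimizer, images, labels, epsilon):
        """One adversarial training step: train on FGSM-perturbed inputs."""
        adv_images = fgsm_attack(model, images, labels, epsilon)
        # zero_grad clears parameter gradients left over from the attack's
        # backward pass before the actual training update.
        optimizer.zero_grad()
        loss = F.cross_entropy(model(adv_images), labels)
        loss.backward()
        optimizer.step()
        return loss.item()

Here epsilon is the perturbation budget: FGSM takes a single gradient-sign step of that size, and adversarial training simply optimizes the model on the perturbed examples the attack produces.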

Academic Department

CCSE - Computer Science

Principal Investigator (PI)

Kazi Aminul Islam
