Security in AI-Based US Healthcare Systems

Disciplines

Artificial Intelligence and Robotics | Health Information Technology

Abstract (300 words maximum)

The growing use of AI in healthcare enables doctors to swiftly uncover illness patterns that they might otherwise have overlooked, especially in medical images such as X-rays. Health records, however, are meant only for patients and their doctors, and as the technology advances, attackers can manipulate this sensitive data so that adversarial attacks on AI models leave life-threatening diseases undetected. An attacker can manipulate state-of-the-art medical image classification models using the Fast Gradient Sign Method (FGSM) attack [2]. For instance, attackers might introduce subtle changes to medical images, e.g., skin cancer moles [1], to mislead a deep convolutional neural network (CNN) into producing incorrect decisions. Although the perturbation is imperceptible to the human eye, it is strategically designed to cause the CNN model to misclassify the images. As a result, the model misdiagnoses the underlying disease, leading to erroneous treatment recommendations or decisions. The purpose of this project is to identify the impact of adversarial attacks on healthcare systems and then to identify prevention techniques against them, particularly because defenses in healthcare are scarce while attacks are constantly increasing. We will build a framework for diagnosing adversarial attack threat vectors (data source, machine learning model, and machine learning pipeline) in healthcare systems and propose mitigation techniques against these adversarial attacks. This study will guide us in ensuring safe and secure healthcare systems.
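
For illustration only, the minimal sketch below shows the core FGSM step described in [2], written in PyTorch. The classifier `model`, input tensor `image`, true `label`, and perturbation budget `epsilon` are hypothetical placeholders for this example, not components of our proposed framework.

```python
# Minimal FGSM sketch following Goodfellow et al. [2] (illustrative, not the project's implementation).
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, epsilon=0.03):
    """Return x_adv = x + epsilon * sign(grad_x L(model(x), y))."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Perturb each pixel by epsilon in the direction that increases the loss.
    adv_image = image + epsilon * image.grad.sign()
    # Keep pixel values in the valid [0, 1] range.
    return adv_image.clamp(0.0, 1.0).detach()
```

Even with a small epsilon, such a perturbation can flip a CNN's prediction while remaining visually indistinguishable from the original image, which is the failure mode this project aims to diagnose and mitigate.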

References:

[1] Selvakkumar, Arawinkumaar, Shantanu Pal, and Zahra Jadidi. "Addressing adversarial machine learning attacks in smart healthcare perspectives." In Sensing Technology: Proceedings of ICST 2022, pp. 269-282. Cham: Springer International Publishing, 2022.

[2] Goodfellow, Ian J., Jonathon Shlens, and Christian Szegedy. "Explaining and harnessing adversarial examples." arXiv preprint arXiv:1412.6572 (2014).

Academic department under which the project should be listed

CCSE - Computer Science

Primary Investigator (PI) Name

Kazi Aminul Islam
