Date of Award

Spring 5-5-2020

Degree Type

Dissertation

Degree Name

Doctor of Philosophy in Analytics and Data Science

Department

Statistics and Analytical Sciences

Committee Chair/First Advisor

Jing (Selena) He

Committee Member

Meng Han

Committee Member

Gita Taasoobshirazi

Committee Member

Mohammed Korayem

Abstract

Security has drawn increasing attention due to a variety of global threats. Security analytics is the process of using streaming data acquisition, collection, and artificial intelligence algorithms for security monitoring and threat disclosure. In this dissertation, we apply practical, data-driven security analytics to identify potential threats and to explore the robustness of machine learning models. We focus on two aspects: (1) Security Analytics: using machine learning and statistical tools to identify and resolve real-world threats, such as cybersecurity incidents and abnormal activities. (2) Analytic Security: exploring the security issues of machine learning models themselves, which improves the robustness of classifiers and guards against potential adversarial attacks.

In the first part, we present case studies in solving security problems in categorical classification and time-series anomaly detection. In the proposed framework, we handle incoming data using transfer learning to improve detection efficiency and accuracy. Our model allows small datasets to take advantage of large datasets through high-level representative features, which help traditional machine learning classifiers reduce prediction time and improve detection accuracy.
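The transfer-learning idea above can be sketched roughly as follows. This is a minimal illustration, not the dissertation's actual pipeline: a small network trained on a large synthetic source dataset plays the role of the feature extractor, and its frozen hidden layer supplies the "high-level representative features" for a lightweight classifier fit on a small target dataset. All dataset sizes and model choices here are assumptions for illustration.

```python
# Sketch: reuse a representation learned on a large dataset for a small one.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

# "Large" source dataset: plenty of labeled samples.
X_src, y_src = make_classification(n_samples=2000, n_features=20,
                                   n_informative=10, random_state=0)
# "Small" target dataset: too few samples to train a deep model from scratch.
X_tgt, y_tgt = make_classification(n_samples=60, n_features=20,
                                   n_informative=10, random_state=1)

# Train a network on the source data; its hidden layer learns
# high-level representative features.
net = MLPClassifier(hidden_layer_sizes=(32,), activation="relu",
                    max_iter=500, random_state=0).fit(X_src, y_src)

def hidden_features(X):
    """Forward pass through the frozen hidden layer (ReLU)."""
    return np.maximum(0.0, X @ net.coefs_[0] + net.intercepts_[0])

# Transfer: fit a fast traditional classifier on the extracted features.
clf = LogisticRegression(max_iter=1000).fit(hidden_features(X_tgt), y_tgt)
print(clf.score(hidden_features(X_tgt), y_tgt))
```

Because the traditional classifier operates on a compact feature vector rather than the raw input, both training and prediction stay cheap, which is the efficiency benefit the framework targets.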

In the second part, we focus on a security problem of the machine learning model itself: the threat of adversarial examples. Adversarial examples are generated through the trained classifier, using the classifier's gradient information and prediction results. Human perception can hardly distinguish adversarial examples from the original data; however, adversarial examples can quickly fool a well-trained classifier into making wrong predictions. On the attack side, we propose two algorithms, a superpixel attack and a border attack, both of which trick classifiers with high confidence. These attacks reveal that machine learning classifiers and deep neural networks (DNNs) need defensive procedures to improve their robustness, since state-of-the-art models are widely applied in daily activities. These extensive deep learning applications face crucial security problems, especially in computer vision, voice translation, and text mining. To further improve the robustness of DNNs, we develop a novel adversarial defense framework that uses statistical signatures of hidden-layer representations to detect adversarial examples. Our method identifies potentially adversarial data at light computational cost and without knowledge of the corresponding attack algorithm. This work builds a natural and efficient barrier for DNNs, one that identifies various kinds of adversarial examples without data feature transformation or training an extra detection model.
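To make the gradient-based generation of adversarial examples concrete, here is a minimal sketch of the classic fast gradient sign method (FGSM) — not the superpixel or border attacks proposed in this dissertation — with a logistic model standing in for the trained classifier. All names and values are assumptions for illustration.

```python
# Sketch: perturb an input along the sign of the loss gradient (FGSM).
import numpy as np

rng = np.random.RandomState(0)
w = rng.randn(10)          # weights of a "trained" linear classifier
x = rng.randn(10)          # a clean input
y = 1.0                    # its true label

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Gradient of the cross-entropy loss with respect to the INPUT:
# dL/dx = (sigmoid(w.x) - y) * w
grad = (sigmoid(w @ x) - y) * w

# A small step in the direction that increases the loss.
eps = 0.25
x_adv = x + eps * np.sign(grad)

print(sigmoid(w @ x), sigmoid(w @ x_adv))  # confidence in class 1 drops on x_adv
```

The per-coordinate perturbation is bounded by eps, so the adversarial input stays close to the original while the classifier's confidence moves toward the wrong prediction.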

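The defense side can be sketched in the same spirit. This is a simplified stand-in for the dissertation's hidden-layer statistical-signature framework, not its actual method: clean activations are summarized by a Gaussian signature, and an input is flagged when its Mahalanobis distance from that signature is unusually large. The data, dimensions, and threshold here are assumptions for illustration.

```python
# Sketch: flag inputs whose hidden-layer activations deviate from the
# statistical signature of clean data.
import numpy as np

rng = np.random.RandomState(0)

# Pretend these are hidden-layer activations collected from CLEAN inputs.
clean = rng.randn(500, 8)

mu = clean.mean(axis=0)
cov = np.cov(clean, rowvar=False) + 1e-6 * np.eye(8)  # regularized covariance
cov_inv = np.linalg.inv(cov)

def mahalanobis(h):
    """Distance of one activation vector from the clean signature."""
    d = h - mu
    return float(np.sqrt(d @ cov_inv @ d))

# Calibrate a threshold on the clean data itself (99th percentile).
threshold = np.percentile([mahalanobis(h) for h in clean], 99)

def looks_adversarial(h):
    return mahalanobis(h) > threshold

print(looks_adversarial(mu + 10.0))  # a far-off activation is flagged
```

Note that this detector never sees an attack algorithm: it only needs statistics of clean activations, which mirrors the attack-agnostic, low-cost property claimed for the proposed framework.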