Location

https://ccse.kennesaw.edu/computing-showcase/cday-programs/spring2021program.php

Event Website

1. Information about the project: An inference attack is an adversarial algorithm that identifies or recovers items in a model's training data given only the trained model. Such attacks can be mounted against machine learning-based services currently in operation, which poses a serious privacy issue. Even with the emergence of federated learning, inference attacks remain possible. The only existing defense is differential privacy, which adds noise to the data so that the attack fails (a rough illustrative sketch follows below); however, differential privacy inherently degrades model performance. This research started from the idea of constructing a machine learning algorithm that is both safe against inference attacks and performs well.

2. Information about team members: https://www.linkedin.com/in/hongkyu-lee-9b28ab20a/
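As a rough illustration of the differential-privacy idea mentioned above (not the method proposed in this project), the sketch below clips a model update and adds Gaussian noise before it leaves the user; the clipping norm and noise scale are hypothetical example values.

import numpy as np

def dp_noised_gradient(grad, clip_norm=1.0, noise_std=0.5, rng=None):
    # Clip the gradient's norm, then add Gaussian noise (Gaussian-mechanism style),
    # so that an attacker inspecting the update learns less about the raw data.
    rng = rng or np.random.default_rng()
    norm = np.linalg.norm(grad)
    clipped = grad * min(1.0, clip_norm / (norm + 1e-12))
    return clipped + rng.normal(0.0, noise_std * clip_norm, size=grad.shape)

Larger noise makes inference harder but, as noted above, also reduces the model's accuracy.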

Document Type

Event

Start Date

26-4-2021 5:00 PM

Description

Machine learning (ML) algorithms require massive amounts of data. Firms such as Google and Facebook exploit users' data to deliver more precise ML-based services. However, collecting users' data is risky because private data can be leaked in transmission. As a remedy, federated learning was introduced. In federated learning, a central server distributes a machine learning model to users; each user trains the model on its own data and sends the model back, and the server then aggregates the models and distributes them again. Federated learning is more secure in that it frees users from the risk of sending their private data directly. Recently, however, several researchers have shown that federated learning is vulnerable to inference attacks. An inference attack is an adversarial algorithm that identifies the training data only by inspecting an ML model; a successful attack allows the attacker to learn users' private data. We propose defensive federated learning, a form of federated learning that deters inference attacks. Defensive federated learning makes the inference attack harder and obfuscates the original private data into a form unrecognizable to human eyes. Thus the success rate of the inference attack decreases, and even if an attack succeeds, what the attacker sees is distorted data that cannot be deciphered. Importantly, even though the proposed scheme distorts the original data, it still learns from the distorted data and achieves high classification accuracy. We show that our proposed scheme achieves higher model performance and stronger resistance to inference attacks than differential privacy, which is the only existing defense against them.

Advisor(s): Dr. Junggab Son

Topic(s): Other (explain in the comments section)

N/A
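The round structure of federated learning described above can be illustrated with a minimal FedAvg-style sketch (an assumption for illustration only; it is not the defensive scheme proposed here). The function local_train is a hypothetical stand-in for each user's local training step.

import numpy as np

def federated_round(global_weights, user_datasets, local_train):
    # One round: the server distributes the model, each user trains locally,
    # and the server aggregates the returned weights by averaging.
    updated = []
    for data in user_datasets:
        # Only trained weights are sent back; raw user data never leaves the device.
        updated.append(local_train(np.copy(global_weights), data))
    return np.mean(updated, axis=0)

Inference attacks exploit the fact that these returned weights still encode information about each user's data, which is the exposure the proposed defensive scheme aims to reduce.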

Apr 26th, 5:00 PM

GR-34 Defensive Neural Network

https://digitalcommons.kennesaw.edu/cday/spring/graduateresearch/5