Privacy-Preserving Multimodal Sentiment Analysis
Abstract (300 words maximum)
As technology use continues to grow, cybersecurity professionals need to understand that it is no longer just their clients' passwords or card numbers that are at stake. With AI becoming a mainstay in the tech scene, the security risk it poses to ordinary people through deepfakes and increasingly advanced voice-cloning technology is substantial. This is the problem our lab aims to address. The Privacy Preserving Model Lab plans to create and train an AI model that encrypts picture, video, and voice sample data, improving security for both companies and the average citizen. Today, a cybercriminal can steal your pictures and voice and use them to impersonate you, taking control of your online life and accounts. We focus on training our AI model to encrypt this data both faster and more securely, because even though such models have proven effective, a single slip could unintentionally reveal personal information such as someone's identity or location. Using hardware like the Raspberry Pi with the AI HAT+ attachment, we can test our model more rigorously, subjecting it to more data and stress while letting us track and share our progress more effectively. Today, people unlock their devices with facial recognition and authenticate with their banks using voice passwords. We believe it is better to stay ahead of the threats that may arise, adjusting our model as needed, than to be unprepared when these new kinds of cyberattacks eventually arrive.
Academic department under which the project should be listed
CCSE - Information Technology
Primary Investigator (PI) Name
Honghui Xu