Location

https://www.kennesaw.edu/ccse/events/computing-showcase/fa24-cday-program.php

Document Type

Event

Start Date

November 19, 2024, 4:00 PM

Description

In the era of social media, sentiment analysis has become a vital instrument for understanding public opinion, especially on platforms such as LinkedIn and Twitter. Because user-generated content is informal and noisy, traditional sentiment classification techniques such as Naive Bayes and Support Vector Machines often struggle to capture context, sarcasm, and long-range dependencies. To improve sentiment analysis accuracy on social media datasets, with a specific focus on sentiment related to corporate layoffs, this study proposes an encoder-only transformer model. By leveraging the self-attention mechanism built into transformer architectures, our method captures intricate phrase patterns and contextual subtleties in textual data. To assess the model's performance on unseen data, we trained on a Twitter dataset and tested on a LinkedIn dataset. We used Word2Vec embeddings to represent tokens, helping the model capture semantic relationships in the text. Our results indicate that the transformer model performs noticeably better than conventional sentiment analysis methods, exhibiting greater robustness and flexibility when handling colloquial language and mixed emotions. This development could benefit businesses and organizations seeking to use social media insights on layoffs for data-driven decisions.
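The self-attention mechanism the abstract refers to can be sketched as follows. This is an illustrative, hedged example only, not the authors' implementation: the function name, dimensions, and random projection matrices are all hypothetical, and the input rows stand in for Word2Vec token embeddings.

```python
import numpy as np

def scaled_dot_product_attention(X, Wq, Wk, Wv):
    """Single-head self-attention over one sequence of token embeddings.

    X:           (seq_len, d_model) token embeddings (e.g. Word2Vec vectors)
    Wq, Wk, Wv:  (d_model, d_k) learned projections (random here for illustration)
    Returns:     (seq_len, d_k) context-aware token representations.
    """
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    # Pairwise token affinities, scaled by sqrt(d_k) for numerical stability
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    # Softmax over the key axis so each token's weights sum to 1
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V

# Toy usage: a 5-token sentence with 8-dim embeddings and a 4-dim head
rng = np.random.default_rng(0)
X = rng.standard_normal((5, 8))
Wq, Wk, Wv = (rng.standard_normal((8, 4)) for _ in range(3))
out = scaled_dot_product_attention(X, Wq, Wk, Wv)
print(out.shape)  # (5, 4)
```

In an encoder-only classifier of the kind the study describes, several such attention layers (with feed-forward sublayers) would be stacked, the token outputs pooled, and a final linear layer would map the pooled vector to sentiment classes.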

GMR-215 Efficient Sentiment Analysis using Encoder-only Transformer