Location
https://www.kennesaw.edu/ccse/events/computing-showcase/sp25-cday-program.php
Document Type
Event
Start Date
April 15, 2025, 4:00 PM
Description
Misinformation spreads quickly, and 60% of consumers now question media reliability (Redline Digital, 2023). Manual verification is slow, and most automated systems still rely on binary real/fake classification, which overlooks more nuanced types of misinformation. We propose a multi-class deep learning approach that uses a fine-tuned BERT model and a custom BiLSTM with attention to detect categories such as satire, conspiracy, and bias. Our models were trained on a balanced subset of the Fake News Corpus spanning nine distinct misinformation classes. By addressing both class imbalance and linguistic ambiguity, the system improves contextual understanding and detection across varied news content. Our approach demonstrates that scalable, multi-class classification provides a more accurate and insightful solution to misinformation detection than binary labeling.
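As a rough illustration of the BiLSTM-with-attention component described above, the sketch below shows a nine-class classifier in PyTorch. The layer sizes, vocabulary size, and masking scheme are illustrative assumptions, not the authors' exact configuration; the fine-tuned BERT counterpart could analogously use a standard nine-label sequence-classification head.

# Minimal sketch (PyTorch) of a BiLSTM-with-attention classifier for
# nine misinformation classes; dimensions and vocabulary are assumptions.
import torch
import torch.nn as nn

class BiLSTMAttentionClassifier(nn.Module):
    def __init__(self, vocab_size, embed_dim=300, hidden_dim=128, num_classes=9):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.bilstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True,
                              bidirectional=True)
        # Additive attention: score each time step, then pool with softmax weights.
        self.attn = nn.Linear(2 * hidden_dim, 1)
        self.classifier = nn.Linear(2 * hidden_dim, num_classes)

    def forward(self, token_ids, mask):
        # token_ids: (batch, seq_len) integer ids; mask: 1 for real tokens, 0 for padding
        embedded = self.embedding(token_ids)               # (B, T, E)
        outputs, _ = self.bilstm(embedded)                 # (B, T, 2H)
        scores = self.attn(outputs).squeeze(-1)            # (B, T)
        scores = scores.masked_fill(mask == 0, float("-inf"))
        weights = torch.softmax(scores, dim=-1)            # attention over time steps
        context = torch.bmm(weights.unsqueeze(1), outputs).squeeze(1)  # (B, 2H)
        return self.classifier(context)                    # (B, num_classes) logits

# Example forward pass with dummy data.
model = BiLSTMAttentionClassifier(vocab_size=30000)
ids = torch.randint(1, 30000, (4, 64))
mask = torch.ones(4, 64, dtype=torch.long)
logits = model(ids, mask)                                  # shape (4, 9)
loss = nn.CrossEntropyLoss()(logits, torch.randint(0, 9, (4,)))

Masking padded positions before the softmax keeps the attention weights on real tokens only, which matters when articles of very different lengths are batched together.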
Included in
GRM-050 Context-Aware Misinformation Detection Using Fine-Tuned BERT and BiLSTM with Attention