GRM-134-128 NeuroVision: Mapping Brain Signals to Language and Visual Meaning

Presenter Information

Location

https://www.kennesaw.edu/ccse/events/computing-showcase/sp26-cday-program.php

Document Type

Event

Start Date

4-22-2026 4:00 PM

Description

NeuroVision presents a unified framework for mapping electroencephalography (EEG) signals to language and visual meaning. Extracting semantic information from EEG remains a fundamental challenge due to its low signal-to-noise ratio, high dimensionality, and inter-subject variability. To address these challenges, we propose a multimodal representation learning framework that aligns EEG signals with both textual and visual embeddings through temporal modeling, spatial brain-region decomposition, and contrastive learning. The framework integrates self-supervised pretraining with supervised multimodal alignment to learn robust and transferable representations. Experimental results show a BLEU-1 score of 0.1106 and a ROUGE-1 score of 0.1493 for EEG-to-text generation, alongside a 52% improvement in retrieval performance (R@1: 1.38%) and high cross-modal consistency (0.9998). These results provide strong evidence that EEG signals encode modality-independent semantic structure, advancing general brain-to-meaning decoding and enabling scalable, next-generation brain–computer interface systems.
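
The abstract does not publish implementation details, but the contrastive alignment it describes is commonly realized with a CLIP-style symmetric InfoNCE objective between paired embeddings. The sketch below is a minimal, hypothetical PyTorch illustration of that idea; the function name info_nce, the 0.07 temperature, and the batch/embedding sizes are illustrative assumptions, not the authors' code.

```python
import torch
import torch.nn.functional as F

def info_nce(eeg_z, other_z, temperature=0.07):
    """Symmetric InfoNCE loss between two batches of paired embeddings.

    eeg_z, other_z: (batch, dim) tensors where row i of each tensor comes
    from the same stimulus, so matching rows are the positive pairs.
    """
    eeg_z = F.normalize(eeg_z, dim=-1)            # unit-norm so dot = cosine
    other_z = F.normalize(other_z, dim=-1)
    logits = eeg_z @ other_z.t() / temperature    # (batch, batch) similarity logits
    targets = torch.arange(eeg_z.size(0))         # i-th EEG matches i-th text/image
    # Average the EEG->other and other->EEG cross-entropy terms.
    return 0.5 * (F.cross_entropy(logits, targets)
                  + F.cross_entropy(logits.t(), targets))

# Toy usage with random stand-ins for the three encoders' outputs.
batch, dim = 8, 256
eeg_z = torch.randn(batch, dim)    # hypothetical EEG encoder output
text_z = torch.randn(batch, dim)   # hypothetical text encoder output
img_z = torch.randn(batch, dim)    # hypothetical image encoder output
loss = info_nce(eeg_z, text_z) + info_nce(eeg_z, img_z)
print(loss.item())
```

Summing one alignment term against text embeddings and one against image embeddings, as in the last lines, mirrors the abstract's claim that a single EEG representation can be aligned to both modalities at once.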
