Location
https://www.kennesaw.edu/ccse/events/computing-showcase/sp26-cday-program.php
Document Type
Event
Start Date
April 22, 2026, 4:00 PM
Description
While Virtual Reality (VR) offers immersive educational opportunities, its pedagogical success relies heavily on a genuine sense of "instructor presence." This project presents a hybrid pipeline that automatically refines a presenter's 3D avatar gestures using semantic AI. Our non-VR recording system combines high-fidelity facial tracking with MediaPipe upper-body pose estimation, both captured from standard RGB video. For emotion recognition, a local Large Language Model analyzes audio transcripts to generate a timestamped emphasis track; this semantic engine intelligently exaggerates gestures during critical lecture moments. The captured motion and AI-enhanced gestures are synthesized and replayed on a virtual lecturer within a VR environment for evaluation.
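The gesture-refinement step described above can be sketched in a few lines. This is a minimal illustration, not the project's implementation: the `EmphasisSpan` schema, the 1.0-neutral gain convention, and the rest-pose scaling rule are all assumptions made for the example.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class EmphasisSpan:
    # One entry of the timestamped emphasis track (hypothetical schema):
    # the LLM marks [start, end) seconds of the lecture as emphasized.
    start: float
    end: float
    intensity: float  # 1.0 = neutral, >1.0 = exaggerate the gesture

def emphasis_at(track: List[EmphasisSpan], t: float) -> float:
    """Return the gesture gain at time t (1.0 outside any emphasized span)."""
    for span in track:
        if span.start <= t < span.end:
            return span.intensity
    return 1.0

def exaggerate_gesture(angles_deg: List[float],
                       rest_deg: List[float],
                       gain: float) -> List[float]:
    """Scale joint angles away from a rest pose by `gain`, so emphasized
    lecture moments produce larger motion excursions on the avatar."""
    return [rest + gain * (a - rest) for a, rest in zip(angles_deg, rest_deg)]

# A single emphasized moment between 10.0 s and 12.5 s:
track = [EmphasisSpan(start=10.0, end=12.5, intensity=1.5)]
pose = [20.0, -10.0]   # captured joint angles (degrees) at t = 11.0 s
rest = [0.0, 0.0]      # rest pose for the same joints
print(exaggerate_gesture(pose, rest, emphasis_at(track, 11.0)))  # [30.0, -15.0]
```

In a full pipeline, the same per-timestamp gain would be applied across the MediaPipe-estimated upper-body joints before replaying the motion on the virtual lecturer.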
Included in
GC-119-134 Pipeline for VR Embodied Lecture Authoring and AI Gesture Refinement
https://www.kennesaw.edu/ccse/events/computing-showcase/sp26-cday-program.php