UR-60 Video-to-Video Synthesis With Semantically Segmented Video
Location
https://ccse.kennesaw.edu/computing-showcase/cday-programs/spring2021program.php
Document Type
Event
Start Date
April 26, 2021, 5:00 PM
Description
Our project studies the use of generative adversarial networks (GANs) to translate semantically segmented video into photorealistic video, a process known as video-to-video synthesis. The model learns a mapping from semantic segmentation masks to realistic images that depict the corresponding semantic labels. To achieve this, we employ a conditional GAN-based learning method whose output is conditioned on the source video being translated. Given semantically labeled video, our model synthesizes a translated video that resembles real video, accurately replicating low-frequency details from the source.
Advisor(s): Dr. Mohammed Aledhari
Topic(s): Artificial Intelligence
CS 4732
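To make the conditioning mechanism concrete, below is a minimal, self-contained PyTorch sketch of a pix2pix-style conditional GAN that translates a segmentation mask into an RGB frame. This is an illustration of the general technique, not the project's actual model: the layer shapes, the n_classes label count, and the patch-level discriminator are all assumptions, and a full video-to-video system would additionally enforce temporal consistency across frames.

```python
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Maps a one-hot segmentation mask (n_classes channels) to an RGB frame."""
    def __init__(self, n_classes):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(n_classes, 64, 4, stride=2, padding=1),     # 64x64 -> 32x32
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(64, 128, 4, stride=2, padding=1),           # 32x32 -> 16x16
            nn.BatchNorm2d(128),
            nn.LeakyReLU(0.2, inplace=True),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1),  # 16x16 -> 32x32
            nn.BatchNorm2d(64),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1),    # 32x32 -> 64x64
            nn.Tanh(),                                            # RGB in [-1, 1]
        )

    def forward(self, mask):
        return self.net(mask)

class Discriminator(nn.Module):
    """Scores (mask, image) pairs; conditioning on the mask is what makes the GAN conditional."""
    def __init__(self, n_classes):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(n_classes + 3, 64, 4, stride=2, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(64, 1, 4, stride=2, padding=1),  # patch-level real/fake logits
        )

    def forward(self, mask, image):
        return self.net(torch.cat([mask, image], dim=1))

if __name__ == "__main__":
    n_classes = 20  # assumed label count (e.g., a Cityscapes-like label set)
    G, D = Generator(n_classes), Discriminator(n_classes)
    bce = nn.BCEWithLogitsLoss()

    mask = torch.randn(1, n_classes, 64, 64)  # stand-in for a one-hot mask
    real = torch.randn(1, 3, 64, 64)          # stand-in for the matching real frame
    fake = G(mask)

    # Discriminator: push real pairs toward 1 and generated pairs toward 0.
    real_logits = D(mask, real)
    fake_logits = D(mask, fake.detach())
    d_loss = bce(real_logits, torch.ones_like(real_logits)) + \
             bce(fake_logits, torch.zeros_like(fake_logits))

    # Generator: try to make the discriminator label its output as real.
    g_logits = D(mask, fake)
    g_loss = bce(g_logits, torch.ones_like(g_logits))
    print(f"d_loss={d_loss.item():.3f}  g_loss={g_loss.item():.3f}")
```

In practice, video-to-video models such as vid2vid extend this per-frame setup with optical-flow-based warping losses and multi-scale discriminators so that consecutive synthesized frames remain temporally coherent.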