UR-60 Video-to-Video Synthesis With Semantically Segmented Video

Presenter Information

Location

https://ccse.kennesaw.edu/computing-showcase/cday-programs/spring2021program.php

Streaming Media

Document Type

Event

Start Date

26-4-2021 5:00 PM

Description

Our project studies the use of generative adversarial networks (GANs) to translate semantically segmented video into photo-realistic video, a process known as video-to-video synthesis. The model learns a mapping from semantic segmentation masks to realistic images that depict the corresponding semantic labels. To achieve this, we employ a conditional GAN-based learning method that produces output conditioned on the source video being translated. Given semantically labeled video, our model synthesizes a translated video that resembles real footage, accurately replicating low-frequency details from the source.
Advisor(s): Dr. Mohammed Aledhari
Topic(s): Artificial Intelligence
CS 4732
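
To make the conditional GAN objective mentioned in the description concrete, here is a minimal toy sketch. All names, shapes, and values are illustrative assumptions, not the project's actual model: the real system uses learned convolutional networks over video frames, while this sketch only shows how the discriminator is conditioned on the segmentation mask and how the two adversarial losses are computed.

```python
import numpy as np

# Illustrative sketch of a conditional GAN objective (hypothetical toy setup;
# the actual project uses convolutional networks on video frames).

rng = np.random.default_rng(0)

def discriminator(seg_mask, frame, weights):
    """Toy conditional discriminator: scores a (segmentation, frame) pair.

    Conditioning is done by concatenating the segmentation mask with the
    (real or generated) frame before scoring, so D judges whether the frame
    is realistic *given* its semantic labels.
    """
    x = np.concatenate([seg_mask.ravel(), frame.ravel()])
    return 1.0 / (1.0 + np.exp(-x @ weights))  # sigmoid -> probability "real"

def d_loss(d_real, d_fake):
    # Discriminator loss: push D(seg, real) toward 1 and D(seg, fake) toward 0.
    return -(np.log(d_real) + np.log(1.0 - d_fake))

def g_loss(d_fake):
    # Generator loss (non-saturating form): push D(seg, fake) toward 1.
    return -np.log(d_fake)

# Toy data: a 4x4 "segmentation mask" and 4x4 "frames".
seg = rng.random((4, 4))
real_frame = rng.random((4, 4))
fake_frame = rng.random((4, 4))
w = rng.normal(size=32) * 0.1  # 16 mask values + 16 frame values

p_real = discriminator(seg, real_frame, w)
p_fake = discriminator(seg, fake_frame, w)
print("D loss:", d_loss(p_real, p_fake), "G loss:", g_loss(p_fake))
```

In training, the two losses would be minimized alternately with respect to the discriminator's and generator's parameters; here only the forward computation is shown.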

