This project demonstrates a Generative Adversarial Network (GAN) that generates video frames sequentially: the model takes an input image, produces an output frame, and feeds that frame back as the input for the next iteration, building up a sequence of frames. The generated frames are then combined into a video.
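The autoregressive loop described above can be sketched as follows. This is a minimal illustration, not the project's actual code: `toy_generator` is a hypothetical stand-in for a trained GAN generator, and the frame shape is arbitrary.

```python
import numpy as np

def generate_video_frames(generator, first_frame, num_frames):
    """Autoregressively generate frames: each output is fed back as input."""
    frames = [first_frame]
    for _ in range(num_frames - 1):
        frames.append(generator(frames[-1]))
    return frames

# Hypothetical stand-in for a trained generator: brightens the frame each step.
def toy_generator(frame):
    return np.clip(frame * 1.05, 0.0, 1.0)

seed = np.full((64, 64, 3), 0.5, dtype=np.float32)  # gray seed frame
frames = generate_video_frames(toy_generator, seed, 10)
```

In the real model, `generator` would be the trained GAN's forward pass, and the resulting frame list would be written out with a video encoder (e.g. OpenCV's `VideoWriter`).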
- Generates one frame at a time, reusing the previous output as the input for the next frame.
- Combines the generated frames into a complete video.
- Extracts video frames and processes them for training and testing.
- Converts a sentence into an image representation using Word2Vec embeddings.
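One way the sentence-to-image step could work is by averaging the Word2Vec vectors of a sentence's words and broadcasting the result onto a pixel grid. The sketch below is an assumption about the approach, not the project's implementation: `toy_embeddings` stands in for trained Word2Vec vectors, and the tiling scheme is illustrative.

```python
import numpy as np

def sentence_to_image(sentence, embeddings, size=64):
    """Map a sentence to an image-like array: average its word vectors,
    then tile the sentence vector across a (size, size) grid of pixels."""
    words = sentence.lower().split()
    vecs = [embeddings[w] for w in words if w in embeddings]
    if not vecs:
        raise ValueError("no known words in sentence")
    mean_vec = np.mean(vecs, axis=0)  # shape: (dim,)
    # Broadcast the sentence vector to every pixel position.
    return np.broadcast_to(mean_vec, (size, size, mean_vec.shape[0])).copy()

# Hypothetical stand-in for trained Word2Vec vectors (3-dim for brevity).
rng = np.random.default_rng(0)
toy_embeddings = {w: rng.standard_normal(3) for w in ["a", "dog", "runs"]}
img = sentence_to_image("a dog runs", toy_embeddings, size=8)
```

With real Word2Vec vectors (e.g. from gensim), the embedding dimension would typically be 100-300, and the resulting array could serve as the conditioning input to the first GAN generation step.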