VQGAN-CLIP Overview
This example uses Anaconda to manage virtual Python environments. Create a new virtual Python environment for VQGAN-CLIP and activate it:

  conda create --name vqgan python=3.9
  conda activate vqgan

Install PyTorch in the new environment. Note: this installs the CUDA version of PyTorch; if you want to use an AMD graphics card, read the AMD section below.
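As a sketch, a typical CUDA PyTorch install plus repo checkout might look like the following. The version pins are illustrative (pick the wheel matching your CUDA version from pytorch.org), and the requirements.txt step is an assumption to verify against the repo's README:

  # Install a CUDA build of PyTorch (illustrative pins; choose the wheel
  # that matches your installed CUDA toolkit)
  pip install torch==1.9.0+cu111 torchvision==0.10.0+cu111 torchaudio==0.9.0 \
      -f https://download.pytorch.org/whl/torch_stable.html

  # Fetch the repo and install its remaining Python dependencies
  # (assumes the repo ships a requirements.txt)
  git clone https://github.com/nerdyrodent/VQGAN-CLIP
  cd VQGAN-CLIP
  pip install -r requirements.txt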
VQGAN CLIP - Open Source Agenda
A repo for running VQGAN+CLIP locally. This started out as a Google Colab notebook derived from Katherine Crowson's VQGAN+CLIP work.

Environment:
- Tested on Ubuntu 20.04
- GPU: Nvidia RTX 3090

Typical VRAM requirements:
- 24 GB for a 900x900 image
- 10 GB for a 512x512 image
- 8 GB for a 380x380 image
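Since the feasible output resolution is bounded by VRAM, it is worth checking how much memory your card has before picking a size. One way to do that with Nvidia's own tooling:

  # Report the GPU model and its total VRAM
  nvidia-smi --query-gpu=name,memory.total --format=csv

Compare the reported total against the figures above: with roughly 10 GB you can expect a 512x512 image to fit, while 900x900 calls for a 24 GB card.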
Introduction to VQGAN+CLIP - heystacks
For instance, setting display_frequency to 1 will display every iteration VQGAN makes in the Execution cell, while setting it to 33 will only show you every 33rd image (the 1st, 33rd, 66th, 99th, and so on). The next cell, “VQGAN+CLIP Parameters and Execution,” contains all the remaining parameters that are exclusive to VQGAN+CLIP.
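In other words, an image is shown whenever the iteration counter divides evenly by display_frequency. A minimal shell sketch of that arithmetic (the notebook itself is Python; this only illustrates which iterations pass the test, assuming the check is iteration % display_frequency == 0):

  display_frequency=33
  for i in $(seq 0 100); do
    # Display whenever the counter divides evenly; iteration 0 (the 1st
    # image) always passes, which is why the first image is always shown
    if [ $((i % display_frequency)) -eq 0 ]; then
      echo "display image at iteration $i"   # prints 0, 33, 66, 99
    fi
  done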