CLIP_on_Tesla_K20Xm/projects/styleGAN2

End-to-end steering + CLIP pipeline with StyleGAN2.

Contents:
  __init__.py
  clip_classifier_utils.py
  ganalyze_common_utils.py
  ganalyze_transformations.py
  ganalyze_with_clip.py
  generation_demo.ipynb
  README.md

  1. Create a Python virtual environment and install the dependencies for CLIP and stylegan2-ada-pytorch.
  2. Install the dependencies listed in the stylegan2-ada-pytorch repo; the major ones are PyTorch >= 1.7 and CUDA >= 11.0.
  3. Download the FFHQ-pretrained StyleGAN2 model from the above repo.
  4. In that virtual environment, run through generation_demo.ipynb: this notebook samples images from a StyleGAN2 network and scores them using CLIP.
  5. ganalyze_with_clip.py is the main script that runs the steering pipeline with a generative model and CLIP. Change the output paths and model paths from within the code.
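The "scores them using CLIP" step in the notebook boils down to cosine similarity between a unit-normalized image embedding and a text embedding. A minimal sketch of that scoring, using random stand-in vectors instead of real CLIP embeddings (the actual pipeline gets them from CLIP's encode_image/encode_text):

```python
import numpy as np

def clip_score(image_emb: np.ndarray, text_emb: np.ndarray) -> float:
    """Cosine similarity between an image embedding and a text embedding.
    This is the quantity CLIP-based scoring uses to rank how well a
    generated image matches a text prompt."""
    image_emb = image_emb / np.linalg.norm(image_emb)
    text_emb = text_emb / np.linalg.norm(text_emb)
    return float(image_emb @ text_emb)

# Stand-in embeddings; CLIP ViT-B/32 embeddings are 512-dimensional.
rng = np.random.default_rng(0)
img = rng.standard_normal(512)
txt = rng.standard_normal(512)

print(clip_score(img, txt))   # some value in [-1, 1]
print(clip_score(img, img))   # 1.0 for identical embeddings
```

In the real pipeline, images sampled from the StyleGAN2 generator are preprocessed and encoded by CLIP before this comparison; the function and variable names above are illustrative only.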
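The steering idea behind ganalyze_with_clip.py follows GANalyze: move a latent code along a learned direction so that the CLIP score of the generated image changes with the step size. A minimal sketch of one such steering step, with a hypothetical `steer` helper and a random stand-in direction (the real transformation is trained, and lives in ganalyze_transformations.py):

```python
import numpy as np

def steer(z: np.ndarray, direction: np.ndarray, alpha: float) -> np.ndarray:
    """One linear steering step in latent space: z' = z + alpha * direction.
    In the full pipeline, `direction` is learned so that the CLIP score of
    G(z') increases monotonically with alpha."""
    return z + alpha * direction

rng = np.random.default_rng(1)
z = rng.standard_normal(512)          # latent code sampled from the prior
d = rng.standard_normal(512)
d /= np.linalg.norm(d)                # unit-length steering direction

z_plus = steer(z, d, 0.5)             # step toward higher target score
z_minus = steer(z, d, -0.5)           # step away from it
```

Each steered latent would then be fed back through the StyleGAN2 generator and re-scored with CLIP; this sketch only shows the latent-space arithmetic.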