CLIP/clip.py at main · openai/CLIP · GitHub
github.com › openai › CLIP — Nov 08, 2021. Latest commit 573315e on Nov 8, 2021: use `pkg_resources` from `setuptools` to parse version strings; it is required by PyTorch >= 0.4.1 anyway. 8 contributors. 229 lines (175 sloc), 8.35 KB.
GitHub - openai/CLIP: Contrastive Language-Image Pretraining
github.com › openai › CLIP — Nov 09, 2021. Usage example from the README:

import torch
import clip
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

image = preprocess(Image.open("CLIP.png")).unsqueeze(0).to(device)
text = clip.tokenize(["a diagram", "a dog", "a cat"]).to(device)

with torch.no_grad():
    image_features = model.encode_image(image)
    text_features = model.encode_text(text)

    logits_per_image, logits_per_text = model(image, text)
    probs = logits_per_image.softmax(dim=-1).cpu().numpy()
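To make the last two steps concrete, here is a minimal NumPy-only sketch of how `model(image, text)` turns encoder outputs into the `probs` above: CLIP L2-normalizes both embeddings, so the logits are cosine similarities scaled by a learned temperature, and a softmax over them gives one probability per text prompt. The random feature vectors below are stand-ins for the real `encode_image`/`encode_text` outputs, and `logit_scale = 100.0` is an assumed value, not read from a checkpoint.

```python
import numpy as np

rng = np.random.default_rng(0)
image_features = rng.normal(size=(1, 512))  # stand-in for model.encode_image(image)
text_features = rng.normal(size=(3, 512))   # stand-in for model.encode_text(text)

# L2-normalize each embedding so dot products become cosine similarities.
image_features /= np.linalg.norm(image_features, axis=-1, keepdims=True)
text_features /= np.linalg.norm(text_features, axis=-1, keepdims=True)

# Scale cosine similarities by the (assumed) learned temperature.
logit_scale = 100.0
logits_per_image = logit_scale * image_features @ text_features.T  # shape (1, 3)

# Softmax over the text prompts yields class probabilities.
shifted = logits_per_image - logits_per_image.max(axis=-1, keepdims=True)
exp = np.exp(shifted)
probs = exp / exp.sum(axis=-1, keepdims=True)
print(probs)  # one probability per prompt; each row sums to 1
```

With the real model, swapping the random vectors for the actual encoder outputs reproduces `logits_per_image.softmax(dim=-1)` from the README snippet.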