diff --git a/README.md b/README.md
index 0a9cb4f..bd1a971 100644
--- a/README.md
+++ b/README.md
@@ -14,11 +14,12 @@ CLIP (Contrastive Language-Image Pre-Training) is a neural network trained on a
 
 ## Usage
 
-First, [install PyTorch 1.7.1](https://pytorch.org/get-started/locally/) and torchvision, as well as small additional dependencies. On a CUDA GPU machine, the following will do the trick:
+First, [install PyTorch 1.7.1](https://pytorch.org/get-started/locally/) and torchvision, as well as small additional dependencies, and then install this repo as a Python package. On a CUDA GPU machine, the following will do the trick:
 
 ```bash
 $ conda install --yes -c pytorch pytorch=1.7.1 torchvision cudatoolkit=11.0
 $ pip install ftfy regex tqdm
+$ pip install git+https://github.com/openai/CLIP.git
 ```
 
 Replace `cudatoolkit=11.0` above with the appropriate CUDA version on your machine or `cpuonly` when installing on a machine without a GPU.
@@ -51,7 +52,7 @@ The CLIP module `clip` provides the following methods:
 
 #### `clip.available_models()`
 
-Returns the name(s) of the available CLIP models.
+Returns the names of the available CLIP models.
 
 #### `clip.load(name, device=..., jit=True)`