updated README.md to include pip install command
This commit is contained in:
parent 3bee28119e
commit 4a443c18a4

@@ -14,11 +14,12 @@ CLIP (Contrastive Language-Image Pre-Training) is a neural network trained on a

## Usage

-First, [install PyTorch 1.7.1](https://pytorch.org/get-started/locally/) and torchvision, as well as small additional dependencies. On a CUDA GPU machine, the following will do the trick:
+First, [install PyTorch 1.7.1](https://pytorch.org/get-started/locally/) and torchvision, as well as small additional dependencies, and then install this repo as a Python package. On a CUDA GPU machine, the following will do the trick:

```bash
$ conda install --yes -c pytorch pytorch=1.7.1 torchvision cudatoolkit=11.0
$ pip install ftfy regex tqdm
$ pip install git+https://github.com/openai/CLIP.git
```
Replace `cudatoolkit=11.0` above with the appropriate CUDA version on your machine or `cpuonly` when installing on a machine without a GPU.
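As a sketch of the substitution described above (assuming a machine with no CUDA device), the same steps with `cpuonly` in place of `cudatoolkit=11.0` would be:

```bash
# CPU-only variant of the install above: `cpuonly` replaces `cudatoolkit=11.0`
$ conda install --yes -c pytorch pytorch=1.7.1 torchvision cpuonly
$ pip install ftfy regex tqdm
$ pip install git+https://github.com/openai/CLIP.git
```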

@@ -51,7 +52,7 @@ The CLIP module `clip` provides the following methods:

#### `clip.available_models()`

-Returns the name(s) of the available CLIP models.
+Returns the names of the available CLIP models.

#### `clip.load(name, device=..., jit=True)`
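As a minimal sketch of how these two methods fit together (assuming the package has been installed as above, and using `"ViT-B/32"` purely as an example model name):

```python
import torch
import clip

# Pick a device; fall back to CPU when no GPU is present.
device = "cuda" if torch.cuda.is_available() else "cpu"

# List the available model names, then load one of them.
# clip.load returns the model and its matching image preprocessing transform.
print(clip.available_models())
model, preprocess = clip.load("ViT-B/32", device=device)
```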