Upload README.md

change the requirement for the version of `transformers`.

README.md CHANGED
@@ -52,7 +52,7 @@ We will provide a detailed guide later on how to modify your `modeling_xxx.py` f
 ### 2.2 Quick Start
 
 #### 2.2.1 Environment Setup
-You need to install `transformers>=4.53`, and we recommend using `lm_eval>=0.4.9` for running evaluations. We suggest managing your Python environment with `conda` for better dependency control.
+You need to install `transformers>=4.53.0,<4.54.0`, and we recommend using `lm_eval>=0.4.9` for running evaluations. We suggest managing your Python environment with `conda` for better dependency control.
 
 ```bash
 conda create -n sepcache python=3.10
@@ -64,7 +64,7 @@ pip install lm_eval==0.4.9
 You can use `SepCache` by specifying `custom_generate="transformers-community/sep_cache"` or `custom_generate="Gausson/sep_cache"` when calling the `generate` function. In our demo, we have already prepared sample monkey patching for the `Llama 3 series` models and provided some common parameters for initializing `SepCache`.
 
 ```python
-# requires `transformers>=4.53.0`
+# requires `transformers>=4.53.0,<4.54.0`
 from transformers import AutoModelForCausalLM, AutoTokenizer
 
 # Preparing model, tokenizer, and model inputs
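For reference, the effect of tightening the pin from `>=4.53` to `>=4.53.0,<4.54.0` can be sanity-checked with the `packaging` library. This is a minimal sketch, not part of the README being changed; it assumes `packaging` is available (it ships with virtually every Python environment via `pip`/`setuptools`):

```python
from packaging.specifiers import SpecifierSet
from packaging.version import Version

# The requirement introduced by this change: at least 4.53.0, but below 4.54.0
spec = SpecifierSet(">=4.53.0,<4.54.0")

print(Version("4.53.2") in spec)  # True: within the pinned range
print(Version("4.53") in spec)    # True: the old `>=4.53` floor is unchanged
print(Version("4.54.0") in spec)  # False: newly excluded by the upper bound
```

The added upper bound means environments that resolve to `transformers` 4.54 or later no longer satisfy the requirement, which is the point of the commit.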