datasets:
- togethercomputer/llama-instruct
---

# Llama-2-7B-32K-Instruct

## Model Description

Llama-2-7B-32K-Instruct is an open-source, long-context chat model finetuned from [Llama-2-7B-32K](https://huggingface.co/togethercomputer/Llama-2-7B-32K) over high-quality instruction and chat data.

We built Llama-2-7B-32K-Instruct with fewer than 200 lines of Python script using the [Together API](https://together.ai/blog/api-announcement), and we also make the [recipe fully available](https://github.com/togethercomputer/LLaMA-2-32K-Instruct).

We hope that this can enable everyone to finetune their own version of [Llama-2-7B-32K](https://huggingface.co/togethercomputer/Llama-2-7B-32K) — play with the [Together API](https://together.ai/blog/api-announcement) and give us feedback!

## Data Collection Details

Llama-2-7B-32K-Instruct is fine-tuned over a combination of two parts:

1. **19K single- and multi-round conversations generated by human instructions and [Llama-2-70B-Chat](https://huggingface.co/meta-llama/Llama-2-70b-chat-hf) outputs**.
We collected the dataset following the distillation paradigm used by Alpaca, Vicuna, WizardLM, and Orca — producing instructions by querying a powerful LLM (in this case, [Llama-2-70B-Chat](https://huggingface.co/meta-llama/Llama-2-70b-chat-hf)); a minimal sketch of this querying loop appears after the list.
The complete dataset is also released [here](https://huggingface.co/datasets/togethercomputer/llama-instruct).
We also share the complete recipe for the data collection process [here](https://github.com/togethercomputer/LLaMA-2-32K-Chat).

2. **Long-context Summarization and Long-context QA**.
We follow the recipe of [Llama-2-7B-32K](https://together.ai/blog/Llama-2-7B-32K) and train our model with the [BookSum dataset](https://huggingface.co/datasets/togethercomputer/Long-Data-Collections) and [Multi-document Question Answering](https://arxiv.org/abs/2307.03172).
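
To make part 1 concrete, here is a minimal sketch of that querying loop. It is an illustration rather than the released recipe (see the links above for the real one): the endpoint URL, model string, and response shape reflect our understanding of the Together inference API at the time of writing, and the seed instructions and output file are placeholders.

```python
import json
import os

import requests  # standard HTTP client; `pip install requests`

# Placeholder seed instructions; the real recipe grows these over multiple rounds.
seed_instructions = [
    "Explain the difference between a process and a thread.",
    "Summarize the plot of Pride and Prejudice in three sentences.",
]

conversations = []
for instruction in seed_instructions:
    # Query the teacher model (Llama-2-70B-Chat) through the Together API.
    # Endpoint, payload, and response shape are assumptions, not a spec.
    response = requests.post(
        "https://api.together.xyz/inference",
        headers={"Authorization": f"Bearer {os.environ['TOGETHER_API_KEY']}"},
        json={
            "model": "togethercomputer/llama-2-70b-chat",
            "prompt": f"[INST] {instruction} [/INST]\n\n",
            "max_tokens": 512,
            "temperature": 0.7,
        },
    )
    answer = response.json()["output"]["choices"][0]["text"]
    conversations.append({"instruction": instruction, "response": answer})

# Persist the distilled pairs for later finetuning.
with open("llama_instruct.jsonl", "w") as f:
    for row in conversations:
        f.write(json.dumps(row) + "\n")
```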

The final data mixture used for model finetuning is: 19K instruction (50%) + BookSum (25%) + MQA (25%).
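
As a sketch of how such a 50/25/25 mixture can be assembled (the function and variable names are illustrative, and each argument is assumed to be a list of prepared training examples; this is not the released recipe):

```python
import random

def build_mixture(instruct_19k, booksum, mqa, total_size, seed=0):
    """Compose the finetuning mixture: 50% instruction, 25% BookSum, 25% MQA."""
    rng = random.Random(seed)  # fixed seed for a reproducible mixture
    mixture = (
        rng.sample(instruct_19k, total_size // 2)
        + rng.sample(booksum, total_size // 4)
        + rng.sample(mqa, total_size // 4)
    )
    rng.shuffle(mixture)  # interleave the three sources
    return mixture
```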

## Model Usage

We encourage you to try out this model using the [Together API](https://together.ai/blog/api-announcement). The updated inference stack allows for efficient inference.

To run the model locally, we strongly recommend installing Flash Attention V2, which is necessary to obtain the best performance:

```
# Please update the path of `CUDA_HOME`
export CUDA_HOME=/usr/local/cuda-11.8
pip install transformers==4.31.0
pip install sentencepiece
pip install ninja
pip install flash-attn --no-build-isolation
pip install git+https://github.com/HazyResearch/flash-attention.git#subdirectory=csrc/rotary
```
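
A quick way to confirm the build succeeded before loading the model (just an import check; an error here means the compilation above failed):

```python
# Sanity check: flash-attn should import cleanly after the build.
import flash_attn
print(flash_attn.__version__)
```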

You can load the model directly from the Hugging Face model hub using

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("togethercomputer/Llama-2-7B-32K-Instruct")
model = AutoModelForCausalLM.from_pretrained(
    "togethercomputer/Llama-2-7B-32K-Instruct",
    trust_remote_code=True,
    torch_dtype=torch.float16,
)

# Replace the placeholder with your own instruction text.
input_ids = tokenizer.encode("<your instruction>", return_tensors="pt")
output = model.generate(input_ids, max_length=128, temperature=0.7)
output_text = tokenizer.decode(output[0], skip_special_tokens=True)
```

The model is also hosted on the [Together Playground](https://api.together.xyz/playground). You can simply play with the model using a prompt formatted as:

```
[INST] <your instruction here> [/INST]\n\n
```

For example, if we query the model with

```
[INST] Write a poem about cats [/INST]\n\n
```

the model will return a poem that closes with the line "Their charm, a gift, that's forever told."
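
Putting these pieces together, here is a minimal end-to-end sketch of querying the locally loaded model with a prompt in this format. It reuses the `tokenizer` and `model` from the loading example above; the 512-token budget and the decode slicing are illustrative choices, not part of the original recipe.

```python
# Wrap a raw instruction in the [INST] ... [/INST] chat format shown above.
instruction = "Write a poem about cats"
prompt = f"[INST] {instruction} [/INST]\n\n"

input_ids = tokenizer.encode(prompt, return_tensors="pt")
output = model.generate(input_ids, max_length=512, temperature=0.7)

# Drop the prompt tokens so only the model's reply is printed.
reply = tokenizer.decode(output[0][input_ids.shape[1]:], skip_special_tokens=True)
print(reply)
```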

## Model Evaluation

We evaluate the model from three aspects: 1) [Normalized perplexity](https://together.ai/blog/Llama-2-7B-32K) over the [PG19 dataset](https://huggingface.co/datasets/pg19); 2) [Rouge score over BookSum](https://together.ai/blog/Llama-2-7B-32K); and 3) [Accuracy over Multi-document Question Answering (MQA)](https://together.ai/blog/Llama-2-7B-32K). We summarize the results below:

* Normalized Perplexity over PG19

| Model | 2K Seq | 4K Seq | 8K Seq | 16K Seq | 32K Seq |
| -------- | ------- | ------- | ------- | ------- | ------- |
| LLaMA-2-7B-Chat (Meta) | 1.844 | 1.833 | N/A | N/A | N/A |
| Llama-2-7B-32K-Instruct (ours) | 1.813 | 1.798 | 1.781 | 1.778 | 1.772 |

* Rouge Score over BookSum

| Model | Rouge-1 | Rouge-2 | Rouge-L |
| -------- | ------- | ------- | ------- |
| LLaMA-2-7B-Chat (Meta) | 0.055 | 0.008 | 0.046 |
| Llama-2-7B-32K-Instruct (ours) | 0.365 | 0.086 | 0.192 |

* Accuracy over MQA

| Model | 20 docs (Avg 2.9K tokens) | 30 docs (Avg 4.4K tokens) | 50 docs (Avg 7.4K tokens) |
| -------- | ------- | ------- | ------- |
| LLaMA-2-7B-Chat (Meta) | 0.384 | 0.375 | 0.313 |
| Llama-2-7B-32K-Instruct (ours) | 0.451 | 0.434 | 0.373 |

We observe that Llama-2-7B-32K-Instruct achieves perplexity, Rouge scores, and accuracy that are comparable to, and in most cases better than, those of the original LLaMA-2-7B-Chat model.
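
For reference, here is a minimal sketch of how the BookSum Rouge columns above could be recomputed, assuming the `rouge_score` package and parallel lists of generated and reference summaries; it is illustrative and not necessarily the exact harness used for the table.

```python
from rouge_score import rouge_scorer

# Placeholder example pair; in practice these come from BookSum.
predictions = ["The chapter follows the narrator's return home."]
references = ["In this chapter the narrator returns to his childhood home."]

scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"], use_stemmer=True)

# Average F-measure over the corpus, matching the Rouge-1/2/L columns above.
totals = {"rouge1": 0.0, "rouge2": 0.0, "rougeL": 0.0}
for ref, pred in zip(references, predictions):
    scores = scorer.score(ref, pred)
    for key in totals:
        totals[key] += scores[key].fmeasure

averages = {key: value / len(predictions) for key, value in totals.items()}
print(averages)
```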

## Limitations and Bias

As with all language models, Llama-2-7B-32K-Instruct may generate incorrect or biased content. It's important to keep this in mind when using the model.

## Community