---
language:
- ko
library_name: transformers
pipeline_tag: text-generation
license: cc-by-nc-4.0
---
# **V0.3 IS UP**
[Link to V0.3](https://huggingface.co/maywell/Synatra-7B-v0.3-base)
# **Synatra-V0.1-7B**
Made by StableFluffy
[Visit my website! - Currently under construction...](https://www.stablefluffy.kr/)
## License
This model is strictly for [*non-commercial*](https://creativecommons.org/licenses/by-nc/4.0/) (**cc-by-nc-4.0**) use only, which takes priority over the **LLAMA 2 COMMUNITY LICENSE AGREEMENT**.
The "Model" (i.e. the base model and any derivatives, merges, or mixes) is completely free to use for non-commercial purposes, as long as the included **cc-by-nc-4.0** license and the non-commercial use statute remain in any parent repository, regardless of other models' licenses.
The license may change when a new model is released.
## Model Details
**Base Model**
[mistralai/Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1)
**Trained On**
A6000 48GB * 8
## Instruction format
**Due to a mistake during training, `[\INST]` was applied instead of `[/INST]`. This will be fixed in v0.2.**
In order to leverage instruction fine-tuning, your prompt should be surrounded by `[INST]` and `[\INST]` tokens. The very first instruction should begin with the begin-of-sentence token id; subsequent instructions should not. The assistant's generation will be terminated by the end-of-sentence token id.
Also, it is strongly recommended to add a space at the end of the prompt.
E.g.
```
text = "<s>[INST] μμ΄μ λ΄ν΄μ μ
μ μ μλ €μ€. [\INST] "
```
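For reference, the example prompt asks the model to describe Isaac Newton's achievements. The same string can also be assembled programmatically; here is a minimal sketch (the `build_prompt` helper is illustrative, not part of this repository):

```python
def build_prompt(user_message: str) -> str:
    # This checkpoint expects [\INST] (backslash, see the note above)
    # as the closing marker, and a trailing space is strongly recommended.
    return f"<s>[INST] {user_message} [\\INST] "

text = build_prompt("아이작 뉴턴의 업적을 알려줘.")
```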
# **Model Benchmark**
## KULLM Evaluation
Evaluation used the dataset and prompts provided in the KULLM v2 repo.
Since the GPT-4 used at evaluation time is not exactly the same as the current GPT-4, actual results may differ slightly.
| Model | Comprehensibility | Naturalness | Context Maintenance | Interestingness | Instruction Adherence | Overall Quality |
| --- | --- | --- | --- | --- | --- | --- |
| GPT-3.5 | 0.980 | 2.806 | 2.849 | 2.056 | 0.917 | 3.905 |
| GPT-4 | 0.984 | 2.897 | 2.944 | 2.143 | 0.968 | 4.083 |
| KoAlpaca v1.1 | 0.651 | 1.909 | 1.901 | 1.583 | 0.385 | 2.575 |
| koVicuna | 0.460 | 1.583 | 1.726 | 1.528 | 0.409 | 2.440 |
| KULLM v2 | 0.742 | 2.083 | 2.107 | 1.794 | 0.548 | 3.036 |
| **Synatra-V0.1-7B** | **0.960** | **2.821** | **2.755** | **2.356** | **0.934** | **4.065** |
## KOBEST_BOOLQ, SENTINEG, WIC - ZERO_SHOT
BoolQ, SentiNeg, and Wic were measured using [EleutherAI/lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness/tree/polyglot).
HellaSwag and COPA have not been run yet, due to difficulties encountered while modifying the original code.
### NOTE
For BoolQ, the prompts "위 글에 대한 질문의 사실을 확인하는 작업입니다." ("This is a task to verify the facts of a question about the passage above.") and "예, 아니오로 대답해주세요." ("Answer with yes or no.") were added to help the instruction model understand the task.
For SentiNeg, the prompt "위 문장의 긍정, 부정 여부를 판단하세요." ("Judge whether the sentence above is positive or negative.") was added for the same reason.
For Wic, only [INST] and [\INST] were added; a sketch of the prompt assembly follows.
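As a concrete illustration, a zero-shot BoolQ example could be wrapped as in the sketch below. How the passage, question, and the two added prompts are ordered is an assumption here; the exact formatting lives in the modified evaluation harness.

```python
def boolq_prompt(passage: str, question: str) -> str:
    # The task framing and the yes/no instruction are the two prompts
    # quoted in the NOTE above; their placement around the passage and
    # question is assumed for illustration.
    return (
        f"<s>[INST] {passage}\n"
        "위 글에 대한 질문의 사실을 확인하는 작업입니다. "
        f"{question} 예, 아니오로 대답해주세요. [\\INST] "
    )
```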
| Model | COPA | HellaSwag | BoolQ | SentiNeg | Wic |
| --- | --- | --- | --- | --- | --- |
| EleutherAI/polyglot-ko-12.8b | 0.7937 | 0.5954 | 0.4818 | 0.9117 | 0.3985 |
| **Synatra-V0.1-7B** | **NaN** | **NaN** | **0.849** | **0.8690** | **0.4881** |
# **Implementation Code**
Since the chat_template already contains the instruction format above, you can use the code below.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda"  # the device to load the model onto

model = AutoModelForCausalLM.from_pretrained("maywell/Synatra-V0.1-7B")
tokenizer = AutoTokenizer.from_pretrained("maywell/Synatra-V0.1-7B")

messages = [
    {"role": "user", "content": "What is your favourite condiment?"},
]

# apply_chat_template renders the messages with the model's built-in
# instruction template and returns the token ids as a tensor.
encodeds = tokenizer.apply_chat_template(messages, return_tensors="pt")

model_inputs = encodeds.to(device)
model.to(device)

generated_ids = model.generate(model_inputs, max_new_tokens=1000, do_sample=True)
decoded = tokenizer.batch_decode(generated_ids)
print(decoded[0])
```
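Note that a 7B model loaded at the default fp32 precision needs roughly 28 GB of GPU memory. As an optional variant (not part of the original snippet), the weights can be loaded in half precision via the standard `torch_dtype` argument:

```python
import torch
from transformers import AutoModelForCausalLM

# Optional: load the weights in fp16 to roughly halve GPU memory usage.
model = AutoModelForCausalLM.from_pretrained(
    "maywell/Synatra-V0.1-7B",
    torch_dtype=torch.float16,
)
```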
If you run it on oobabooga, your prompt would look like the following (the example asks the model to tell you about Lincoln). **You need to add a space at the end!**
```
[INST] 링컨에 대해서 알려줘. [\INST] 
```
> Readme format: [beomi/llama-2-ko-7b](https://huggingface.co/beomi/llama-2-ko-7b)
---
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_maywell__Synatra-V0.1-7B)
| Metric | Value |
|-----------------------|---------------------------|
| Avg. | 53.54 |
| ARC (25-shot) | 55.29 |
| HellaSwag (10-shot) | 76.63 |
| MMLU (5-shot) | 55.29 |
| TruthfulQA (0-shot) | 55.76 |
| Winogrande (5-shot) | 72.77 |
| GSM8K (5-shot) | 19.41 |
| DROP (3-shot) | 39.63 |