Update README.md

---
base_model:
- aisingapore/SEA-LION-v1-7B
new_version: aisingapore/Gemma-SEA-LION-v3-9B-IT
license: mit
language:
- en
- km
- ta
---

# SEA-LION-v1-7B-IT

SEA-LION is a collection of Large Language Models (LLMs) which have been pretrained and instruct-tuned for the Southeast Asia (SEA) region.
The sizes of the models range from 3 billion to 7 billion parameters.

SEA-LION-v1-7B-IT is a multilingual model which has been fine-tuned with **thousands of English and Indonesian instruction-completion pairs** alongside a smaller pool of instruction-completion pairs from other ASEAN languages.
These instructions have been carefully curated and rewritten to ensure the model was trained on truly open, commercially permissive, and high-quality datasets.

SEA-LION stands for _Southeast Asian Languages In One Network_.

## Model Details

### Base model
We performed instruction tuning in English and Indonesian on our [pre-trained SEA-LION-v1-7B](https://huggingface.co/aisingapore/SEA-LION-v1-7B), a decoder model using the MPT architecture, to create SEA-LION-v1-7B-IT.

### Benchmark Performance
We evaluated SEA-LION-v1-7B-IT on the BHASA benchmark ([arXiv](https://arxiv.org/abs/2309.06085v2) and [GitHub](https://github.com/aisingapore/bhasa)) across a variety of tasks.

BHASA stands out amongst other evaluations for SEA languages for its holistic approach, covering not just traditional Natural Language Processing (NLP) benchmarking tasks (such as sentiment analysis and question answering) but also meticulously handcrafted linguistic and cultural diagnostic tests.

| Model                          | QA (F1)   | Sentiment (F1) | Toxicity (F1) | Eng>Indo (ChrF++) | Indo>Eng (ChrF++) | Summary (ROUGE-L) | NLI (Acc) | Causal (Acc) |
|--------------------------------|-----------|----------------|---------------|-------------------|-------------------|-------------------|-----------|--------------|
| SEA-LION-v1-7B-IT-Research     | 24.86     | 76.13          | 24.45         | 52.50             | 46.82             | 15.44             | 33.20     | 23.80        |
| SEA-LION-v1-7B-IT              | **68.41** | **91.45**      | 17.98         | 57.48             | 58.04             | **17.54**         | 53.10     | 60.80        |
| SeaLLM 7B v1                   | 30.96     | 56.29          | 22.60         | 62.23             | 41.55             | 14.03             | 26.50     | 56.60        |
| SeaLLM 7B v2                   | 44.40     | 80.13          | **55.24**     | 64.01             | **63.28**         | 17.31             | 43.60     | 82.00        |
| Sailor-7B (Base)               | 65.43     | 59.48          | 20.48         | **64.27**         | 60.68             | 8.69              | 15.10     | 38.40        |

- For Natural Language Reasoning (NLR) tasks, we tested the model on Natural Language Inference (`NLI`) using the IndoNLI lay dataset and on Causal Reasoning (`Causal`) using the XCOPA dataset. The metrics are based on accuracy for both tasks.
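
The `ROUGE-L` column above is the F1 score over the longest common subsequence (LCS) between a candidate summary and the reference. A minimal pure-Python sketch of the metric for illustration only — the BHASA harness uses its own scoring implementation, and the tokenization (whitespace split) here is a simplifying assumption:

```python
def lcs_len(a, b):
    # Dynamic-programming longest common subsequence length.
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a, 1):
        for j, y in enumerate(b, 1):
            dp[i][j] = dp[i - 1][j - 1] + 1 if x == y else max(dp[i - 1][j], dp[i][j - 1])
    return dp[len(a)][len(b)]

def rouge_l_f1(candidate, reference):
    # ROUGE-L: F1 of LCS-based precision and recall over whitespace tokens.
    c, r = candidate.split(), reference.split()
    lcs = lcs_len(c, r)
    if lcs == 0:
        return 0.0
    precision, recall = lcs / len(c), lcs / len(r)
    return 2 * precision * recall / (precision + recall)
```

An identical candidate scores 1.0, a candidate sharing no tokens with the reference scores 0.0, and partial overlap falls in between.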

### Usage
SEA-LION-v1-7B-IT can be run using the 🤗 Transformers library:

```python
# Please use transformers==4.37.2

from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("aisingapore/SEA-LION-v1-7B-IT", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("aisingapore/SEA-LION-v1-7B-IT", trust_remote_code=True)

prompt_template = "### USER:\n{human_prompt}\n\n### RESPONSE:\n"
prompt = """Apa sentimen dari kalimat berikut ini?
```
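
The template above wraps a user prompt so that generation continues after the `### RESPONSE:` header. A sketch of assembling the full prompt; the commented-out generation call (which needs the downloaded weights) and the `max_new_tokens` value are illustrative assumptions, not settings from this card:

```python
prompt_template = "### USER:\n{human_prompt}\n\n### RESPONSE:\n"
full_prompt = prompt_template.format(human_prompt="Apa sentimen dari kalimat berikut ini?")

# The assembled prompt ends with the RESPONSE header, so the model continues from there.
print(full_prompt)

# With tokenizer and model loaded as shown above (illustrative settings):
# tokens = tokenizer(full_prompt, return_tensors="pt")
# output = model.generate(tokens["input_ids"], max_new_tokens=64)
# print(tokenizer.decode(output[0], skip_special_tokens=True))
```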

### Commercially Non-Permissive and Commercially Permissive SEA-LION Releases

The previous release of the commercially non-permissive SEA-LION-v1-7B-IT-Research enabled us to explore the full research potential of SEA-LION when allowed to take full advantage of what is publicly available. In contrast, in building the commercially permissive SEA-LION-v1-7B-IT, we had to leave out high-quality instruction data that was either proprietary, restricted by non-commercial licenses, or in a legal gray area, leaving us with a much smaller proportion of commercially permissive data to work with, a problem that is even more pronounced for low-resource languages. We thus hope this will sound a call to action for more initiatives to create commercially viable data in the region, enabling practical benefits for all.

## Technical Specifications

### Fine-Tuning Details
SEA-LION-v1-7B-IT was fine-tuned on 8x A100-40GB GPUs using parameter-efficient fine-tuning in the form of LoRA.
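
LoRA keeps the pretrained weights frozen and trains only a pair of low-rank factors per weight matrix, which is what makes fine-tuning a 7B model tractable on the hardware above. A toy parameter-count sketch; the hidden size and rank below are illustrative assumptions, not SEA-LION's actual configuration:

```python
d_model = 4096   # illustrative hidden size, not the actual SEA-LION value
rank = 8         # illustrative LoRA rank, not stated in this card

full_params = d_model * d_model    # a dense d x d weight update
lora_params = 2 * d_model * rank   # low-rank factors A (r x d) plus B (d x r)

reduction = full_params / lora_params
print(f"full: {full_params:,}  lora: {lora_params:,}  ({reduction:.0f}x fewer trainable params)")
# The frozen base weight W stays fixed; the effective weight is W plus the
# (scaled) product B @ A, so only A and B receive gradients.
```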

## Data
SEA-LION-v1-7B-IT was trained on a wide range of instructions that were manually and stringently verified by our team. A large portion of the effort was dedicated to ensuring that each instruction-completion pair the model sees is of high quality; any errors were corrected and rewritten by native speakers, or else the pair was dropped from our mix.

In addition, special care was taken to ensure that the datasets used had commercially permissive licenses through verification with the original data source.