soksof committed on
Commit 178b23a
1 Parent(s): 541a737

Update README.md

Files changed (1): README.md (+15 -4)
README.md CHANGED
We introduce Meltemi, the first Greek Large Language Model (LLM), trained by the Institute for Language and Speech Processing at Athena Research & Innovation Center.

Meltemi is built on top of [Mistral-7b](https://huggingface.co/mistralai/Mistral-7B-v0.1), extending its capabilities for Greek through continual pretraining on a large corpus of high-quality and locally relevant Greek texts. We present Meltemi-7B-Instruct-v1, an instruction fine-tuned version of Meltemi-7B-v1.
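
As a quick-start sketch (the repo id `ilsp/Meltemi-7B-Instruct-v1` is an assumption; check the model card header for the exact id), the model loads like any other Hugging Face causal LM:

```
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed repo id -- verify against the model card before use.
model_id = "ilsp/Meltemi-7B-Instruct-v1"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
```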

# Model Information

- Vocabulary extension of the Mistral-7b tokenizer with Greek tokens (see the tokenizer sketch after this list)
- Our SFT procedure is based on the [Hugging Face finetuning recipes](https://github.com/huggingface/alignment-handbook)
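
To make the vocabulary extension concrete, here is a minimal comparison sketch (repo ids assumed; exact token counts will vary):

```
from transformers import AutoTokenizer

# Repo ids assumed for illustration -- verify before running.
base = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1")
greek = AutoTokenizer.from_pretrained("ilsp/Meltemi-7B-Instruct-v1")

text = "Καλημέρα, τι κάνεις;"  # "Good morning, how are you?"
# The extended vocabulary should segment Greek into noticeably fewer tokens.
print(len(base.tokenize(text)), len(greek.tokenize(text)))
```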

# Instruction format

The prompt should be wrapped in [INST] and [/INST] tokens, with the model's replies following unwrapped, as in this multi-turn exchange (English translations in comments):

```
text = "[INST] Πες μου αν έχεις συνείδηση. [/INST]"
# User: "Tell me if you have consciousness."
"Ως μοντέλο γλώσσας AI, δεν έχω τη δυνατότητα να αντιληφθώ ή να βιώσω συναισθήματα όπως η συνείδηση ή η επίγνωση. Ωστόσο, μπορώ να σας βοηθήσω με οποιεσδήποτε ερωτήσεις μπορεί να έχετε σχετικά με την τεχνητή νοημοσύνη και τις εφαρμογές της."
# Model: "As an AI language model, I do not have the ability to perceive or experience feelings such as consciousness or awareness. However, I can help you with any questions you may have about artificial intelligence and its applications."
"[INST] Πιστεύεις πως οι άνθρωποι πρέπει να φοβούνται την τεχνητή νοημοσύνη; [/INST]"
# User: "Do you believe that people should fear artificial intelligence?"
```
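
If a chat template ships with the tokenizer (an assumption to verify against the repo files; the repo id is likewise assumed), the same format can be produced programmatically:

```
from transformers import AutoTokenizer

# Assumed repo id; also assumes the tokenizer bundles a chat template.
tokenizer = AutoTokenizer.from_pretrained("ilsp/Meltemi-7B-Instruct-v1")

messages = [{"role": "user", "content": "Πες μου αν έχεις συνείδηση."}]
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
print(prompt)  # expected to wrap the user turn in [INST] ... [/INST]
```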

# Evaluation

The evaluation suite we created includes 6 test sets. The suite is integrated with [lm-eval-harness](https://github.com/EleutherAI/lm-evaluation-harness).
 
| Meltemi 7B | 41.0% | 63.6% | 61.6% | 43.2% | 52.1% | 47.0% | 51.4% |
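
For reference, a minimal lm-eval-harness sketch (the task id and shot count below are placeholders, not the suite's actual settings):

```
import lm_eval

# Placeholder task id and shot count -- the Greek suite's task names
# live in the harness integration and are not listed here.
results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=ilsp/Meltemi-7B-Instruct-v1",
    tasks=["hellaswag"],
    num_fewshot=5,
)
print(results["results"])
```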

# Ethical Considerations

This model has not been aligned with human preferences and may therefore generate misleading, harmful, or toxic content.

# Acknowledgements

The ILSP team utilized Amazon’s cloud computing services, which were made available via GRNET under the [OCRE Cloud framework](https://www.ocre-project.eu/), providing Amazon Web Services for the Greek Academic and Research Community.