xianbao committed
Commit 210dfd0 · verified · 1 Parent(s): 6bb6b51

Update README.md

Files changed (1): README.md (+14 -2)
README.md CHANGED
@@ -6,7 +6,20 @@ extra_gated_fields:
   I agree to use this model for non-commercial use ONLY: checkbox
 ---
 
-<details style="display: inline;"><summary> If you're interested in Yi's adoption of LLaMA architecture and license usage policy, see <span style="color: green;">Yi's relation with LLaMA.</span> ⬇️</summary> <ul> <br>
+
+## 📌 Introduction
+
+- 🤖 The Yi series models are the next generation of open-source large language models trained from scratch by [01.AI](https://01.ai/).
+
+- 🙌 Targeted as a bilingual language model and trained on a 3T-token multilingual corpus, the Yi series models rank among the strongest LLMs worldwide, showing promise in language understanding, commonsense reasoning, reading comprehension, and more. For example:
+
+  - For English language capability, the Yi series models ranked 2nd (just behind GPT-4), outperforming other LLMs (such as LLaMA2-chat-70B, Claude 2, and ChatGPT) on the [AlpacaEval Leaderboard](https://tatsu-lab.github.io/alpaca_eval/) in Dec 2023.
+
+  - For Chinese language capability, the Yi series models landed in 2nd place (following GPT-4), surpassing other LLMs (such as Baidu ERNIE, Qwen, and Baichuan) on the [SuperCLUE](https://www.superclueai.com/) benchmark in Oct 2023.
+
+- 🙏 (Credits to LLaMA) Thanks to the Transformer and LLaMA open-source communities, which reduce the effort required to build from scratch and make it possible to use the same tools within the AI ecosystem.
+
+<details style="display: inline;"><summary> If you're interested in Yi's adoption of LLaMA architecture and license usage policy, see <span style="color: green;">Yi's relation with LLaMA.</span> ⬇️</summary> <ul> <br>
 > 💡 TL;DR
 >
 > The Yi series models adopt the same model architecture as LLaMA but are **NOT** derivatives of LLaMA.
@@ -24,4 +37,3 @@ extra_gated_fields:
 - Developing in a unique and proprietary way, Yi has independently created its own high-quality training datasets, efficient training pipelines, and robust training infrastructure entirely from the ground up. This effort has led to excellent performance with Yi series models ranking just behind GPT-4 and surpassing LLaMA on the [Alpaca Leaderboard in Dec 2023](https://tatsu-lab.github.io/alpaca_eval/).
 </ul>
 </details>
-
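
The introduction added in this commit says Yi adopts the LLaMA architecture and can be used with the same ecosystem tooling. A minimal sketch of what that looks like in practice, assuming the Hugging Face `transformers` library and a hypothetical `01-ai/Yi-34B` checkpoint id (neither is specified in this commit):

```python
# Minimal sketch of loading a Yi checkpoint with standard Hugging Face tooling.
# The repo id, dtype, and device settings are illustrative assumptions, not taken
# from this commit; older checkpoints may additionally need trust_remote_code=True.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "01-ai/Yi-34B"  # assumed checkpoint name

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",   # place weights on available GPUs automatically
    torch_dtype="auto",  # keep the dtype stored in the checkpoint
)

prompt = "There's a place where time stands still."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Because the architecture matches LLaMA, generation, quantization, and fine-tuning tools built for LLaMA-style models should generally apply as well.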