lbourdois committed · Commit cc48847 · verified · 1 Parent(s): 317b7eb

Improve language tag


Hi! As the model is multilingual, this PR adds languages other than English to the language tag to improve referencing. Note that 29 languages are announced in the README, but only 13 are explicitly listed, so I was only able to add those 13 languages.

Files changed (1)
  1. README.md +52 -40
README.md CHANGED
@@ -1,40 +1,52 @@
- ---
- license: apache-2.0
- license_link: https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct/blob/main/LICENSE
- language:
- - en
- pipeline_tag: text-generation
- base_model: Qwen/Qwen2.5-0.5B
- tags:
- - chat
- - rl-swarm
- - gensyn
- library_name: transformers
- ---
-
- # Qwen2.5-0.5B-Instruct
-
- ## Introduction
- This model is intended for use in the [Gensyn RL Swarm](https://www.gensyn.ai/articles/rl-swarm), to finetune locally using peer-to-peer reinforcement learning post-training.
-
- Once finetuned, the model can be used as normal in any workflow, for details on how to do this please refer to the [original model documentation](https://qwen.readthedocs.io/en/latest/).
-
- For more details on the original model, please refer to the original repository [here](https://huggingface.co/Qwen/Qwen2.5-0.5B-Instruct).
-
- This repo contains an **unmodified version** of the instruction-tuned 0.5B Qwen2.5 model, which has the following features:
- - Type: Causal Language Models
- - Training Stage: Pretraining & Post-training
- - Architecture: transformers with RoPE, SwiGLU, RMSNorm, Attention QKV bias and tied word embeddings
- - Number of Parameters: 0.49B
- - Number of Paramaters (Non-Embedding): 0.36B
- - Number of Layers: 24
- - Number of Attention Heads (GQA): 14 for Q and 2 for KV
- - Context Length: Full 32,768 tokens and generation 8192 tokens
-
- ## Requirements
-
- This model is intended for use in the [Gensyn RL Swarm](https://www.gensyn.ai/articles/rl-swarm) system, for details on model requirements when using outside of a swarm, refer to the original Qwen repo [here](https://huggingface.co/Qwen/Qwen2.5-0.5B-Instruct).
-
- ## Quickstart
-
- To deploy this model into a swarm and/or participate in the Gensyn Testnet, follow the instructions in the [RL Swarm repository](https://github.com/gensyn-ai/rl-swarm), read about the [testnet](https://www.gensyn.ai/testnet), read the [RL Swarm overview](https://www.gensyn.ai/articles/rl-swarm), and/or read the [RL Swarm technical report](https://github.com/gensyn-ai/paper-rl-swarm/blob/main/latest.pdf).
+ ---
+ license: apache-2.0
+ license_link: https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct/blob/main/LICENSE
+ language:
+ - zho
+ - eng
+ - fra
+ - spa
+ - por
+ - deu
+ - ita
+ - rus
+ - jpn
+ - kor
+ - vie
+ - tha
+ - ara
+ pipeline_tag: text-generation
+ base_model: Qwen/Qwen2.5-0.5B
+ tags:
+ - chat
+ - rl-swarm
+ - gensyn
+ library_name: transformers
+ ---
+
+ # Qwen2.5-0.5B-Instruct
+
+ ## Introduction
+ This model is intended for use in the [Gensyn RL Swarm](https://www.gensyn.ai/articles/rl-swarm), to finetune locally using peer-to-peer reinforcement learning post-training.
+
+ Once finetuned, the model can be used as normal in any workflow, for details on how to do this please refer to the [original model documentation](https://qwen.readthedocs.io/en/latest/).
+
+ For more details on the original model, please refer to the original repository [here](https://huggingface.co/Qwen/Qwen2.5-0.5B-Instruct).
+
+ This repo contains an **unmodified version** of the instruction-tuned 0.5B Qwen2.5 model, which has the following features:
+ - Type: Causal Language Models
+ - Training Stage: Pretraining & Post-training
+ - Architecture: transformers with RoPE, SwiGLU, RMSNorm, Attention QKV bias and tied word embeddings
+ - Number of Parameters: 0.49B
+ - Number of Paramaters (Non-Embedding): 0.36B
+ - Number of Layers: 24
+ - Number of Attention Heads (GQA): 14 for Q and 2 for KV
+ - Context Length: Full 32,768 tokens and generation 8192 tokens
+
+ ## Requirements
+
+ This model is intended for use in the [Gensyn RL Swarm](https://www.gensyn.ai/articles/rl-swarm) system, for details on model requirements when using outside of a swarm, refer to the original Qwen repo [here](https://huggingface.co/Qwen/Qwen2.5-0.5B-Instruct).
+
+ ## Quickstart
+
+ To deploy this model into a swarm and/or participate in the Gensyn Testnet, follow the instructions in the [RL Swarm repository](https://github.com/gensyn-ai/rl-swarm), read about the [testnet](https://www.gensyn.ai/testnet), read the [RL Swarm overview](https://www.gensyn.ai/articles/rl-swarm), and/or read the [RL Swarm technical report](https://github.com/gensyn-ai/paper-rl-swarm/blob/main/latest.pdf).
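As an aside on the metadata change itself: the 13 entries added to the `language:` list are three-letter ISO 639 identifiers. A minimal sanity check of the added codes (a hypothetical illustration, not part of this PR) could look like:

```python
# Hypothetical sanity check for the language codes added in this PR.
# It only verifies the *shape* of each entry (three lowercase letters),
# not membership in the official ISO 639 registry.
ADDED_CODES = [
    "zho", "eng", "fra", "spa", "por", "deu", "ita",
    "rus", "jpn", "kor", "vie", "tha", "ara",
]

def looks_like_iso639_3(code: str) -> bool:
    """Return True if `code` has the shape of a three-letter ISO 639 code."""
    return len(code) == 3 and code.isalpha() and code.islower()

# The PR description says 13 of the announced 29 languages are listed.
assert len(ADDED_CODES) == 13
assert all(looks_like_iso639_3(code) for code in ADDED_CODES)
assert len(set(ADDED_CODES)) == len(ADDED_CODES)  # no duplicates
```

Note that Hub metadata also accepts two-letter ISO 639-1 codes (the original front matter used `en`), so either form works for the `language:` field.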