azizdh00 committed on
Commit 9eede4f · verified · 1 Parent(s): d7f1f01

Update README.md

Files changed (1)
  1. README.md +2 -20
README.md CHANGED

@@ -1,6 +1,5 @@
 ---
 library_name: transformers
-license: apache-2.0
 base_model: Qwen/Qwen3-0.6B-Base
 tags:
 - generated_from_trainer
@@ -14,21 +13,8 @@ should probably proofread and complete it, then remove this comment. -->
 
 # MNLP_M2_rag_model
 
-This model is a fine-tuned version of [Qwen/Qwen3-0.6B-Base](https://huggingface.co/Qwen/Qwen3-0.6B-Base) on the None dataset.
+This model is a fine-tuned version of [Qwen/Qwen3-0.6B-Base](https://huggingface.co/Qwen/Qwen3-0.6B-Base)
 
-## Model description
-
-More information needed
-
-## Intended uses & limitations
-
-More information needed
-
-## Training and evaluation data
-
-More information needed
-
-## Training procedure
 
 ### Training hyperparameters
 
@@ -45,13 +31,9 @@ The following hyperparameters were used during training:
 - lr_scheduler_warmup_steps: 500
 - num_epochs: 3
 
-### Training results
-
-
-
 ### Framework versions
 
 - Transformers 4.52.3
 - Pytorch 2.7.0
 - Datasets 3.6.0
-- Tokenizers 0.21.1
+- Tokenizers 0.21.1
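
For context, the card's metadata (`library_name: transformers`, base model [Qwen/Qwen3-0.6B-Base](https://huggingface.co/Qwen/Qwen3-0.6B-Base)) implies the checkpoint loads with the standard `transformers` causal-LM API. A minimal sketch, assuming the fine-tuned weights are published on the Hub under a repo id such as `azizdh00/MNLP_M2_rag_model` (hypothetical; substitute the actual repo id):

```python
# Minimal loading/generation sketch based on the model card's metadata.
# The repo id below is an assumption for illustration, not confirmed by this diff.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "azizdh00/MNLP_M2_rag_model"  # hypothetical Hub path; replace as needed

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id)

prompt = "Question: What does RAG stand for?\nAnswer:"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```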