jmprcp committed (verified)
Commit 2167bd3 · 1 parent: c100bcc

Update README.md

Files changed (1)
  1. README.md +9 -54
README.md CHANGED
@@ -1,65 +1,20 @@
  ---
  library_name: transformers
- license: apache-2.0
+ license: other
  base_model: Qwen/Qwen2.5-7B-Instruct
- tags:
- - alignment-handbook
- - generated_from_trainer
- model-index:
- - name: trained_prometheus
-   results: []
  ---

  <!-- This model card has been generated automatically according to the information the Trainer had access to. You
  should probably proofread and complete it, then remove this comment. -->

- # trained_prometheus
+ # M-Prometheus

- This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) on the None dataset.
- It achieves the following results on the evaluation set:
- - Loss: nan
+ M-Prometheus is a suite of open LLM judges that can natively evaluate multilingual outputs. They were trained on 480k instances of multilingual direct assessment and pairwise comparison data with long-form feedback.
+ They can be prompted in the same way as [Prometheus-2](https://huggingface.co/prometheus-eval/prometheus-7b-v2.0/tree/main).
+ Check out our [paper](wip) for more details.

- ## Model description
+ ## Citation

- More information needed
-
- ## Intended uses & limitations
-
- More information needed
-
- ## Training and evaluation data
-
- More information needed
-
- ## Training procedure
-
- ### Training hyperparameters
-
- The following hyperparameters were used during training:
- - learning_rate: 1e-06
- - train_batch_size: 2
- - eval_batch_size: 1
- - seed: 42
- - distributed_type: multi-GPU
- - num_devices: 8
- - gradient_accumulation_steps: 2
- - total_train_batch_size: 32
- - total_eval_batch_size: 8
- - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- - lr_scheduler_type: cosine
- - lr_scheduler_warmup_ratio: 0.1
- - num_epochs: 1.0
-
- ### Training results
-
- | Training Loss | Epoch | Step | Validation Loss |
- |:-------------:|:------:|:-----:|:---------------:|
- | 1.8053 | 1.0000 | 14840 | nan |
-
-
- ### Framework versions
-
- - Transformers 4.46.0
- - Pytorch 2.1.2+cu121
- - Datasets 3.3.1
- - Tokenizers 0.20.3
+ ```bibtex
+ wip
+ ```
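
The updated card states that these judges take the same prompts as Prometheus-2, so a minimal direct-assessment sketch with the standard `transformers` API might look like the following. The checkpoint id, the example instruction/response, and the exact rubric wording are illustrative assumptions, not taken from this commit.

```python
# Hypothetical usage sketch: prompting an M-Prometheus judge for direct assessment
# in the Prometheus-2 style. The model id below is a placeholder, not a confirmed repo name.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "M-Prometheus-7B"  # assumption: replace with the actual checkpoint id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Prometheus-style absolute grading: instruction, response to evaluate, a 1-5 rubric,
# and a request for long-form feedback followed by "[RESULT] <score>".
prompt = (
    "###Task Description:\n"
    "An instruction, a response to evaluate, and a score rubric are given.\n"
    "1. Write detailed feedback assessing the response strictly based on the rubric.\n"
    "2. After the feedback, give an integer score between 1 and 5.\n"
    '3. Output format: "Feedback: (feedback) [RESULT] (1-5)"\n\n'
    "###The instruction to evaluate:\nTranslate 'Guten Morgen, wie geht es dir?' into English.\n\n"
    "###Response to evaluate:\nGood morning, how are you?\n\n"
    "###Score Rubrics:\nIs the translation accurate, complete, and fluent?\n\n"
    "###Feedback:"
)

# The base model is chat-tuned (Qwen2.5-7B-Instruct), so wrap the prompt with its chat template.
messages = [{"role": "user", "content": prompt}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=512, do_sample=False)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

The pairwise-comparison setting mentioned in the card would follow the same pattern with two candidate responses, per the Prometheus-2 templates linked above.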