pszemraj committed
Commit 27574a6 · verified · 1 Parent(s): 7e5a41b

Update README.md
Files changed (1): README.md +3 -15
README.md CHANGED
@@ -2,8 +2,6 @@
  library_name: transformers
  license: mit
  base_model: roberta-base
- tags:
- - generated_from_trainer
  metrics:
  - accuracy
  model-index:
@@ -15,28 +13,18 @@ language:
  - en
  ---

- <!-- This model card has been generated automatically according to the information the Trainer had access to. You
- should probably proofread and complete it, then remove this comment. -->
-
- # roberta-base-unified-mcqa-v2
-
- This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
+ # roberta-base-unified-mcqa: 4-choice
+
+ This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the [unified-mcqa](https://huggingface.co/datasets/pszemraj/unified-mcqa) dataset (4-choice config).
  It achieves the following results on the evaluation set:
  - Loss: 0.5534
  - Accuracy: 0.8030
  - Num Input Tokens Seen: 2785906024

- ## Model description
-
- More information needed
-
  ## Intended uses & limitations

- More information needed
-
- ## Training and evaluation data
-
- More information needed
+ The goal is to see whether training on general MCQ data (a) helps GLUE evals and (b) yields a better base model than just the MLM output.

  ## Training procedure
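For reference, a minimal usage sketch for a 4-choice `AutoModelForMultipleChoice` checkpoint like the one this card describes. The repo id and example inputs below are assumptions for illustration, not part of the commit:

```python
# Minimal sketch, assuming the checkpoint is published under the card's title.
import torch
from transformers import AutoModelForMultipleChoice, AutoTokenizer

repo_id = "pszemraj/roberta-base-unified-mcqa-v2"  # hypothetical repo id
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForMultipleChoice.from_pretrained(repo_id)

question = "What is the capital of France?"
choices = ["Berlin", "Madrid", "Paris", "Rome"]  # 4-choice config

# A multiple-choice head scores each (question, choice) pair jointly,
# so the question is repeated once per candidate answer.
enc = tokenizer([question] * len(choices), choices, padding=True, return_tensors="pt")
# Reshape to (batch=1, num_choices=4, seq_len) as the model expects.
inputs = {k: v.unsqueeze(0) for k, v in enc.items()}

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, 4)

print(choices[logits.argmax(dim=-1).item()])
```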