asadfgglie committed
Commit b06b14f · verified · 1 parent: 103c71e

Update README.md

Files changed (1)
  1. README.md +11 -1
README.md CHANGED
@@ -7,6 +7,12 @@ metrics:
  model-index:
  - name: mDeBERTa-v3-base-xnli-multilingual-zeroshot-v1.1-seed20241201
    results: []
+ datasets:
+ - asadfgglie/nli-zh-tw-all
+ - asadfgglie/BanBan_2024-10-17-facial_expressions-nli
+ language:
+ - zh
+ pipeline_tag: zero-shot-classification
  ---
 
  <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -14,6 +20,10 @@ should probably proofread and complete it, then remove this comment. -->
 
  # mDeBERTa-v3-base-xnli-multilingual-zeroshot-v1.1-seed20241201
 
+ This model uses the same hyperparameters as [asadfgglie/mDeBERTa-v3-base-xnli-multilingual-zeroshot-v1.1](https://huggingface.co/asadfgglie/mDeBERTa-v3-base-xnli-multilingual-zeroshot-v1.1), except for `RANDOM_SEED`.
+
+ The original version uses `RANDOM_SEED=42`; this version uses `RANDOM_SEED=20241201`.
+
  This model is a fine-tuned version of [asadfgglie/mDeBERTa-v3-base-xnli-multilingual-zeroshot-v1.0](https://huggingface.co/asadfgglie/mDeBERTa-v3-base-xnli-multilingual-zeroshot-v1.0) on the None dataset.
  It achieves the following results on the evaluation set:
  - Loss: 0.6134
@@ -89,4 +99,4 @@ The following hyperparameters were used during training:
  - Transformers 4.33.3
  - Pytorch 2.5.1+cu121
  - Datasets 2.14.7
- - Tokenizers 0.13.3
+ - Tokenizers 0.13.3
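
Since the commit adds `pipeline_tag: zero-shot-classification` to the front matter, the updated card implies the checkpoint is driven through the standard Transformers zero-shot pipeline. A minimal usage sketch, assuming the repository name from this card; the Chinese input sentence and candidate labels are illustrative only and do not come from the commit:

```python
from transformers import pipeline

# Load the checkpoint documented by this commit via the pipeline tag
# declared in the README front matter (zero-shot-classification).
classifier = pipeline(
    "zero-shot-classification",
    model="asadfgglie/mDeBERTa-v3-base-xnli-multilingual-zeroshot-v1.1-seed20241201",
)

# Illustrative Traditional Chinese example; labels are hypothetical.
result = classifier(
    "今天心情很好,想出去走走。",
    candidate_labels=["開心", "生氣", "難過"],
)
print(result["labels"][0], result["scores"][0])  # top label and its score
```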
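The training script itself is not part of this commit, so where `RANDOM_SEED` is applied is an assumption; with the Transformers version listed in the card, the conventional way to pin the run would be something like:

```python
from transformers import set_seed

# Assumption: the card only states RANDOM_SEED=20241201; the actual training
# code is not shown here. set_seed seeds Python's random, NumPy, and PyTorch
# (CPU and CUDA) generators before training starts.
RANDOM_SEED = 20241201
set_seed(RANDOM_SEED)
```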