ankur310794 committed
Commit 28b532d · 1 Parent(s): d0a63c1
Files changed (1)
  1. README.md +5 -4
README.md CHANGED
@@ -11,9 +11,9 @@ model-index:
 <!-- This model card has been generated automatically according to the information Keras had access to. You should
 probably proofread and complete it, then remove this comment. -->
 
-# dpr-ctx_encoder_bert_uncased_L-12_H-128_A-2
+# dpr-ctx_encoder_bert_uncased_L-2_H-128_A-2
 
-This model(google/bert_uncased_L-12_H-128_A-2) was trained from scratch on training data: data.retriever.nq-adv-hn-train(facebookresearch/DPR).
+This model(google/bert_uncased_L-2_H-128_A-2) was trained from scratch on training data: data.retriever.nq-adv-hn-train(facebookresearch/DPR).
 It achieves the following results on the evaluation set:
 
 
@@ -23,14 +23,15 @@ evaluation dataset: facebook-dpr-dev-dataset from official DPR github
 
 |model_name|data_name|num of queries|num of passages|R@10|R@20|R@50|R@100|R@100|
 |---|---|---|---|---|---|---|---|---|
-|nlpconnect/dpr-ctx_encoder_bert_uncased_L-12_H-128_A-2(our)|nq-dev dataset|6445|199795|60.53%|68.28%|76.07%|80.98%|91.45%|
+|nlpconnect/dpr-ctx_encoder_bert_uncased_L-2_H-128_A-2(our)|nq-dev dataset|6445|199795|60.53%|68.28%|76.07%|80.98%|91.45%|
+|nlpconnect/dpr-ctx_encoder_bert_uncased_L-12_H-128_A-2(our)|nq-dev dataset|6445|199795|65.43%|71.99%|79.03%|83.24%|92.11%|
 |*facebook/dpr-ctx_encoder-single-nq-base(hf/fb)|nq-dev dataset|6445|199795|40.94%|49.27%|59.05%|66.00%|82.00%|
 
 evaluation dataset: UKPLab/beir test data but we have used first 2lac passage only.
 
 |model_name|data_name|num of queries|num of passages|R@10|R@20|R@50|R@100|R@100|
 |---|---|---|---|---|---|---|---|---|
-|nlpconnect/dpr-ctx_encoder_bert_uncased_L-12_H-128_A-2(our)|nq-test dataset|3452|200001|49.68%|59.06%|69.40%|75.75%|89.28%|
+|nlpconnect/dpr-ctx_encoder_bert_uncased_L-2_H-128_A-2(our)|nq-test dataset|3452|200001|49.68%|59.06%|69.40%|75.75%|89.28%|
 |*facebook/dpr-ctx_encoder-single-nq-base(hf/fb)|nq-test dataset|3452|200001|32.93%|43.74%|56.95%|66.30%|83.92%|
 
 Note: * means we have evaluated on same eval dataset.
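For context, the R@k columns in the tables above are standard top-k retrieval recall: the fraction of queries for which at least one gold passage appears in the model's top k retrieved passages. A minimal sketch of that metric (toy passage IDs for illustration; this is not the actual DPR/beir evaluation harness):

```python
def recall_at_k(ranked_ids, gold_ids, k):
    """Fraction of queries whose top-k retrieved passages contain
    at least one gold (relevant) passage.

    ranked_ids: per-query lists of passage IDs, best-first.
    gold_ids:   per-query lists of relevant passage IDs.
    """
    hits = sum(
        1 for ranked, gold in zip(ranked_ids, gold_ids)
        if set(ranked[:k]) & set(gold)  # any gold passage in the top k?
    )
    return hits / len(ranked_ids)

# Toy example with two queries: query 0 finds its gold passage at rank 1,
# query 1 only at rank 3.
ranked = [["p1", "p9", "p3"], ["p4", "p5", "p7"]]
gold = [["p1"], ["p7"]]
print(recall_at_k(ranked, gold, 2))  # 0.5
print(recall_at_k(ranked, gold, 3))  # 1.0
```

Under this definition, a larger k can only keep recall the same or raise it, which is why each row's percentages increase left to right.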