GenRead (MergeDPR): FiD model trained on NQ
-- This is the model checkpoint of GenRead [2], based on T5-3B and trained on the NQ dataset [1]; a loading sketch follows below.
-- Hyperparameters: 8 x 80GB A100 GPUs; batch size 16; AdamW optimizer; learning rate 3e-5; best dev performance at 19,000 steps.
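For illustration, here is a minimal loading sketch. It assumes the checkpoint follows the FiD format and is loaded with the FiDT5 class from the public FiD codebase (facebookresearch/FiD); the tokenizer name and local checkpoint path are placeholders, not confirmed by this card.

```python
# Minimal sketch, assuming a FiD-format checkpoint and the FiDT5 class from the
# public FiD codebase (https://github.com/facebookresearch/FiD); the checkpoint
# path below is a placeholder.
import transformers
from src.model import FiDT5  # model class defined in the FiD repository

tokenizer = transformers.T5Tokenizer.from_pretrained("t5-3b")
model = FiDT5.from_pretrained("path/to/genread_nq_checkpoint")
model.eval()

# FiD concatenates the question with each (generated) context; inputs are shaped
# (batch, n_contexts, sequence_length) before being passed to the encoder.
```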
References:
[1] Natural Questions: A Benchmark for Question Answering Research. TACL 2019.
[2] Generate rather than Retrieve: Large Language Models are Strong Context Generators. arXiv 2022.
Model performance
We evaluate the model on the TriviaQA dataset; it achieves an exact-match (EM) score of 54.38.
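For reference, the EM metric is typically computed with SQuAD-style answer normalization. The sketch below shows that computation on toy data; the predictions and gold answers are illustrative, not from this evaluation.

```python
# Sketch of the standard exact-match (EM) metric with SQuAD-style normalization.
import re
import string

def normalize_answer(s: str) -> str:
    """Lowercase, strip punctuation and articles, collapse whitespace."""
    s = s.lower()
    s = "".join(ch for ch in s if ch not in set(string.punctuation))
    s = re.sub(r"\b(a|an|the)\b", " ", s)
    return " ".join(s.split())

def exact_match(prediction: str, gold_answers: list[str]) -> bool:
    """A prediction scores 1 if it matches any acceptable gold answer."""
    return any(normalize_answer(prediction) == normalize_answer(g) for g in gold_answers)

# Toy example: corpus-level EM is the mean over questions, reported as a percentage.
preds = [("barack obama", ["Barack Obama", "Obama"]), ("paris", ["London"])]
em = 100.0 * sum(exact_match(p, golds) for p, golds in preds) / len(preds)
print(f"EM = {em:.2f}")  # 50.00 on this toy example
```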

License: CC-BY-4.0