RoBERTa Large trained on the Social Interaction QA (Social IQa) dataset using HuggingFace's script for training multiple-choice QA models. The model was trained for the EACL 2023 paper "MetaQA: Combining Expert Agents for Multi-Skill Question Answering" (https://arxiv.org/abs/2112.01922). The average performance of five models trained with different random seeds on the test set is 74.17 ± 0.64.
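
Below is a minimal usage sketch, not part of the original card: it assumes the standard transformers AutoModelForMultipleChoice API and uses an illustrative Social IQa-style example, so the exact preprocessing may differ from the training script.

```python
import torch
from transformers import AutoTokenizer, AutoModelForMultipleChoice

model_id = "haritzpuerto/roberta_large_social_i_qa"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForMultipleChoice.from_pretrained(model_id)

# Illustrative Social IQa-style example: the context + question is paired with each answer candidate.
context = "Tracy didn't go home that evening and resisted Riley's attacks."
question = "What does Tracy need to do before this?"
candidates = ["make a new plan", "go home and see Riley", "find somewhere to go"]

first_segments = [f"{context} {question}"] * len(candidates)
encoding = tokenizer(first_segments, candidates, return_tensors="pt", padding=True, truncation=True)

# Multiple-choice models expect inputs of shape (batch_size, num_choices, seq_len).
inputs = {k: v.unsqueeze(0) for k, v in encoding.items()}

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, num_choices)

predicted_answer = candidates[logits.argmax(dim=-1).item()]
print(predicted_answer)
```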

Note: the model cannot be deployed to the HF Inference API, which does not support multiple-choice models from the transformers library.

Dataset used for training: Social IQa (social_i_qa)