XLM-RoBERTa for Spanish Metaphor Detection

This model is a fine-tuned version of XLM-RoBERTa-large on the CoMeta dataset for token-level metaphor detection in Spanish. The model is presented in our paper Leveraging a New Spanish Corpus for Multilingual and Cross-lingual Metaphor Detection.

Training & Testing Data

CoMeta

Training Hyperparameters

  • Batch size: 8
  • Weight Decay: 0.01
  • Learning Rate: 0.00002
  • Epochs: 4
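The hyperparameters above map directly onto Hugging Face `TrainingArguments`. A minimal sketch of an equivalent configuration (the argument names are standard `transformers` parameters; `output_dir` and the script itself are illustrative, since the original training code is not included in this card):

```python
from transformers import TrainingArguments

# Mirrors the hyperparameters listed above; output_dir is a placeholder.
training_args = TrainingArguments(
    output_dir="xlmr-large-metaphor-es",
    per_device_train_batch_size=8,
    weight_decay=0.01,
    learning_rate=2e-5,   # 0.00002
    num_train_epochs=4,
)
```

These arguments would typically be passed to a `Trainer` together with a token-classification model and the CoMeta splits.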

Results

  • F1: 67.44
  • Precision: 75.57
  • Recall: 60.88

Label Dictionary

{
  "LABEL_0": "B-METAPHOR",
  "LABEL_1": "I-METAPHOR",
  "LABEL_2": "O"
}
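The model emits one of these labels per token. A minimal sketch of grouping BIO-tagged predictions into metaphor spans (the `decode_spans` helper and the example tokens are illustrative, not part of the released code):

```python
# Label dictionary from the card
ID2LABEL = {"LABEL_0": "B-METAPHOR", "LABEL_1": "I-METAPHOR", "LABEL_2": "O"}

def decode_spans(tokens, labels):
    """Group BIO-tagged tokens into contiguous metaphor spans."""
    spans, current = [], []
    for tok, lab in zip(tokens, labels):
        tag = ID2LABEL[lab]
        if tag == "B-METAPHOR":
            if current:                 # close any open span first
                spans.append(" ".join(current))
            current = [tok]
        elif tag == "I-METAPHOR" and current:
            current.append(tok)         # continue the open span
        else:
            if current:
                spans.append(" ".join(current))
            current = []
    if current:                         # flush a span ending at the sequence end
        spans.append(" ".join(current))
    return spans

tokens = ["el", "tiempo", "vuela", "hoy"]
labels = ["LABEL_2", "LABEL_2", "LABEL_0", "LABEL_2"]
print(decode_spans(tokens, labels))  # -> ['vuela']
```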

Citation

If you use this model, please cite our work:

@inproceedings{sanchez-bayona-agerri-2022-leveraging,
    title = "Leveraging a New {S}panish Corpus for Multilingual and Cross-lingual Metaphor Detection",
    author = "Sanchez-Bayona, Elisa  and
      Agerri, Rodrigo",
    editor = "Fokkens, Antske  and
      Srikumar, Vivek",
    booktitle = "Proceedings of the 26th Conference on Computational Natural Language Learning (CoNLL)",
    month = dec,
    year = "2022",
    address = "Abu Dhabi, United Arab Emirates (Hybrid)",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.conll-1.16",
    doi = "10.18653/v1/2022.conll-1.16",
    pages = "228--240",
    abstract = "The lack of wide coverage datasets annotated with everyday metaphorical expressions for languages other than English is striking. This means that most research on supervised metaphor detection has been published only for that language. In order to address this issue, this work presents the first corpus annotated with naturally occurring metaphors in Spanish large enough to develop systems to perform metaphor detection. The presented dataset, CoMeta, includes texts from various domains, namely, news, political discourse, Wikipedia and reviews. In order to label CoMeta, we apply the MIPVU method, the guidelines most commonly used to systematically annotate metaphor on real data. We use our newly created dataset to provide competitive baselines by fine-tuning several multilingual and monolingual state-of-the-art large language models. Furthermore, by leveraging the existing VUAM English data in addition to CoMeta, we present the, to the best of our knowledge, first cross-lingual experiments on supervised metaphor detection. Finally, we perform a detailed error analysis that explores the seemingly high transfer of everyday metaphor across these two languages and datasets.",
}

Model Card Contact

{elisa.sanchez, rodrigo.agerri}@ehu.eus
