
relbert/roberta-large-semeval2012-mask-prompt-d-loob

RelBERT fine-tuned from roberta-large on
relbert/semeval2012_relational_similarity. Fine-tuning is done via the RelBERT library (see the repository for more details). It achieves the following results on relation-understanding tasks:

  • Analogy Question (dataset, full result):
    • Accuracy on SAT (full): 0.7058823529411765
    • Accuracy on SAT: 0.7002967359050445
    • Accuracy on BATS: 0.8121178432462479
    • Accuracy on U2: 0.6973684210526315
    • Accuracy on U4: 0.6550925925925926
    • Accuracy on Google: 0.944
  • Lexical Relation Classification (dataset, full result):
    • Micro F1 score on BLESS: 0.9278288383305711
    • Micro F1 score on CogALexV: 0.8809859154929578
    • Micro F1 score on EVALution: 0.7177681473456122
    • Micro F1 score on K&H+N: 0.9682131181748627
    • Micro F1 score on ROOT09: 0.914133500470072
  • Relation Mapping (dataset, full result):
    • Accuracy on Relation Mapping: 0.8978174603174603

Usage

This model can be used through the relbert library. Install the library via pip

pip install relbert

and load the model as below.

from relbert import RelBERT
model = RelBERT("relbert/roberta-large-semeval2012-mask-prompt-d-loob")
# embed the relation between a word pair; returns a 1024-dimensional vector
vector = model.get_embedding(['Tokyo', 'Japan'])  # shape (1024,)
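Relation embeddings obtained this way are typically compared with cosine similarity, e.g. to score analogy candidates. A minimal sketch using plain NumPy; the vectors below are random stand-ins for actual `model.get_embedding` outputs, so the setup is illustrative only:

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two relation-embedding vectors."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Stand-in 1024-d vectors; in practice these would come from
# model.get_embedding(['Tokyo', 'Japan']) and candidate pairs.
rng = np.random.default_rng(0)
v_query = rng.normal(size=1024)
v_candidate_a = v_query + 0.1 * rng.normal(size=1024)  # near-identical relation
v_candidate_b = rng.normal(size=1024)                  # unrelated relation

scores = {
    'a': cosine_similarity(v_query, v_candidate_a),
    'b': cosine_similarity(v_query, v_candidate_b),
}
best = max(scores, key=scores.get)  # candidate with the closest relation
```

With real embeddings, the candidate pair whose relation embedding is closest to the query's would be picked as the analogy answer.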

Training hyperparameters

The following hyperparameters were used during training:

  • model: roberta-large
  • max_length: 64
  • mode: mask
  • data: relbert/semeval2012_relational_similarity
  • template_mode: manual
  • template: I wasn’t aware of this relationship, but I just read in the encyclopedia that is the
  • loss_function: info_loob
  • temperature_nce_constant: 0.05
  • temperature_nce_rank: {'min': 0.01, 'max': 0.05, 'type': 'linear'}
  • epoch: 22
  • batch: 128
  • lr: 5e-06
  • lr_decay: False
  • lr_warmup: 1
  • weight_decay: 0
  • random_seed: 0
  • exclude_relation: None
  • n_sample: 640
  • gradient_accumulation: 8
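Two of the settings above combine: with batch 128 and gradient_accumulation 8, each optimizer update effectively sees 128 × 8 = 1024 examples, and temperature_nce_rank describes a linear schedule between 0.01 and 0.05. A small sketch of that arithmetic; the function name and rank normalization are illustrative, not the library's internal API:

```python
def rank_temperature(rank, n_ranks, t_min=0.01, t_max=0.05):
    """Linearly interpolate a temperature for a given rank
    (rank 0 -> t_min, rank n_ranks - 1 -> t_max), mirroring the
    {'min': 0.01, 'max': 0.05, 'type': 'linear'} setting above."""
    if n_ranks <= 1:
        return t_min
    frac = rank / (n_ranks - 1)
    return t_min + frac * (t_max - t_min)

# batch * gradient_accumulation = examples per optimizer update
effective_batch = 128 * 8
```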

The full configuration can be found in the fine-tuning parameter file.

Reference

If you use any resource from RelBERT, please consider citing our paper.


@inproceedings{ushio-etal-2021-distilling-relation-embeddings,
    title = "{D}istilling {R}elation {E}mbeddings from {P}re-trained {L}anguage {M}odels",
    author = "Ushio, Asahi  and
      Schockaert, Steven  and
      Camacho-Collados, Jose",
    booktitle = "EMNLP 2021",
    year = "2021",
    address = "Online",
    publisher = "Association for Computational Linguistics",
}