# psresearch/RE_scholarly_text_deberta_v3_large
A DeBERTa-v3-large model fine-tuned for Relation Extraction (RE) on scholarly documents that mention software. This model identifies semantic relationships (e.g., `Developer_of`, `Version_of`) between software-related entities in academic text.
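A minimal usage sketch follows. It assumes the checkpoint loads as a standard `AutoModelForSequenceClassification` head; the inline `<e1>`/`<e2>` entity markers are purely illustrative, so check `submission_recreate.ipynb` (referenced below) for the exact input format used during fine-tuning.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "psresearch/RE_scholarly_text_deberta_v3_large"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# Hypothetical input: a sentence with the two candidate entities marked inline.
# The marker scheme must match whatever was used during fine-tuning.
text = "We used <e1>TensorFlow</e1>, developed by <e2>Google</e2>, for all experiments."

inputs = tokenizer(text, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits

# Map the highest-scoring class index back to its relation label.
pred_id = logits.argmax(dim=-1).item()
print(model.config.id2label[pred_id])  # e.g. "Developer_of"
```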
## 🧪 Training Data
This model was trained on the NER-RE-for-Software-Mentions dataset, which contains annotated relationships between named entities found in scholarly papers related to software engineering. To load and run the model, see `submission_recreate.ipynb`.
## 📊 Metrics

Results on the test set:
| Relation | Precision | Recall | F1-Score | Support |
|---|---|---|---|---|
| Developer_of | 0.2344 | 0.7500 | 0.3571 | 20 |
| Citation_of | 0.5321 | 0.7968 | 0.6381 | 187 |
| Version_of | 0.3901 | 0.7396 | 0.5108 | 96 |
| PlugIn_of | 0.1013 | 0.6154 | 0.1739 | 13 |
| URL_of | 0.4701 | 0.7857 | 0.5882 | 70 |
| License_of | 0.0000 | 0.0000 | 0.0000 | 0 |
| AlternativeName_of | 0.6522 | 0.8824 | 0.7500 | 17 |
| Release_of | 0.5263 | 1.0000 | 0.6897 | 10 |
| Abbreviation_of | 0.5000 | 0.5000 | 0.5000 | 12 |
| Extension_of | 0.0000 | 0.0000 | 0.0000 | 6 |
| Specification_of | 0.0000 | 0.0000 | 0.0000 | 0 |
| Micro Avg | 0.4240 | 0.7633 | 0.5452 | 431 |
| Macro Avg | 0.3785 | 0.6744 | 0.4675 | 431 |
| Weighted Avg | 0.4599 | 0.7633 | 0.5675 | 431 |
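The averaged rows follow the standard micro/macro/weighted definitions. Scores of this form can be recomputed from gold and predicted labels with scikit-learn; the sketch below uses illustrative placeholder labels, not the actual test data.

```python
from sklearn.metrics import precision_recall_fscore_support

# y_true / y_pred: gold and predicted relation labels over the test set.
# Placeholder values for illustration only.
y_true = ["Citation_of", "Version_of", "Developer_of"]
y_pred = ["Citation_of", "Version_of", "Citation_of"]

for avg in ("micro", "macro", "weighted"):
    p, r, f1, _ = precision_recall_fscore_support(
        y_true, y_pred, average=avg, zero_division=0
    )
    print(f"{avg}: P={p:.4f} R={r:.4f} F1={f1:.4f}")
```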
## 📈 Model Comparison
| Task | Model / Setup | Precision | Recall | F1 |
|---|---|---|---|---|
| RE | DeBERTa-V3-Large | 0.1025 | 0.4117 | 0.1543 |
| RE | ModernBERT-Large | 0.0878 | 0.4228 | 0.1379 |
| RE | DeBERTa-V3-Large (Augmented Data) | 0.3785 | 0.6744 | 0.4675 |
| RE | ModernBERT-Large (Augmented Data) | 0.3473 | 0.6702 | 0.4384 |
## 🏷️ Label Mapping
    {
      "Developer_of": 0,
      "URL_of": 1,
      "Version_of": 2,
      "Citation_of": 3,
      "PlugIn_of": 4,
      "Extension_of": 5,
      "Specification_of": 6,
      "no_relation": 7,
      "Release_of": 8,
      "Abbreviation_of": 9,
      "License_of": 10,
      "AlternativeName_of": 11
    }
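A Hugging Face checkpoint normally carries this mapping in its config as `label2id`/`id2label`; if it needs to be applied outside that config, the inverse mapping can be built directly (a minimal sketch):

```python
# Label mapping as published above: relation name -> class index.
label2id = {
    "Developer_of": 0, "URL_of": 1, "Version_of": 2, "Citation_of": 3,
    "PlugIn_of": 4, "Extension_of": 5, "Specification_of": 6, "no_relation": 7,
    "Release_of": 8, "Abbreviation_of": 9, "License_of": 10, "AlternativeName_of": 11,
}

# Invert it to decode predicted class indices back to relation names.
id2label = {v: k for k, v in label2id.items()}

print(id2label[7])  # "no_relation"
```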
## Evaluation results

- Micro F1 on NER-RE-for-Software-Mentions (self-reported): 0.545
- Macro F1 on NER-RE-for-Software-Mentions (self-reported): 0.468
- Weighted F1 on NER-RE-for-Software-Mentions (self-reported): 0.568