RuPERTa-base (Spanish RoBERTa) + POS
This model is a version of RuPERTa-base fine-tuned on CoNLL corpora for the POS (part-of-speech) tagging downstream task.
Details of the downstream task (POS) - Dataset
Dataset | # Examples |
---|---|
Train | 445 K |
Dev | 55 K |
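The card does not name the exact corpus, but the split sizes are consistent with a UD-style Spanish treebank. As a hedged sketch (assuming UD Spanish AnCora loaded through the `datasets` library, which is an illustrative guess rather than a confirmed detail of this model), the data could be inspected like this:

from datasets import load_dataset

# Hypothetical example: the exact CoNLL corpus is not specified on this card,
# so UD Spanish AnCora is used here purely for illustration.
# Recent versions of `datasets` may require trust_remote_code=True for this loader.
dataset = load_dataset("universal_dependencies", "es_ancora")

print(dataset["train"].num_rows, "train sentences,", dataset["validation"].num_rows, "dev sentences")

# Each example holds parallel lists of word forms and UPOS tag ids
example = dataset["train"][0]
print(list(zip(example["tokens"], example["upos"])))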
Labels covered:
ADJ
ADP
ADV
AUX
CCONJ
DET
INTJ
NOUN
NUM
PART
PRON
PROPN
PUNCT
SCONJ
SYM
VERB
Metrics on evaluation set
Metric | Score |
---|---|
F1 | 97.39 |
Precision | 97.47 |
Recall | 97.32 |
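The evaluation script is not included in the card; the sketch below shows one common way such token-level metrics are computed, micro-averaging precision, recall and F1 over the flattened gold and predicted tags (the tag lists here are placeholders, not the actual evaluation data behind the table above):

from sklearn.metrics import precision_recall_fscore_support

# Placeholder tag sequences standing in for the flattened gold and predicted
# POS tags of the evaluation set (not the real data behind the reported scores).
y_true = ["DET", "NOUN", "AUX", "VERB", "ADP", "PROPN", "PUNCT"]
y_pred = ["DET", "NOUN", "AUX", "VERB", "ADP", "NOUN", "PUNCT"]

precision, recall, f1, _ = precision_recall_fscore_support(y_true, y_pred, average="micro")
print(f"Precision: {precision:.4f}  Recall: {recall:.4f}  F1: {f1:.4f}")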
Model in action
Example of usage
import torch
from transformers import AutoModelForTokenClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('mrm8488/RuPERTa-base-finetuned-pos')
model = AutoModelForTokenClassification.from_pretrained('mrm8488/RuPERTa-base-finetuned-pos')

# Mapping from predicted class ids to POS tags
id2label = {
    "0": "O",
    "1": "ADJ",
    "2": "ADP",
    "3": "ADV",
    "4": "AUX",
    "5": "CCONJ",
    "6": "DET",
    "7": "INTJ",
    "8": "NOUN",
    "9": "NUM",
    "10": "PART",
    "11": "PRON",
    "12": "PROPN",
    "13": "PUNCT",
    "14": "SCONJ",
    "15": "SYM",
    "16": "VERB"
}

text = "Mis amigos están pensando viajar a Londres este verano."
input_ids = torch.tensor(tokenizer.encode(text)).unsqueeze(0)

with torch.no_grad():
    outputs = model(input_ids)
logits = outputs[0]  # token classification scores, shape (1, sequence_length, num_labels)

# Print the predicted tag for each word. This simple alignment assumes each word
# is encoded as a single subword token (index 0 is the <s> special token).
words = text.split(" ")
for sequence in logits:
    for index, token_scores in enumerate(sequence):
        if 0 < index <= len(words):
            print(words[index - 1] + ": " + id2label[str(torch.argmax(token_scores).item())])
'''
Output:
--------
Mis: NUM
amigos: PRON
están: AUX
pensando: ADV
viajar: VERB
a: ADP
Londres: PROPN
este: DET
verano.: NOUN
'''
Yeah! Not too bad
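The same tagging can also be run through the higher-level pipeline API. A hedged sketch follows: if the model's config stores the id2label mapping shown above, the pipeline prints POS tags directly; otherwise it falls back to generic LABEL_k names that can be translated with the same dictionary.

from transformers import pipeline

# Alternative, higher-level usage. Labels in the output depend on the id2label
# mapping stored in the model config; generic LABEL_k names can be mapped back
# with the id2label dictionary from the example above.
pos_tagger = pipeline(
    "token-classification",
    model="mrm8488/RuPERTa-base-finetuned-pos",
    tokenizer="mrm8488/RuPERTa-base-finetuned-pos"
)

for prediction in pos_tagger("Mis amigos están pensando viajar a Londres este verano."):
    print(prediction["word"], prediction["entity"], round(prediction["score"], 3))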
Created by Manuel Romero/@mrm8488 | LinkedIn
Made with ♥ in Spain