
How to Get Started with the Model

Use the code below to classify whether two articles cover the same topic.

import torch
from transformers import AutoTokenizer, XLMRobertaForSequenceClassification

MODEL_PATH = "upb-nlp/xlm_roberta_large_article_same_topic_classification"

# Fall back to CPU when no CUDA device is available
device = "cuda" if torch.cuda.is_available() else "cpu"

tokenizer = AutoTokenizer.from_pretrained(MODEL_PATH)
model = XLMRobertaForSequenceClassification.from_pretrained(MODEL_PATH, num_labels=2).to(device)
model.eval()

# Each input is an article's title followed by its body
t1 = "First article title. First article body."
t2 = "Second article title. Second article body."

# Encode the two articles as a single sentence pair
inputs = tokenizer(
    t1,
    t2,
    return_tensors="pt",
    truncation=True,
    padding="max_length",
    max_length=512,
).to(device)

# Generate prediction
with torch.no_grad():
    outputs = model(**inputs)
    logits = outputs.logits
    predicted_class = torch.argmax(logits, dim=1).item()

# 0 or 1; see model.config.id2label for the label names
print(predicted_class)
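
If you need a confidence score rather than a bare class index, apply a softmax to the logits. The sketch below is a minimal extension of the snippet above, reusing its model and inputs; it is an illustration, not part of the released model card.

import torch.nn.functional as F

with torch.no_grad():
    logits = model(**inputs).logits
    # Convert the two logits into probabilities that sum to 1
    probs = F.softmax(logits, dim=1)  # shape: (1, 2)

predicted_class = probs.argmax(dim=1).item()
confidence = probs[0, predicted_class].item()
print(f"class={predicted_class} confidence={confidence:.3f}")

To score many article pairs at once, pass two equal-length lists of strings to the tokenizer instead of two single strings; it pairs and batches them row by row.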
Model size: 560M parameters (tensor type F32, Safetensors format)

This model is fine-tuned from a base XLM-RoBERTa-Large checkpoint.