Model Trained Using AutoTrain

  • Problem type: Extractive Question Answering
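Extractive question answering means the model selects an answer span directly from a provided context rather than generating free text. A minimal illustrative sketch using the transformers question-answering pipeline with this repository's model ID (the question and context are the same toy example used in the Usage section below):

from transformers import pipeline

# Load this card's checkpoint into the extractive question-answering pipeline.
qa = pipeline("question-answering", model="VOKulus/my-model-test-roberta")

result = qa(question="Who loves AutoTrain?", context="Everyone loves AutoTrain")
# The pipeline returns the extracted span plus its confidence score.
print(result["answer"], result["score"])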

Validation Metrics

  • loss: 6.235438195290044e-05
  • exact_match: 99.7703
  • f1: 99.8851
  • runtime: 18.3183
  • samples_per_second: 77.627
  • steps_per_second: 9.717
  • epoch: 2.0
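exact_match and f1 are the standard SQuAD-style span metrics: exact_match is the share of predictions that equal a reference answer after light normalization, and f1 is the token-overlap F1 between prediction and reference. A rough illustrative sketch of how such scores are computed (not the exact evaluation code used by AutoTrain):

from collections import Counter

def normalize(text):
    # Lowercase and collapse whitespace; full SQuAD evaluation also strips
    # punctuation and the articles "a", "an", "the".
    return " ".join(text.lower().split())

def exact_match(prediction, reference):
    return float(normalize(prediction) == normalize(reference))

def token_f1(prediction, reference):
    pred_tokens = normalize(prediction).split()
    ref_tokens = normalize(reference).split()
    common = Counter(pred_tokens) & Counter(ref_tokens)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(ref_tokens)
    return 2 * precision * recall / (precision + recall)

print(exact_match("Everyone", "everyone"))                                # 1.0
print(round(token_f1("loves AutoTrain", "Everyone loves AutoTrain"), 4))  # 0.8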

Usage

import torch
from transformers import AutoModelForQuestionAnswering, AutoTokenizer

# Load the fine-tuned checkpoint and its tokenizer from this repository.
model = AutoModelForQuestionAnswering.from_pretrained("VOKulus/my-model-test-roberta")
tokenizer = AutoTokenizer.from_pretrained("VOKulus/my-model-test-roberta")

question, text = "Who loves AutoTrain?", "Everyone loves AutoTrain"
inputs = tokenizer(question, text, return_tensors="pt")

# Passing gold start/end token positions makes the forward pass also return a loss.
start_positions = torch.tensor([1])
end_positions = torch.tensor([3])

outputs = model(**inputs, start_positions=start_positions, end_positions=end_positions)
loss = outputs.loss
start_scores = outputs.start_logits
end_scores = outputs.end_logits
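The snippet above stops at the raw logits. A minimal follow-up sketch, assuming the same model, tokenizer, and inputs as above, shows one common way to turn them into an answer string: take the argmax start and end positions and decode that token span.

# Pick the most likely start and end token indices (a greedy decode; real
# pipelines additionally check that start <= end and cap the span length).
start_index = torch.argmax(start_scores, dim=-1).item()
end_index = torch.argmax(end_scores, dim=-1).item()

# Decode the predicted span back to text.
answer_ids = inputs["input_ids"][0][start_index : end_index + 1]
answer = tokenizer.decode(answer_ids, skip_special_tokens=True)
print(answer)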