---
library_name: transformers
tags:
  - autotrain
  - question-answering
base_model: deepset/roberta-base-squad2
widget:
  - text: Who loves AutoTrain?
    context: Everyone loves AutoTrain
datasets:
  - VOKulus/test
---

# Model Trained Using AutoTrain

- Problem type: Extractive Question Answering
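
For a quick end-to-end check of the widget example above, the model can be run through the Transformers question-answering pipeline. This is a minimal sketch; the model id below is a placeholder to replace with this model's actual Hub path or a local checkpoint directory.

```python
from transformers import pipeline

# Placeholder model id: replace with this model's Hub repo or a local checkpoint path
qa = pipeline("question-answering", model="VOKulus/autotrain-model")

# The widget example from the metadata above
result = qa(question="Who loves AutoTrain?", context="Everyone loves AutoTrain")
print(result)  # dict with 'answer', 'score', 'start', 'end'
```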

## Validation Metrics

- loss: 6.235438195290044e-05
- exact_match: 99.7703
- f1: 99.8851
- runtime: 18.3183
- samples_per_second: 77.627
- steps_per_second: 9.717

- epoch: 2.0
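
exact_match and f1 above are the standard SQuAD-style span metrics. As a reference for how they are computed, here is a minimal sketch using the evaluate library's squad metric (the toy prediction/reference pair below is illustrative only):

```python
import evaluate

squad_metric = evaluate.load("squad")

# Toy example in the squad metric's expected input format
predictions = [{"id": "0", "prediction_text": "Everyone"}]
references = [{"id": "0", "answers": {"text": ["Everyone"], "answer_start": [0]}}]

print(squad_metric.compute(predictions=predictions, references=references))
# {'exact_match': 100.0, 'f1': 100.0}
```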

## Usage

```python
import torch
from transformers import AutoModelForQuestionAnswering, AutoTokenizer

# Load the fine-tuned model and tokenizer (replace ... with this repo's id or a local checkpoint path)
model = AutoModelForQuestionAnswering.from_pretrained(...)
tokenizer = AutoTokenizer.from_pretrained(...)

question, text = "Who loves AutoTrain?", "Everyone loves AutoTrain"

# Tokenize the question/context pair
inputs = tokenizer(question, text, return_tensors="pt")

# Optional labels: token indices of the gold answer span, used only to compute a loss
start_positions = torch.tensor([1])
end_positions = torch.tensor([3])

outputs = model(**inputs, start_positions=start_positions, end_positions=end_positions)
loss = outputs.loss
start_scores = outputs.start_logits
end_scores = outputs.end_logits
```
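
The snippet above stops at the raw logits. A minimal follow-up sketch (reusing `tokenizer`, `inputs`, `start_scores`, and `end_scores` from the block above) decodes the predicted span back into text:

```python
# Most likely start/end token positions from the logits
answer_start = torch.argmax(start_scores)
answer_end = torch.argmax(end_scores) + 1  # make the end index exclusive for slicing

# Map the token span back to a string
answer_ids = inputs["input_ids"][0][answer_start:answer_end]
answer = tokenizer.decode(answer_ids, skip_special_tokens=True)
print(answer)  # should resemble "Everyone" for the widget example
```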