---
language:
  - en
license: apache-2.0
datasets:
  - glue
metrics:
  - accuracy
model-index:
  - name: gpt2-finetuned-qqp
    results:
      - task:
          name: Text Classification
          type: text-classification
        dataset:
          name: GLUE QQP
          type: glue
          args: qqp
        metrics:
          - name: Accuracy
            type: accuracy
            value: 0.89416
---

# gpt2-finetuned-qqp

This model is GPT-2 fine-tuned on the GLUE QQP dataset. It achieves the following results on the validation set:

- Accuracy: 0.89416

## Model Details

GPT-2 is a transformers model pretrained on a very large corpus of English data in a self-supervised fashion. This means it was pretrained on raw text only, with no human labelling (which is why it can use lots of publicly available data), using an automatic process to generate inputs and labels from those texts. More precisely, it was trained to guess the next word in a sentence. Nevertheless, once fine-tuned, it achieves very good results on text classification tasks.
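
The card does not include an inference snippet; a minimal sketch using the transformers Auto classes might look like the following. The Hub id `PavanNeerudu/gpt2-finetuned-qqp` and the label order are assumptions; check the checkpoint's `id2label` mapping.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Assumed Hub id for this checkpoint; adjust if the repo is named differently.
model_id = "PavanNeerudu/gpt2-finetuned-qqp"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

# QQP pairs two questions; the classifier predicts whether they are duplicates.
q1 = "How do I learn Python quickly?"
q2 = "What is the fastest way to learn Python?"
inputs = tokenizer(q1, q2, return_tensors="pt", truncation=True)

with torch.no_grad():
    logits = model(**inputs).logits

# GLUE QQP uses label 1 for "duplicate"; the checkpoint's mapping may differ.
pred = logits.argmax(dim=-1).item()
print("duplicate" if pred == 1 else "not_duplicate")
```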

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a hedged training sketch follows the list):

- learning_rate: 2e-5
- train_batch_size: 16
- eval_batch_size: 16
- seed: 123
- optimizer: epsilon=1e-08
- num_epochs: 3
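
The card does not include the training script; a minimal fine-tuning sketch with the Hugging Face Trainer that mirrors the listed hyperparameters might look like the following. The `max_length`, the pad-token handling, and the optimizer family (Trainer's default AdamW; the card lists only epsilon) are assumptions.

```python
from datasets import load_dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

raw = load_dataset("glue", "qqp")

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 defines no pad token

def tokenize(batch):
    # max_length is illustrative; the card does not specify one.
    return tokenizer(batch["question1"], batch["question2"],
                     truncation=True, max_length=128)

encoded = raw.map(tokenize, batched=True)

model = AutoModelForSequenceClassification.from_pretrained("gpt2", num_labels=2)
model.config.pad_token_id = tokenizer.pad_token_id

# Mirrors the listed hyperparameters; the optimizer family (Trainer's
# default AdamW) is an assumption, since the card lists only epsilon.
args = TrainingArguments(
    output_dir="gpt2-finetuned-qqp",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    num_train_epochs=3,
    seed=123,
    adam_epsilon=1e-8,
    evaluation_strategy="epoch",
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=encoded["train"],
    eval_dataset=encoded["validation"],
    tokenizer=tokenizer,  # enables dynamic padding via DataCollatorWithPadding
)
trainer.train()
```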

### Training results

| Epoch | Training Loss | Training Accuracy | Validation Loss | Validation Accuracy |
|------:|--------------:|------------------:|----------------:|--------------------:|
| 1     | 0.35743       | 0.83391           | 0.28496         | 0.87549             |
| 2     | 0.26334       | 0.88814           | 0.26964         | 0.89030             |
| 3     | 0.21890       | 0.91252           | 0.27717         | 0.89416             |