---
base_model:
  - google/gemma-2-2b-it
tags:
  - text-generation-inference
  - transformers
  - unsloth
  - gemma2
  - trl
license: gemma
language:
  - en
  - fi
---

The example uses a small dataset of 248 question-answer pairs from S-Group. While such a small dataset will likely lead to overfitting, it effectively showcases the fine-tuning process with Unsloth. This repository is intended as a demonstration of the fine-tuning method, not a production-ready solution. Use `repetition_penalty` and `no_repeat_ngram_size` during inference to reduce repetition. The instructions and dataset are available on GitHub.
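A minimal inference sketch using the `transformers` library with the repetition-control settings recommended above. The model id below is the base model as a stand-in; substitute the path of the fine-tuned checkpoint. The prompt and the exact parameter values are illustrative assumptions, not values from this card.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Stand-in model id — replace with the fine-tuned checkpoint path.
model_id = "google/gemma-2-2b-it"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Gemma-2-it expects the chat template; build a single-turn prompt.
messages = [{"role": "user", "content": "What is S-Group?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(
    inputs,
    max_new_tokens=256,
    repetition_penalty=1.3,   # penalize tokens already generated
    no_repeat_ngram_size=3,   # forbid repeating any 3-gram verbatim
)
# Decode only the newly generated tokens, not the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

Both parameters are standard `generate()` arguments; the penalty value (here 1.3) typically needs tuning per model, since values that are too high degrade fluency.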

# Uploaded model

- **Developed by:** mlconvexai
- **License:** Gemma
- **Finetuned from model:** google/gemma-2-2b-it

This gemma2 model was trained 2x faster with Unsloth and Hugging Face's TRL library.