The example uses a small dataset of 248 question-answer pairs from S-Group.
While this small dataset will likely lead to overfitting, it effectively showcases the fine-tuning process using Unsloth.

This repository is intended as a demonstration of the fine-tuning method, not a production-ready solution.

Use `repetition_penalty` and `no_repeat_ngram_size` during prediction to reduce repetitive output.

Instructions and the dataset are available on [GitHub](https://github.com/MLConvexAI/Gemma-2-2b-it-finetuning).
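
As a sketch of what `repetition_penalty` does during generation: it rescales the score of every token that has already been produced, following the CTRL-style rule that Hugging Face's `RepetitionPenaltyLogitsProcessor` applies. The function and the values below are illustrative only, not taken from this repository:

```python
# Illustration of how repetition_penalty reshapes next-token scores.
# Mirrors the CTRL-style rule (divide positive scores, multiply negative
# ones); the numbers here are made up for the demo.

def apply_repetition_penalty(logits, generated_ids, penalty):
    """Push down the scores of tokens that were already generated."""
    out = list(logits)
    for tok in set(generated_ids):
        if out[tok] > 0:
            out[tok] /= penalty   # positive scores shrink toward zero
        else:
            out[tok] *= penalty   # negative scores move further from zero
    return out

scores = [2.0, -1.0, 0.5]   # fake logits over a 3-token vocabulary
seen = [0, 1]               # tokens 0 and 1 already appeared
print(apply_repetition_penalty(scores, seen, 1.25))  # [1.6, -1.25, 0.5]
```

In practice both knobs are simply passed to `model.generate(...)`, e.g. `model.generate(**inputs, repetition_penalty=1.25, no_repeat_ngram_size=3)` (example values); `no_repeat_ngram_size` additionally forbids any n-gram of that length from being generated twice.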
# Uploaded model