---
license: apache-2.0
tags:
- sweelol-ai
- text-generation
- gemma
- distillation
- pruning
- lora
- prompt-tuning
---

# {model_name}

## Model Description

This model is part of the **Sweelol AI Hub** collection, resulting from experiments in efficient fine-tuning and knowledge distillation on the Gemma-3-270m architecture using the Databricks Dolly-15k dataset on Kaggle TPUs/GPUs.

**Full Research Notebook & Benchmark Results:** [Link to your final Kaggle Benchmark notebook here]

**Key Details:**

* **Base Model:** `google/gemma-3-270m`
* **Training Data:** Databricks Dolly-15k (subset)
* **Fine-Tuning Method:** {method_description}
* **Purpose:** {purpose}

This is a placeholder README. A detailed model card with full results and usage instructions will be added shortly.
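Until detailed usage instructions are added, here is a minimal inference sketch. It assumes this checkpoint loads through the standard `transformers` causal-LM API, like its base model; the repo id below points at the base `google/gemma-3-270m` as a stand-in, so substitute this repo's id once the weights are published.

```python
# Minimal sketch, assuming standard transformers AutoModel loading.
# MODEL_ID is an assumption: it names the base model as a stand-in
# for this checkpoint's repo id.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "google/gemma-3-270m"  # swap in this repo's id

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID)

prompt = "Explain knowledge distillation in one sentence."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Note that the LoRA and prompt-tuning variants in this collection may ship as PEFT adapters rather than merged weights, in which case loading via `peft.AutoPeftModelForCausalLM` would be needed instead; check the files in the repo before loading.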