Upload folder using huggingface_hub
- .gitattributes +6 -0
- README.md +65 -0
- SmolLM2-1.7B-Eagle-bf16.gguf +3 -0
- SmolLM2-1.7B-Eagle-f16.gguf +3 -0
- SmolLM2-1.7B-Eagle-f32.gguf +3 -0
- SmolLM2-1.7B-Eagle-q8_0.gguf +3 -0
- SmolLM2-1.7B-Eagle-tq1_0.gguf +3 -0
- SmolLM2-1.7B-Eagle-tq2_0.gguf +3 -0
.gitattributes
CHANGED
@@ -33,3 +33,9 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
+SmolLM2-1.7B-Eagle-bf16.gguf filter=lfs diff=lfs merge=lfs -text
+SmolLM2-1.7B-Eagle-f16.gguf filter=lfs diff=lfs merge=lfs -text
+SmolLM2-1.7B-Eagle-f32.gguf filter=lfs diff=lfs merge=lfs -text
+SmolLM2-1.7B-Eagle-q8_0.gguf filter=lfs diff=lfs merge=lfs -text
+SmolLM2-1.7B-Eagle-tq1_0.gguf filter=lfs diff=lfs merge=lfs -text
+SmolLM2-1.7B-Eagle-tq2_0.gguf filter=lfs diff=lfs merge=lfs -text

README.md
ADDED
@@ -0,0 +1,65 @@
---
language:
- en
- ru
license: apache-2.0
pipeline_tag: text-generation
base_model: HuggingFaceTB/SmolLM2-1.7B
datasets: nyuuzyou/EagleSFT
co2_eq_emissions:
  emissions: 11163 # in grams of CO2
  source: "Calculated based on power consumption and regional carbon intensity"
  training_type: "fine-tuning"
  geographical_location: "Kazan, Russia"
  hardware_used: "1 RTX 5090 GPU"
---
# SmolLM2-1.7B-Eagle

SmolLM2-1.7B-Eagle-GGUF is a GGUF conversion of the [SmolLM2-1.7B-Eagle](https://huggingface.co/nyuuzyou/SmolLM2-1.7B-Eagle) model, which itself is a fine-tuned version of [SmolLM2-1.7B](https://huggingface.co/HuggingFaceTB/SmolLM2-1.7B) on the [EagleSFT](https://huggingface.co/datasets/nyuuzyou/EagleSFT) dataset. This model is designed to improve capabilities in both Russian and English language tasks while being optimized for efficient local deployment.

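For local inference, the sketch below shows one hedged way to fetch a quantized file from this repository and run it with `llama-cpp-python`. The repository id, context size, thread count, and prompt format are assumptions, not documented settings.

```python
# Minimal sketch: download one GGUF quantization and run it locally with
# llama-cpp-python. The repo id below is assumed from this page's file names;
# context size, thread count, and the prompt format are illustrative only.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

model_path = hf_hub_download(
    repo_id="nyuuzyou/SmolLM2-1.7B-Eagle-GGUF",  # assumed repository id
    filename="SmolLM2-1.7B-Eagle-q8_0.gguf",     # any file listed in this commit works
)

llm = Llama(model_path=model_path, n_ctx=2048, n_threads=8)

# Usage pattern mentioned later in this card: a question in Russian, answered in English.
out = llm(
    "Что такое GGUF? Answer in English.",  # "What is GGUF? ..."
    max_tokens=128,
    temperature=0.7,
)
print(out["choices"][0]["text"])
```
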
## Model Description

SmolLM2-1.7B-Eagle is a lightweight language model that has been fine-tuned specifically to handle bilingual content. This fine-tuning extends the base model's capabilities to better understand and generate content in Russian while maintaining its English competency.

### Base Model

The model is built upon SmolLM2-1.7B, a compact language model with 1.7 billion parameters that offers a good balance between performance and resource requirements.

## Fine-tuning Details

### Dataset

The model was fine-tuned on the EagleSFT dataset, which contains 536,231 pairs of human questions and machine-generated responses in both Russian and English. The dataset primarily focuses on educational content but also includes everyday questions and casual conversations.

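For orientation, a minimal sketch of inspecting the dataset with the Hugging Face `datasets` library is shown below; the split name and record layout are assumptions, so consult the dataset card for the actual schema.

```python
# Minimal sketch: peek at EagleSFT with the `datasets` library.
# The "train" split name and the record layout are assumptions; see
# https://huggingface.co/datasets/nyuuzyou/EagleSFT for the real schema.
from datasets import load_dataset

ds = load_dataset("nyuuzyou/EagleSFT", split="train")
print(ds)     # number of rows and column names
print(ds[0])  # one question/response pair
```
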
### Environmental Impact

- **Training duration**: 79.73h total in Kazan, Russia
- **Power consumption**: 400W average
- **Hardware**: 1 x RTX 5090
- **Carbon emissions**: Approximately 11.16 kg CO2eq
  - Calculated from the average power consumption and the average regional carbon intensity of 350 g CO2eq/kWh (see the short calculation after this list)
  - Kazan: 400W * 79.73h * 350 g/kWh ≈ 11.16 kg CO2eq

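The same estimate, written out using only the figures from the list above:

```python
# Carbon estimate from the figures above: average draw * hours * grid intensity.
power_kw = 0.400            # 400 W average power consumption
hours = 79.73               # total training duration
intensity_g_per_kwh = 350   # regional carbon intensity (g CO2eq/kWh)

co2_kg = power_kw * hours * intensity_g_per_kwh / 1000
print(f"{co2_kg:.2f} kg CO2eq")  # -> 11.16 kg CO2eq
```
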
### Training Parameters

- **Training approach**: Supervised Fine-Tuning (SFT); a hedged configuration sketch follows this list
- **Training epochs**: 2
- **Learning rate**: 3.0e-04
- **Precision**: bfloat16

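As referenced above, the sketch below shows how these hyperparameters might map onto a TRL `SFTConfig`. The actual training script is not published; names such as `output_dir` are placeholders, and the use of TRL itself is an assumption.

```python
# Hypothetical mapping of the reported hyperparameters onto TRL's SFTConfig.
# This is not the published training recipe; it only shows where the values
# from the list above would plug in.
from trl import SFTConfig

config = SFTConfig(
    output_dir="smollm2-1.7b-eagle-sft",  # placeholder
    num_train_epochs=2,                   # reported: 2 epochs
    learning_rate=3.0e-4,                 # reported learning rate
    bf16=True,                            # reported precision: bfloat16
)
```
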
## Limitations and Capabilities

It is important to note that this model was not pre-trained on Russian text but only underwent SFT on a relatively small number of tokens. This means the model has far less data to rely on when answering in Russian than in English.

Despite these extensive limitations, the model shows minimal improvement in:

- Basic recognition of Russian prompts (though with frequent misunderstandings)
- Handling simple tasks formatted as "{question in Russian}, answer in English"
- Basic translation from Russian to English (though quality remains poor)

The model's minimal understanding of Russian comes solely from the supervised fine-tuning process, without any proper pre-training on a Russian text corpus, resulting in severely limited capabilities.

## Experimental Capabilities

The model demonstrates some experimental capabilities, but with significant limitations:

- Basic Russian text understanding (with frequent errors and misinterpretations)
- Limited question answering in Russian (quality significantly lower than English)
- Basic Russian to English translation (better than English to Russian)

## Limitations

- **NOT SUITABLE FOR PRODUCTION USE**: This model should not be used in production environments in any form
- Extremely limited knowledge base for Russian due to the lack of pre-training on Russian text
- Unoptimized tokenizer performance for Russian results in inefficient token usage
- Output quality in Russian will be unsatisfactory for most use cases
- May produce inaccurate, inconsistent, or inappropriate responses, especially in Russian
- All limitations of the base SmolLM2-1.7B model still apply

SmolLM2-1.7B-Eagle-bf16.gguf
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:8187b3333ddf6140123b38d8191373c4d389c01cd1a497234620e3e436273de3
size 3424735712

SmolLM2-1.7B-Eagle-f16.gguf
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:2ae14cf3490e8fde1565b3dbe95d018ebddf723657c51f7fd745b10cba8523ed
size 3424735712

SmolLM2-1.7B-Eagle-f32.gguf
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:2361ddc963343b04794ff11aa6ef706fde8161d7c3dfda56e61d6af6c5a6ac0e
size 6847287776

SmolLM2-1.7B-Eagle-q8_0.gguf
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:fb111256bcf4a14acee754c182ae98fa1d6b6d828d65dfec8e76c3d1be06581f
size 1820414432

SmolLM2-1.7B-Eagle-tq1_0.gguf
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:a5b36a0fa9e9e581ecbf1fcf48bed8dcf3c48f75b9eebc3627f8d63cecff4251
size 543248864

SmolLM2-1.7B-Eagle-tq2_0.gguf
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:90f5264dd5401b74f5bf05564b11ee4709caec2dadab713bd7e386a3189fabcd
size 618746336