---
language:
- en
- ru
license: apache-2.0
pipeline_tag: text-generation
base_model: nyuuzyou/SmolLM2-1.7B-Eagle
datasets: nyuuzyou/EagleSFT
co2_eq_emissions:
  emissions: 11163 # in grams of CO2
  source: "Calculated based on power consumption and regional carbon intensity"
  training_type: "fine-tuning"
  geographical_location: "Kazan, Russia"
  hardware_used: "1 RTX 5090 GPU"
---
# SmolLM2-1.7B-Eagle-GGUF
SmolLM2-1.7B-Eagle-GGUF is a GGUF conversion of the [SmolLM2-1.7B-Eagle](https://huggingface.co/nyuuzyou/SmolLM2-1.7B-Eagle) model, which itself is a fine-tuned version of [SmolLM2-1.7B](https://huggingface.co/HuggingFaceTB/SmolLM2-1.7B) on the [EagleSFT](https://huggingface.co/datasets/nyuuzyou/EagleSFT) dataset. This model is designed to improve capabilities in both Russian and English language tasks while being optimized for efficient local deployment.
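
A minimal local-inference sketch using [llama-cpp-python](https://github.com/abetlen/llama-cpp-python). The quantized filename below is an assumption; substitute whichever `.gguf` file you downloaded from this repository.

```python
# Minimal sketch: run the GGUF model locally with llama-cpp-python
# (pip install llama-cpp-python). The filename is an assumption;
# use whichever quantization you actually downloaded.
from llama_cpp import Llama

llm = Llama(
    model_path="SmolLM2-1.7B-Eagle-Q4_K_M.gguf",  # hypothetical filename
    n_ctx=2048,  # context window size
)

output = llm(
    "Explain photosynthesis in one paragraph.",
    max_tokens=128,
    temperature=0.7,
)
print(output["choices"][0]["text"])
```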

## Model Description
SmolLM2-1.7B-Eagle is a lightweight language model that has been fine-tuned specifically to handle bilingual content. This fine-tuning extends the base model's capabilities to better understand and generate content in Russian while maintaining its English competency.

### Base Model
The model is built upon SmolLM2-1.7B, a compact language model with 1.7 billion parameters that offers a good balance between performance and resource requirements.

## Fine-tuning Details
### Dataset
The model was fine-tuned on the EagleSFT dataset, which contains 536,231 pairs of human questions and machine-generated responses in both Russian and English. The dataset primarily focuses on educational content but also includes everyday questions and casual conversations.
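
A quick way to inspect the dataset with the Hugging Face `datasets` library. The split name used here is an assumption; check the dataset card for the actual schema.

```python
# Sketch: inspect the EagleSFT dataset (pip install datasets).
# The split name is an assumption; verify against the dataset card.
from datasets import load_dataset

ds = load_dataset("nyuuzyou/EagleSFT", split="train")
print(ds)     # dataset size and column names
print(ds[0])  # one question/response pair
```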

### Environmental Impact
- **Training duration**: 79.73h total in Kazan, Russia
- **Power consumption**: 400W average
- **Hardware**: 1 x RTX 5090
- **Carbon emissions**: Approximately 11.16 kg CO2eq
  - Calculated from average power draw and the average regional carbon intensity (350 g CO2eq/kWh)
  - Kazan: 400 W × 79.73 h = 31.89 kWh; 31.89 kWh × 350 g/kWh ≈ 11.16 kg CO2eq

### Training Parameters
- **Training approach**: Supervised Fine-Tuning (SFT)
- **Training epochs**: 2
- **Learning rate**: 3.0e-04
- **Precision**: bfloat16
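
A hedged sketch of how a comparable run could be set up with TRL's `SFTTrainer`. Only the epochs, learning rate, and bf16 precision come from this card; the batch size, text column, and output directory are assumptions, since the actual training script was not published.

```python
# Hedged sketch: approximate the reported SFT run with TRL (pip install trl).
# Only num_train_epochs, learning_rate, and bf16 come from this card;
# everything else is an assumption.
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

dataset = load_dataset("nyuuzyou/EagleSFT", split="train")  # split name assumed

config = SFTConfig(
    output_dir="smollm2-eagle-sft",  # hypothetical
    num_train_epochs=2,              # from this card
    learning_rate=3.0e-4,            # from this card
    bf16=True,                       # from this card
    per_device_train_batch_size=4,   # assumption
    dataset_text_field="text",       # assumption: check the dataset schema
)

trainer = SFTTrainer(
    model="HuggingFaceTB/SmolLM2-1.7B",  # the base model
    args=config,
    train_dataset=dataset,
)
trainer.train()
```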

## Limitations and Capabilities
It's important to note that this model did not undergo any additional pre-training; it only received SFT on a relatively small number of tokens. As a result, the model has far less data to rely on when answering in Russian than in English.

Despite these extensive limitations, the model shows minimal improvements over the base model in:
- Basic recognition of Russian prompts (though with frequent misunderstandings)
- Handling simple tasks formatted as "{question in Russian}, answer in English" (see the usage sketch after this list)
- Basic translation from Russian to English (though quality remains poor)
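
For illustration, a prompt following that pattern might look like the sketch below. It reuses the hypothetical GGUF filename from the earlier example, and the Russian question is just a sample.

```python
# Sketch: the "{question in Russian}, answer in English" prompt pattern.
# Filename is an assumption, as above; the question is just a sample.
from llama_cpp import Llama

llm = Llama(model_path="SmolLM2-1.7B-Eagle-Q4_K_M.gguf", n_ctx=2048)

prompt = "Что такое фотосинтез? Ответь на английском."  # "What is photosynthesis? Answer in English."
out = llm(prompt, max_tokens=128)
print(out["choices"][0]["text"])
```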

The model's minimal understanding of Russian comes solely from the supervised fine-tuning process, without any pre-training on a Russian text corpus, which leaves its Russian capabilities severely limited.

## Experimental Capabilities
The model demonstrates some experimental capabilities, but with significant limitations:
- Basic Russian text understanding (with frequent errors and misinterpretations)
- Limited question answering in Russian (quality significantly lower than English)
- Basic Russian to English translation (better than English to Russian)

## Limitations
- **NOT SUITABLE FOR PRODUCTION USE**: This model should not be used in production environments in any form
- Extremely limited knowledge base for Russian language due to lack of pre-training with Russian text
- The tokenizer is not optimized for Russian, which results in inefficient token usage (see the sketch after this list)
- Output quality in Russian will be unsatisfactory for most use cases
- May produce inaccurate, inconsistent, or inappropriate responses, especially in Russian
- All limitations of the base SmolLM2-1.7B model still apply
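
As an illustration of the tokenizer point above, a small sketch that compares token counts for parallel English and Russian sentences with the base model's tokenizer; the sample sentences are arbitrary, and a higher tokens-per-character ratio means less efficient tokenization.

```python
# Sketch: compare tokens-per-character for English vs. Russian text using the
# base model's tokenizer (pip install transformers).
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("HuggingFaceTB/SmolLM2-1.7B")

samples = {
    "en": "Photosynthesis converts light energy into chemical energy.",
    "ru": "Фотосинтез преобразует световую энергию в химическую.",
}
for lang, text in samples.items():
    n_tokens = len(tokenizer(text)["input_ids"])
    print(f"{lang}: {n_tokens} tokens for {len(text)} chars "
          f"({n_tokens / len(text):.2f} tokens/char)")
```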