# Fine-tuned SmolLM Model
This model is a fine-tuned version of HuggingFaceTB/SmolLM2-1.7B-Instruct.
## Training Details
- Base Model: HuggingFaceTB/SmolLM2-1.7B-Instruct
- Fine-tuning Method: LoRA (Low-Rank Adaptation); see the configuration sketch after this list
- Hardware: NVIDIA RTX 3050 (4GB VRAM)
- Framework: PyTorch + Transformers + PEFT
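
A minimal sketch of how the LoRA adaptation might be set up with PEFT. The rank and alpha match the Training Configuration section below; the target modules, dropout, and fp16 loading are assumptions for illustration, not values recorded on this card.

```python
import torch
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Load the base model in fp16 to fit a small-VRAM GPU (assumed setup).
base_model = AutoModelForCausalLM.from_pretrained(
    "HuggingFaceTB/SmolLM2-1.7B-Instruct",
    torch_dtype=torch.float16,
)

# Rank and alpha follow the Training Configuration section; target modules
# and dropout are illustrative assumptions.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)

model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()  # only the adapter weights are trained
```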
## Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the fine-tuned model and tokenizer
tokenizer = AutoTokenizer.from_pretrained("terrytaylorbonn/431_smollm-model")
model = AutoModelForCausalLM.from_pretrained("terrytaylorbonn/431_smollm-model")

# Build a chat prompt and generate text
messages = [{"role": "user", "content": "Your prompt here"}]
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=200)

# Decode only the newly generated tokens, skipping the prompt
response = tokenizer.decode(outputs[0][len(inputs.input_ids[0]):], skip_special_tokens=True)
print(response)
```
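
If the repository stores only the LoRA adapter weights rather than a merged model, loading through PEFT may be required instead. A sketch assuming the adapter sits on top of the same base model:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel

# Load the base model, then attach the fine-tuned adapter (assumes the repo
# contains PEFT adapter weights rather than merged weights).
base = AutoModelForCausalLM.from_pretrained("HuggingFaceTB/SmolLM2-1.7B-Instruct")
model = PeftModel.from_pretrained(base, "terrytaylorbonn/431_smollm-model")
tokenizer = AutoTokenizer.from_pretrained("terrytaylorbonn/431_smollm-model")
```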
## Training Configuration
- Batch Size: 1 (with gradient accumulation)
- Learning Rate: 2e-4
- LoRA Rank: 16
- LoRA Alpha: 32
- Training Steps: varies with the size of the fine-tuning dataset (see the training sketch below)
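
A sketch of how these settings might translate into a Hugging Face Trainer run. The gradient accumulation steps, epoch count, and `train_dataset` are assumptions used for illustration, not values recorded on this card.

```python
from transformers import TrainingArguments, Trainer

# Hyperparameters from the list above; gradient_accumulation_steps and
# num_train_epochs are illustrative assumptions.
training_args = TrainingArguments(
    output_dir="smollm2-lora-out",
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,
    learning_rate=2e-4,
    num_train_epochs=1,
    fp16=True,
    logging_steps=10,
    save_strategy="epoch",
)

trainer = Trainer(
    model=model,                  # the PEFT-wrapped model from the sketch above
    args=training_args,
    train_dataset=train_dataset,  # hypothetical tokenized fine-tuning dataset
)
trainer.train()
```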
## Limitations
This model inherits the limitations of the base model and may have additional biases from the fine-tuning data.