# TurkishReasoner-Gemma3-1B
## Model Description
TurkishReasoner-Gemma3-1B is a lightweight Turkish reasoning model fine-tuned from Google's Gemma3-1B. Despite its compact size, it produces high-quality, step-by-step reasoning in Turkish, making it well suited to deployment in resource-constrained environments.
## Key Features
- Built on Google's efficient Gemma3-1B foundation
- Fine-tuned specifically for Turkish reasoning tasks
- Optimized for deployment on devices with limited resources
- Delivers structured reasoning with clearly formatted solutions
- Efficient text-only processing for reasoning tasks
- 32K token context window
## Technical Specifications
- Base Model: Google/Gemma3-1B
- Parameters: 1 billion
- Input: Text only
- Hardware Requirements: ~4GB VRAM
- Training Infrastructure: NVIDIA T4 GPU
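As a rough sanity check on the VRAM figure: 1 billion fp16 parameters occupy about 2 GB for the weights alone, with the KV cache and activations accounting for the remainder. A back-of-the-envelope sketch (illustrative arithmetic, not from the model card):

```python
# Rough VRAM estimate for the fp16 weights (illustrative only)
n_params = 1_000_000_000
bytes_per_param = 2  # float16
weights_gb = n_params * bytes_per_param / 1024**3
print(f"~{weights_gb:.1f} GB for weights alone")  # ~1.9 GB; KV cache and
# activations account for the rest of the ~4GB requirement
```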
## Usage
This model is ideal for applications requiring reasoning capabilities in resource-constrained environments:
- Mobile applications with Turkish reasoning capabilities
- Educational tools for deployment on standard consumer hardware
- Embedded systems requiring compact reasoning abilities
- Local inference on personal computers with limited GPU resources (see the quantized loading sketch after this list)
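For the most constrained setups, the LoRA adapter can also be attached to a 4-bit quantized base model. A minimal sketch, assuming `bitsandbytes` and `accelerate` are installed; the quantization settings below are illustrative choices, not part of the released model:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

# Illustrative 4-bit config: NF4 quantization with fp16 compute keeps the
# 1B base model's footprint well under the ~4GB VRAM noted above.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

base_model = AutoModelForCausalLM.from_pretrained(
    "unsloth/gemma-3-1b-it",
    quantization_config=bnb_config,
    device_map="auto",
)
model = PeftModel.from_pretrained(base_model, "Chan-Y/TurkishReasoner-Gemma3-1B")
tokenizer = AutoTokenizer.from_pretrained("unsloth/gemma-3-1b-it")
```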
## Example Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
from peft import PeftModel
import torch

# Load the Gemma3-1B base model in half precision, then attach the LoRA adapter
base_model = AutoModelForCausalLM.from_pretrained(
    "unsloth/gemma-3-1b-it", torch_dtype=torch.float16
)
model = PeftModel.from_pretrained(base_model, "Chan-Y/TurkishReasoner-Gemma3-1B").to("cuda")
tokenizer = AutoTokenizer.from_pretrained("unsloth/gemma-3-1b-it")

pipe = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    max_new_tokens=512,
    do_sample=True,
    temperature=0.7,
    top_p=0.95,
)

# The system prompt (in Turkish) tells the model: you are an assistant that
# answers in Turkish; think about the problem and show your work between
# <start_working_out> and <end_working_out>, then put the solution between
# <SOLUTION> and </SOLUTION>, and use ONLY Turkish.
messages = [
    {"role": "system", "content": """Sen kullanıcıların isteklerine Türkçe cevap veren bir asistansın ve sana bir problem verildi.
Problem hakkında düşün ve çalışmanı göster.
Çalışmanı <start_working_out> ve <end_working_out> arasına yerleştir.
Sonra, çözümünü <SOLUTION> ve </SOLUTION> arasına yerleştir.
Lütfen SADECE Türkçe kullan."""},
    {"role": "user", "content": "121'in karekökü kaçtır?"},  # "What is the square root of 121?"
]

response = pipe(messages, return_full_text=False)[0]["generated_text"]
print(response)
```
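Because the model wraps its chain of thought and final answer in the tags specified by the system prompt, the two parts can be separated with a simple regex. A minimal sketch; the helper name `split_reasoning` is ours, not part of the model:

```python
import re

def split_reasoning(text: str):
    """Split a generation into (reasoning, solution) using the prompt's tags.

    Returns None for any part whose tags are missing from the output.
    """
    working = re.search(r"<start_working_out>(.*?)<end_working_out>", text, re.DOTALL)
    solution = re.search(r"<SOLUTION>(.*?)</SOLUTION>", text, re.DOTALL)
    return (
        working.group(1).strip() if working else None,
        solution.group(1).strip() if solution else None,
    )

reasoning, solution = split_reasoning(response)
print("Reasoning:", reasoning)
print("Solution:", solution)  # e.g. "11" for the square-root example above
```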
## Contact
For more information or assistance with this model, please contact the developers:
- Cihan Yalçın: https://www.linkedin.com/in/chanyalcin/
- Şevval Nur Savcı: https://www.linkedin.com/in/%C5%9Fevval-nur-savc%C4%B1/