# 🏠 Gemma Smart Lamp Assistant (French)

A complete AI model for controlling connected lamps with French-language commands.

## 🚀 Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

# Load the model
model_name = "TomSft15/gemma-3-smart-lamp-assistant-fr"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.float32,  # for CPU / Raspberry Pi
    device_map="auto",          # for GPU
)

# Control the lamp
def control_lamp(instruction):
    prompt = f"<bos><start_of_turn>user\n{instruction}<end_of_turn>\n<start_of_turn>model\n"
    # The prompt already contains <bos>, so skip the tokenizer's own special tokens
    inputs = tokenizer(prompt, return_tensors="pt", add_special_tokens=False).to(model.device)
    with torch.no_grad():
        outputs = model.generate(
            **inputs,
            max_new_tokens=32,
            do_sample=True,   # required for temperature to take effect
            temperature=0.1,
        )
    response = tokenizer.decode(outputs[0], skip_special_tokens=False)

    # Extract the assistant's turn from the decoded output
    start_marker = "<start_of_turn>model\n"
    end_marker = "<end_of_turn>"
    start_idx = response.find(start_marker)
    if start_idx != -1:
        start_idx += len(start_marker)
        end_idx = response.find(end_marker, start_idx)
        if end_idx != -1:
            return response[start_idx:end_idx].strip()
    return response

# Examples (the model answers in French)
print(control_lamp("Allume la lampe"))  # "J'ai allumé la lampe."
print(control_lamp("Couleur rouge"))    # "La lampe est maintenant rouge."
print(control_lamp("Baisse à 50%"))     # "La luminosité est à 50%."
```
## 📊 Performance

- Model: Gemma 2 2B + LoRA fine-tuning
- Accuracy: >90% on basic commands
- Compatibility: CPU and GPU
- Size: ~1.8 GB (full model)
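Since the full-precision checkpoint weighs roughly 1.8 GB, loading it in 4-bit can cut GPU memory use substantially. A minimal sketch, assuming a CUDA machine with bitsandbytes installed (this does not apply to a Raspberry Pi, which lacks CUDA):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
import torch

model_name = "TomSft15/gemma-3-smart-lamp-assistant-fr"

# 4-bit NF4 quantization to shrink the GPU memory footprint
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    quantization_config=bnb_config,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```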
## 🎯 Supported commands

- Power on/off: "Allume", "Éteins", "On", "Off"
- Colors: "Rouge", "Bleu", "Vert", "Jaune", "Blanc"
- Brightness: "Plus fort", "Baisse", "50%", "Maximum"
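To actually drive hardware, a recognized command still has to be mapped to a device action. A minimal keyword-based sketch, assuming hypothetical `set_power` / `set_color` / `set_brightness` functions that are not part of this repository (replace them with your own driver, e.g. GPIO, Zigbee, or MQTT):

```python
import re

# Hypothetical lamp API; swap in your real hardware driver
def set_power(on: bool): print("power:", on)
def set_color(color: str): print("color:", color)
def set_brightness(percent: int): print("brightness:", percent)

COLORS = {"rouge": "red", "bleu": "blue", "vert": "green",
          "jaune": "yellow", "blanc": "white"}

def dispatch(command: str):
    """Map a French lamp command to a device action (keyword heuristic)."""
    text = command.lower()
    if "allume" in text or text.strip() == "on":
        set_power(True)
    elif "éteins" in text or text.strip() == "off":
        set_power(False)
    elif (match := re.search(r"(\d{1,3})\s*%", text)):
        set_brightness(int(match.group(1)))
    else:
        for fr, en in COLORS.items():
            if fr in text:
                set_color(en)
                break

dispatch("Allume la lampe")  # power: True
dispatch("Baisse à 50%")     # brightness: 50
```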
✅ Full merged model, compatible with AutoModelForCausalLM
## Framework versions
- PEFT 0.15.2
## Base model

- unsloth/gemma-2-2b-it-bnb-4bit