---
language:
  - en
license: apache-2.0
tags:
  - text-generation
  - instruct
  - manim
  - lora
  - gguf
datasets:
  - ArunKr/verified-data-manim
base_model: HuggingFaceTB/SmolLM-135M-Instruct
library_name: transformers
pipeline_tag: text-generation
---

# gemma-3-270m-it-web-agent - Fine-tuned

This repository contains three variants of the fine-tuned model.

## Training

- Base model: unsloth/gemma-3-270m-it
- Dataset: ArunKr/gui_grounding_dataset-100
- Method: LoRA fine-tuning with Unsloth

## Quantizations

We provide `f16`, `bf16`, `f32`, and `q8_0` GGUF files for llama.cpp / Ollama.
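To help pick a file, the quantizations differ mainly in bytes stored per weight. A rough back-of-the-envelope size sketch, assuming the 270M-parameter count implied by the repo title (actual files also include metadata and some tensors kept at higher precision):

```python
# Rough GGUF file-size estimates per quantization (sketch, not exact).
PARAMS = 270_000_000  # assumed parameter count, per the repo title

BYTES_PER_WEIGHT = {
    "f32": 4.0,
    "f16": 2.0,
    "bf16": 2.0,
    "q8_0": 34 / 32,  # blocks of 32 int8 weights plus one f16 scale each
}

def approx_size_mb(quant: str) -> float:
    """Approximate on-disk size in megabytes for a given quantization."""
    return PARAMS * BYTES_PER_WEIGHT[quant] / 1e6

for q, _ in BYTES_PER_WEIGHT.items():
    print(f"{q}: ~{approx_size_mb(q):.0f} MB")
```

In short: `q8_0` is roughly half the size of `f16`/`bf16` with minimal quality loss, while `f32` is mainly useful as a lossless reference.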

## Usage Example

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("ArunKr/gemma-3-270m-it-web-agent-16bit")
model = AutoModelForCausalLM.from_pretrained("ArunKr/gemma-3-270m-it-web-agent-16bit")

# Tokenize the prompt, generate, and decode back to text
inputs = tok("Hello", return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=64)
print(tok.decode(output_ids[0], skip_special_tokens=True))
```
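Since this is an instruct-tuned Gemma checkpoint, prompts generally work better wrapped in the model's chat-turn markers. A minimal sketch of building such a prompt by hand, assuming the standard Gemma turn tokens (in practice, `tok.apply_chat_template(...)` is the safer route):

```python
# Sketch: wrap a single user turn in Gemma-style chat markers.
# Prefer tokenizer.apply_chat_template in real code; this only
# illustrates the prompt structure the template produces.

def gemma_prompt(user_message: str) -> str:
    """Return a one-turn prompt in Gemma's chat format."""
    return (
        "<start_of_turn>user\n"
        f"{user_message}<end_of_turn>\n"
        "<start_of_turn>model\n"
    )

print(gemma_prompt("Hello"))
```

The resulting string can be passed to `tok(...)` in place of the bare `"Hello"` above.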

## Ollama Example

```shell
ollama run ArunKr/SmolLM-135M-Instruct-manim-gguf:<file_name>.gguf
```
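If you have downloaded one of the GGUF files locally, Ollama can also load it through a Modelfile; a minimal sketch (the file name below is hypothetical — substitute the quantization you downloaded):

```
# Modelfile — point Ollama at a local GGUF (hypothetical file name)
FROM ./SmolLM-135M-Instruct-manim.q8_0.gguf
```

Then build and run a local model from it with `ollama create manim-local -f Modelfile` followed by `ollama run manim-local`.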

[www.ollama.com](https://www.ollama.com)