## Model Information

This project uses `vinhnx90/gemma3-1b-thinking`, a PEFT (Parameter-Efficient Fine-Tuning) adapter for `google/gemma-3-1b-it`. Unlike a full model, this is a lightweight set of adapter weights that is loaded on top of the base model, which makes it easy to distribute and to run on limited resources.

The model was trained using TRL with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://arxiv.org/abs/2402.03300).
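Because only the adapter weights live in this repository, you can confirm which base model it targets without downloading any model weights. A minimal sketch using `peft`'s config loader:

```python
from peft import PeftConfig

# Fetch only the adapter's configuration file (a few KB), not the weights.
config = PeftConfig.from_pretrained("vinhnx90/gemma3-1b-thinking")

print(config.base_model_name_or_path)  # -> "google/gemma-3-1b-it"
print(config.peft_type)                # -> LORA
```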
## Training Approach
This adapter was fine-tuned with Reinforcement Learning to enhance reasoning capabilities:
- Used reasoning chains from OpenAI's GSM8K dataset
- Implemented GRPO reward functions (sketched below)
- Based on Will Brown's approach
- Training implementation from Ben Burtenshaw's Colab
The adapter is available on Hugging Face: [vinhnx90/gemma3-1b-thinking](https://huggingface.co/vinhnx90/gemma3-1b-thinking)
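The exact reward functions used for this adapter are not published here, but GSM8K/GRPO recipes in this style typically combine a format check with an answer-correctness check. A minimal sketch in the shape TRL's `GRPOTrainer` expects (the `<reasoning>`/`<answer>` tags, the reward weights, and the `answer` dataset column are assumptions, not this adapter's actual configuration):

```python
import re

def format_reward(completions, **kwargs):
    """Reward completions that wrap their work in <reasoning>/<answer> tags."""
    pattern = r"<reasoning>.*?</reasoning>\s*<answer>.*?</answer>"
    texts = [completion[0]["content"] for completion in completions]
    return [0.5 if re.search(pattern, text, re.DOTALL) else 0.0 for text in texts]

def correctness_reward(completions, answer, **kwargs):
    """Reward completions whose extracted <answer> matches the reference answer."""
    texts = [completion[0]["content"] for completion in completions]
    rewards = []
    for text, reference in zip(texts, answer):
        match = re.search(r"<answer>(.*?)</answer>", text, re.DOTALL)
        extracted = match.group(1).strip() if match else ""
        rewards.append(2.0 if extracted == reference else 0.0)
    return rewards
```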
## Training Details

- Base Model: google/gemma-3-1b-it
- Library: transformers
- Training Method: GRPO (from the DeepSeekMath paper)
- PEFT Method: LoRA (Low-Rank Adaptation)
- Framework Versions:
  - TRL: 0.16.0.dev0
  - Transformers: 4.50.0.dev0
  - PEFT: 0.9.0
  - PyTorch: 2.5.1+cu124
  - Datasets: 3.3.2
  - Tokenizers: 0.21.0
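Putting those details together, a training run along these lines can be sketched with TRL's `GRPOTrainer` and a LoRA config, reusing the reward functions sketched in the Training Approach section. The hyperparameters below are illustrative placeholders, not the values used to produce this adapter, and the run also needs `trl` and `datasets` installed:

```python
from datasets import load_dataset
from peft import LoraConfig
from trl import GRPOConfig, GRPOTrainer

def to_prompt(example):
    # GRPOTrainer expects a "prompt" column; keep the final GSM8K answer around
    # so the reward functions receive it as the `answer` keyword argument.
    return {
        "prompt": [{"role": "user", "content": example["question"]}],
        "answer": example["answer"].split("####")[-1].strip(),
    }

dataset = load_dataset("openai/gsm8k", "main", split="train").map(to_prompt)

peft_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)

training_args = GRPOConfig(
    output_dir="gemma3-1b-thinking",
    num_generations=8,           # completions sampled per prompt for the group baseline
    max_completion_length=256,
    learning_rate=5e-6,
)

trainer = GRPOTrainer(
    model="google/gemma-3-1b-it",
    reward_funcs=[format_reward, correctness_reward],  # the reward sketches above
    args=training_args,
    train_dataset=dataset,
    peft_config=peft_config,
)
trainer.train()
```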
## Requirements

- torch
- transformers
- peft
## Installation

1. Clone this repository or download the script.
2. Install the required packages:

```bash
pip install torch transformers peft
```
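Gemma 3 support landed in recent `transformers` releases (the versions this adapter was trained with are listed under Training Details), so it can be worth confirming what ended up in your environment:

```python
import peft
import torch
import transformers

# Quick sanity check against the framework versions listed above.
print("torch:", torch.__version__)
print("transformers:", transformers.__version__)
print("peft:", peft.__version__)
```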
## Usage

### Running with PEFT Adapter

Since this is a PEFT adapter, you need to load both the base model and the adapter:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel, PeftConfig

# Load the base model and tokenizer
base_model_id = "google/gemma-3-1b-it"
tokenizer = AutoTokenizer.from_pretrained(base_model_id)
model = AutoModelForCausalLM.from_pretrained(
    base_model_id,
    device_map="auto",   # Automatically determine the device
    torch_dtype="auto",  # Use the appropriate precision
)

# Load the PEFT adapter
adapter_model_id = "vinhnx90/gemma3-1b-thinking"
model = PeftModel.from_pretrained(model, adapter_model_id)

# Generate text
prompt = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
inputs = tokenizer.encode(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(
    inputs,
    max_new_tokens=128,
    do_sample=True,
    temperature=0.7,
    top_p=0.9,
)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)
```
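If you plan to serve the model without `peft` at inference time, the LoRA weights can optionally be merged into the base model after loading (continuing from the script above):

```python
# Optional: fold the LoRA weights into the base model so it can be used and
# saved as a standalone checkpoint (no peft dependency at inference time).
merged_model = model.merge_and_unload()
merged_model.save_pretrained("gemma3-1b-thinking-merged")
tokenizer.save_pretrained("gemma3-1b-thinking-merged")
```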
### Chat Format Example

For chat-formatted inputs:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the base model and tokenizer
base_model_id = "google/gemma-3-1b-it"
tokenizer = AutoTokenizer.from_pretrained(base_model_id)
model = AutoModelForCausalLM.from_pretrained(
    base_model_id,
    device_map="auto",
    torch_dtype="auto",
)

# Load the PEFT adapter
adapter_model_id = "vinhnx90/gemma3-1b-thinking"
model = PeftModel.from_pretrained(model, adapter_model_id)

# Prepare chat messages
messages = [
    {"role": "user", "content": "Calculate the area of a circle with radius 5cm"}
]

# Format messages for the model
prompt = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
)

# Generate a response (the chat template already adds the special tokens,
# so don't add them a second time when encoding)
inputs = tokenizer.encode(prompt, return_tensors="pt", add_special_tokens=False).to(model.device)
outputs = model.generate(
    inputs,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.7,
)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)
```
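Note that decoding `outputs[0]` returns the whole sequence, chat template included. To print only the model's reply, slice off the prompt tokens before decoding:

```python
# Keep only the tokens generated after the prompt.
new_tokens = outputs[0][inputs.shape[-1]:]
print(tokenizer.decode(new_tokens, skip_special_tokens=True))
```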
### Using the Pipeline API

For a simpler approach, pass the adapter repo directly to the pipeline; with `peft` installed, transformers reads the adapter config and downloads the base model alongside the adapter:
```python
from transformers import pipeline

# Initialize the pipeline with the adapter model
generator = pipeline(
    "text-generation",
    model="vinhnx90/gemma3-1b-thinking",
    model_kwargs={"device_map": "auto", "torch_dtype": "auto"},
)

# Generate text
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
output = generator(
    [{"role": "user", "content": question}],
    max_new_tokens=128,
    do_sample=True,
    temperature=0.7,
    return_full_text=False,
)[0]
print(output["generated_text"])
```
## Available Command-Line Arguments

If you use the command-line script, the following arguments are available:

| Argument | Description | Default |
|---|---|---|
| `--prompt` | Input text for generation | "If you had a time machine..." |
| `--base-model` | Hugging Face base model name | "google/gemma-3-1b-it" |
| `--adapter` | Hugging Face adapter model name | "vinhnx90/gemma3-1b-thinking" |
| `--device` | Computing device (`cpu`, `cuda`, `mps`, or `auto`) | "auto" |
| `--max-tokens` | Maximum number of new tokens to generate | 128 |
| `--temperature` | Sampling temperature | 0.7 |
| `--top-p` | Top-p sampling parameter | 0.9 |
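A typical invocation might look like the following; `generate.py` is a placeholder, substitute the actual script name from this repository:

```bash
python generate.py \
  --prompt "Calculate the area of a circle with radius 5cm" \
  --adapter vinhnx90/gemma3-1b-thinking \
  --device auto \
  --max-tokens 256 \
  --temperature 0.7
```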
## Citations

### Implementation References

- Will Brown's approach: GitHub Gist
- Ben Burtenshaw's implementation: Twitter/X post

### GRPO
```bibtex
@article{zhihong2024deepseekmath,
  title  = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
  author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
  year   = {2024},
  eprint = {arXiv:2402.03300},
}
```
### TRL

```bibtex
@misc{vonwerra2022trl,
  title        = {{TRL: Transformer Reinforcement Learning}},
  author       = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
  year         = {2020},
  journal      = {GitHub repository},
  publisher    = {GitHub},
  howpublished = {\url{https://github.com/huggingface/trl}}
}
```
### PEFT

```bibtex
@misc{peft,
  title        = {{PEFT: Parameter-Efficient Fine-Tuning of Billion-Scale Models on Low-Resource Hardware}},
  author       = {Younes Belkada and Thomas Wang and Yasmine Manar and Ajay Brahmakshatriya and Huu Nguyen and Yongwei Zhou and Soumya Batra and Neil Band and Romi Ponciano and Suraj Patil and Colin Raffel and Siddhartha Kamalakara and Enrico Shippole and Vesselin Popov and Lewis Tunstall and Brian Mugo and Patrick von Platen and Clémentine Fourrier and Surya Dantuluri and Luke Vilnis and Adam P. Saxton},
  year         = {2023},
  journal      = {GitHub repository},
  publisher    = {GitHub},
  howpublished = {\url{https://github.com/huggingface/peft}}
}
```
## License

This project is licensed under the same license as the base model.

## Contributing

Pull requests are welcome. For major changes, please open an issue first to discuss what you would like to change.