---
base_model: Qwen/Qwen2.5-32B-Instruct
library_name: peft
license: apache-2.0
datasets:
- GAIR/LIMO
pipeline_tag: text-generation
---
# qwen2.5-32b-instruct-limo-lora-adapter
This model is a fine-tuned version of [Qwen/Qwen2.5-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-32B-Instruct). The fine-tuning was performed with Low-Rank Adaptation (LoRA) on the [LIMO dataset](https://huggingface.co/datasets/GAIR/LIMO) to enhance the model's reasoning capabilities, following the paper [LIMO: Less is More for Reasoning](https://arxiv.org/pdf/2502.03387).
## Model description
- **Base Model**: [Qwen/Qwen2.5-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-32B-Instruct)
- **Fine-Tuning Dataset**: [GAIR/LIMO](https://huggingface.co/datasets/GAIR/LIMO)
- **Fine-Tuning Method**: Low-Rank Adaptation (LoRA)
- **Library Used**: [peft](https://github.com/huggingface/peft)
- **License**: [Apache 2.0](LICENSE)
## Usage
To use this model for text generation, follow the steps below:
### Installation
Ensure you have the necessary libraries installed:
```bash
pip install torch transformers peft accelerate
```
### Generating Text
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel
# Load the base model
base_model_name = "Qwen/Qwen2.5-32B-Instruct"
base_model = AutoModelForCausalLM.from_pretrained(base_model_name, torch_dtype="auto", device_map="auto")
# Load the tokenizer
tokenizer = AutoTokenizer.from_pretrained(base_model_name)
# Load the LoRA adapter
adapter_path = "t83714/qwen2.5-32b-instruct-limo-lora-adapter"
model = PeftModel.from_pretrained(base_model, adapter_path)
prompt = "How much is (2+5)x5/7"
# Tokenize the input
inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
# Generate the output
output = model.generate(**inputs, max_length=8000)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```
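Because the base model is an instruction-tuned chat model, you may get better results by formatting the prompt with the tokenizer's chat template before generating. A minimal sketch, reusing the `model` and `tokenizer` objects from above (the `max_new_tokens` value is an arbitrary choice, not a recommendation from this card):
```python
# Format the prompt as a single-turn chat conversation
messages = [{"role": "user", "content": "How much is (2+5)x5/7"}]
chat_inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,  # append the assistant turn marker
    return_tensors="pt",
).to(model.device)

# Generate, then decode only the newly produced tokens
chat_output = model.generate(chat_inputs, max_new_tokens=4096)
print(tokenizer.decode(chat_output[0][chat_inputs.shape[-1]:], skip_special_tokens=True))
```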
### Merge the adapter and export merged model
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM
base_model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-32B-Instruct")
# Load the LoRA adapter
adapter_path = "t83714/qwen2.5-32b-instruct-limo-lora-adapter"
model = PeftModel.from_pretrained(base_model, adapter_path)
merged_model = model.merge_and_unload()
merged_model.save_pretrained("./merged-model/")
```
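The exported directory does not include a tokenizer, so saving the base tokenizer alongside the merged weights makes the checkpoint self-contained; it can then be loaded like any regular `transformers` model. A minimal sketch, using the example `./merged-model/` directory from above:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Save the base tokenizer next to the merged weights so the folder is self-contained
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-32B-Instruct")
tokenizer.save_pretrained("./merged-model/")

# The merged checkpoint now loads without peft
reloaded = AutoModelForCausalLM.from_pretrained("./merged-model/", torch_dtype="auto", device_map="auto")
```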
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
- generation_max_length: 16384
- lr_scheduler_type: cosine
- num_epochs: 15
- lora_rank: 8
- lora_target_modules:
  - v_proj
  - o_proj
  - q_proj
  - k_proj
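If you want to reproduce the adapter configuration with peft, the settings above roughly correspond to the following `LoraConfig`. Note that `lora_alpha` and `lora_dropout` are not reported in this card, so the values below are placeholders:
```python
from peft import LoraConfig

lora_config = LoraConfig(
    r=8,  # lora_rank listed above
    target_modules=["v_proj", "o_proj", "q_proj", "k_proj"],  # attention projections only
    lora_alpha=16,     # assumption: not reported in this card
    lora_dropout=0.0,  # assumption: not reported in this card
    task_type="CAUSAL_LM",
)
```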
## Evaluation results
[Math 500](https://github.com/GAIR-NLP/LIMO/blob/main/eval/data/math/test.jsonl) pass@1: 85%
## Acknowledgment
This model was trained based on the work of [Ye et al. (2025)](https://arxiv.org/abs/2502.03387). If you use this model, please also consider citing their paper:
```bibtex
@misc{ye2025limoreasoning,
  title={LIMO: Less is More for Reasoning},
  author={Yixin Ye and Zhen Huang and Yang Xiao and Ethan Chern and Shijie Xia and Pengfei Liu},
  year={2025},
  eprint={2502.03387},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  url={https://arxiv.org/abs/2502.03387},
}
```