---
library_name: peft
license: apache-2.0
pipeline_tag: text-generation
base_model: meta-llama/Llama-3.1-8B-Instruct
datasets:
- GAIR/LIMO
tags:
- llama-factory
- lora
- generated_from_trainer
- chat
- Llama-3
- instruct
- finetune
model-index:
- name: llama-3.1-8b-instruct-limo-lora
  results: []
---

# llama-3.1-8b-instruct-limo-lora

This model is a fine-tuned version of [meta-llama/Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct). The fine-tuning was performed with Low-Rank Adaptation (LoRA) on the [LIMO dataset](https://huggingface.co/datasets/GAIR/LIMO) to enhance the model's reasoning capabilities, following the paper [LIMO: Less is More for Reasoning](https://arxiv.org/pdf/2502.03387).

## Model description

- **Base Model**: [meta-llama/Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct)
- **Fine-Tuning Dataset**: [GAIR/LIMO](https://huggingface.co/datasets/GAIR/LIMO)
- **Fine-Tuning Method**: Low-Rank Adaptation (LoRA)
- **Library Used**: [peft](https://github.com/huggingface/peft)
- **License**: [Apache 2.0](LICENSE)

## Usage

To use this model for text generation, follow the steps below.

### Installation

Ensure you have the necessary libraries installed (`accelerate` is required for `device_map="auto"` in the example below):

```bash
pip install torch transformers peft accelerate
```

### Generating Text

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the base model
base_model_name = "meta-llama/Llama-3.1-8B-Instruct"
base_model = AutoModelForCausalLM.from_pretrained(base_model_name, torch_dtype="auto", device_map="auto")

# Load the tokenizer
tokenizer = AutoTokenizer.from_pretrained(base_model_name)

# Load the LoRA adapter
adapter_path = "t83714/llama-3.1-8b-instruct-limo-lora-adapter"
model = PeftModel.from_pretrained(base_model, adapter_path)

prompt = "How much is (2+5)x5/7"

# Tokenize the input
inputs = tokenizer(prompt, return_tensors="pt").to("cuda")

# Generate the output
output = model.generate(**inputs, max_length=8000)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```
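
Because the base model is an instruct (chat) model, you may get better results by formatting the prompt with the tokenizer's chat template instead of passing raw text. A minimal sketch, reusing the objects loaded above (the question is just the example prompt; adjust `max_new_tokens` to taste):

```python
# Optional: wrap the prompt in the Llama-3.1 chat template before generating.
messages = [{"role": "user", "content": "How much is (2+5)x5/7"}]
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(base_model.device)

output = model.generate(input_ids, max_new_tokens=2048)

# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```
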
### Merge the adapter and export merged model

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM

base_model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-3.1-8B-Instruct")

# Load the LoRA adapter
adapter_path = "t83714/llama-3.1-8b-instruct-limo-lora-adapter"
model = PeftModel.from_pretrained(base_model, adapter_path)

merged_model = model.merge_and_unload()
merged_model.save_pretrained("./merged-model/")
```
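
If you intend to load `./merged-model/` as a standalone checkpoint, it is convenient to also save the tokenizer into the same directory. This is an optional addition to the snippet above, not part of the original export step:

```python
from transformers import AutoTokenizer

# Save the tokenizer alongside the merged weights so the directory can be
# loaded directly with AutoModelForCausalLM / AutoTokenizer.
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-3.1-8B-Instruct")
tokenizer.save_pretrained("./merged-model/")
```
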
## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (an illustrative configuration sketch follows the list):

- learning_rate: 5e-06
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- num_epochs: 15
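
For reference, the settings above roughly map onto the following `peft`/`transformers` configuration. This is only a sketch: the actual run used LLaMA-Factory, and the LoRA-specific settings (rank, alpha, dropout, target modules) are not documented in this card, so the values shown for them are placeholders rather than the values actually used.

```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM, TrainingArguments

# LoRA settings are NOT documented in this card; the values below are placeholders.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)

base_model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-3.1-8B-Instruct", torch_dtype="auto"
)
model = get_peft_model(base_model, lora_config)

# Training arguments mirroring the hyperparameters listed above.
training_args = TrainingArguments(
    output_dir="./llama-3.1-8b-instruct-limo-lora",
    learning_rate=5e-6,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=8,
    seed=42,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="cosine",
    num_train_epochs=15,
)
```
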
### Framework versions

- PEFT 0.12.0
- Transformers 4.49.0
- Pytorch 2.6.0+cu124
- Datasets 3.3.2
- Tokenizers 0.21.0

## Acknowledgment

This model was trained based on the work of [Ye et al. (2025)](https://arxiv.org/abs/2502.03387). If you use this model, please also consider citing their paper:

```bibtex
@misc{ye2025limoreasoning,
      title={LIMO: Less is More for Reasoning},
      author={Yixin Ye and Zhen Huang and Yang Xiao and Ethan Chern and Shijie Xia and Pengfei Liu},
      year={2025},
      eprint={2502.03387},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2502.03387},
}
```