---
library_name: peft
license: apache-2.0
pipeline_tag: text-generation
base_model: meta-llama/Llama-3.1-8B-Instruct
datasets:
- GAIR/LIMO
tags:
- llama-factory
- lora
- generated_from_trainer
- chat
- Llama-3
- instruct
- finetune
model-index:
- name: llama-3.1-8b-instruct-limo-lora
  results: []
---


# llama-3.1-8b-instruct-limo-lora

This model is a fine-tuned version of [meta-llama/Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct). Fine-tuning was performed with Low-Rank Adaptation (LoRA) on the [LIMO dataset](https://huggingface.co/datasets/GAIR/LIMO) to improve the model's reasoning capabilities, following the approach described in the paper [LIMO: Less is More for Reasoning](https://arxiv.org/pdf/2502.03387).

## Model description

- **Base Model**: [meta-llama/Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct)
- **Fine-Tuning Dataset**: [GAIR/LIMO](https://huggingface.co/datasets/GAIR/LIMO)
- **Fine-Tuning Method**: Low-Rank Adaptation (LoRA)
- **Library Used**: [peft](https://github.com/huggingface/peft)
- **License**: [Apache 2.0](LICENSE)

## Usage

To use this model for text generation, follow the steps below:

### Installation

Ensure you have the necessary libraries installed:

```bash
pip install torch transformers peft
```

### Generating Text

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the base model
base_model_name = "meta-llama/Llama-3.1-8B-Instruct"
base_model = AutoModelForCausalLM.from_pretrained(base_model_name, torch_dtype="auto", device_map="auto")

# Load the tokenizer
tokenizer = AutoTokenizer.from_pretrained(base_model_name)

# Load the LoRA adapter
adapter_path = "t83714/llama-3.1-8b-instruct-limo-lora-adapter"
model = PeftModel.from_pretrained(base_model, adapter_path)

prompt = "How much is (2+5)x5/7"

# Tokenize the input
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)  # move inputs to wherever device_map placed the model

# Generate the output
output = model.generate(**inputs, max_length=8000)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```
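
Because the base model is instruction-tuned, you may get better results by formatting the prompt with the tokenizer's chat template rather than passing raw text. The snippet below is an optional variation, not part of the original example; `max_new_tokens=1024` is an arbitrary value chosen for illustration.

```python
# Optional: wrap the prompt with the chat template used during instruction tuning
messages = [{"role": "user", "content": "How much is (2+5)x5/7"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=1024)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```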

### Merge the adapter and export the merged model

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM

base_model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-3.1-8B-Instruct")

# Load the LoRA adapter
adapter_path = "t83714/llama-3.1-8b-instruct-limo-lora-adapter"
model = PeftModel.from_pretrained(base_model, adapter_path)

merged_model = model.merge_and_unload()
merged_model.save_pretrained("./merged-model/")
```
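
Once merged, the weights can be loaded as a regular Transformers model without `peft`. Saving the tokenizer alongside the merged weights, as sketched below, is an optional convenience step not shown on the original card.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Keep the tokenizer next to the merged weights so the folder is self-contained
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-3.1-8B-Instruct")
tokenizer.save_pretrained("./merged-model/")

# Load the merged model directly; no PEFT adapter is needed anymore
merged = AutoModelForCausalLM.from_pretrained(
    "./merged-model/", torch_dtype="auto", device_map="auto"
)
```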

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (an illustrative setup sketch follows the list):
- learning_rate: 5e-06
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- num_epochs: 15
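
For reference, the sketch below shows how a comparable LoRA run could be set up with `peft` and `transformers`. The actual training was done with LLaMA-Factory, and the LoRA rank, alpha, dropout, target modules, maximum sequence length, and the LIMO column names (`question`, `solution`) are assumptions made for illustration, not values reported on this card.

```python
# Illustrative sketch only: the real run used LLaMA-Factory.
# LoRA rank/alpha/dropout, target modules, max_length, and the LIMO column
# names ("question", "solution") are assumptions, not values from this card.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

base_model_name = "meta-llama/Llama-3.1-8B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(base_model_name)
tokenizer.pad_token = tokenizer.eos_token  # Llama tokenizers ship without a pad token
model = AutoModelForCausalLM.from_pretrained(base_model_name, torch_dtype="auto")

# Attach LoRA adapters (these LoRA hyperparameters are assumed)
model = get_peft_model(
    model,
    LoraConfig(
        r=16,
        lora_alpha=32,
        lora_dropout=0.05,
        target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
        task_type="CAUSAL_LM",
    ),
)

# Render each LIMO example with the chat template and tokenize it
dataset = load_dataset("GAIR/LIMO", split="train")

def tokenize_example(example):
    text = tokenizer.apply_chat_template(
        [
            {"role": "user", "content": example["question"]},
            {"role": "assistant", "content": example["solution"]},
        ],
        tokenize=False,
    )
    return tokenizer(text, truncation=True, max_length=4096)

tokenized = dataset.map(tokenize_example, remove_columns=dataset.column_names)

# These arguments mirror the hyperparameters reported above
args = TrainingArguments(
    output_dir="./llama-3.1-8b-instruct-limo-lora",
    learning_rate=5e-6,
    per_device_train_batch_size=1,
    num_train_epochs=15,
    lr_scheduler_type="cosine",
    optim="adamw_torch",
    seed=42,
    bf16=True,
)

Trainer(
    model=model,
    args=args,
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
).train()

model.save_pretrained("./llama-3.1-8b-instruct-limo-lora-adapter")
```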

### Framework versions

- PEFT 0.12.0
- Transformers 4.49.0
- Pytorch 2.6.0+cu124
- Datasets 3.3.2
- Tokenizers 0.21.0


## Acknowledgment

This model was trained building on the work of [Ye et al. (2025)](https://arxiv.org/abs/2502.03387). If you use this model, please also consider citing their paper:

```bibtex
@misc{ye2025limoreasoning,
      title={LIMO: Less is More for Reasoning}, 
      author={Yixin Ye and Zhen Huang and Yang Xiao and Ethan Chern and Shijie Xia and Pengfei Liu},
      year={2025},
      eprint={2502.03387},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2502.03387}, 
}
```