---
license: apache-2.0
base_model: Qwen/Qwen2.5-7B-Instruct
tags:
- qwen
- instruct
- bactgrow
- 7b
- fine-tuned
---

# pandoradox/qwen2.5-7b-instruct_bactgrow_150

This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) on the bactgrow dataset.

## Model Details

- **Base Model**: Qwen/Qwen2.5-7B-Instruct
- **Dataset**: bactgrow
- **Model Size**: 7B
- **Checkpoint**: 150
- **Training Method**: LoRA (Low-Rank Adaptation)
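
Because the model was trained with LoRA, the repository may ship either merged weights or only the adapter. If it holds just the adapter, it can be attached to the base model with the `peft` library; the snippet below is a minimal sketch under that assumption, not a confirmed loading path.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the original base model, then attach the LoRA adapter on top of it.
base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-7B-Instruct")
model = PeftModel.from_pretrained(base, "pandoradox/qwen2.5-7b-instruct_bactgrow_150")

# The base tokenizer is typically unchanged by LoRA fine-tuning.
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-7B-Instruct")
```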

## Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the fine-tuned model and its tokenizer from the Hub.
model = AutoModelForCausalLM.from_pretrained("pandoradox/qwen2.5-7b-instruct_bactgrow_150")
tokenizer = AutoTokenizer.from_pretrained("pandoradox/qwen2.5-7b-instruct_bactgrow_150")

# See the generation sketch below for a full inference example.
```
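
Since the base model is an instruct/chat model, prompts should go through the tokenizer's chat template. Continuing from the loading code above, the snippet below is an illustrative sketch; the prompt text and generation parameters (e.g. `max_new_tokens`) are placeholders, not values from the original training setup.

```python
messages = [
    {"role": "user", "content": "Your prompt here."},  # placeholder prompt
]

# Format the conversation with the model's chat template and append the
# assistant turn marker so the model knows to start generating.
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# Generate a response and strip the prompt tokens before decoding.
output_ids = model.generate(**inputs, max_new_tokens=256)
response = tokenizer.decode(
    output_ids[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True
)
print(response)
```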