---
language:
- nl
license: llama2
---

## LLaMA-2-NL: Fine-tuned using LoRA and the original tokenizer

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Use the original Llama 2 tokenizer
tokenizer = AutoTokenizer.from_pretrained('meta-llama/Llama-2-7b-hf')

# Load the LoRA fine-tuned Dutch (nl) model
model = AutoModelForCausalLM.from_pretrained('llama-2-nl/Llama-2-7b-hf-lora-original')
```
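
Once loaded, the model can be used like any other causal language model in `transformers`. A minimal generation sketch is shown below; the Dutch prompt is only an illustrative example, not from the model card.

```python
# Illustrative Dutch prompt (hypothetical example)
prompt = "De hoofdstad van Nederland is"
inputs = tokenizer(prompt, return_tensors="pt")

# Generate a short continuation with the fine-tuned model
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```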