---
license: llama2
language:
  - nl
---

# LLaMA-2-NL: Fine-tuned using LoRA and the original tokenizer

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Use the original Llama 2 tokenizer
tokenizer = AutoTokenizer.from_pretrained('meta-llama/Llama-2-7b-hf')

# Load the Dutch fine-tuned model
model = AutoModelForCausalLM.from_pretrained('llama-2-nl/Llama-2-7b-hf-lora-original')
```
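
A minimal generation sketch, continuing from the snippet above. The Dutch prompt and the sampling parameters are illustrative assumptions, not values prescribed by this model card.

```python
# Continuing from the snippet above: generate a Dutch continuation.
# Prompt and sampling settings below are illustrative, not recommended defaults.
prompt = "Vandaag is een mooie dag om"
inputs = tokenizer(prompt, return_tensors="pt")

outputs = model.generate(
    **inputs,
    max_new_tokens=50,
    do_sample=True,
    top_p=0.9,
    temperature=0.7,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```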