
Llama 2-13b-alpaca-spanish LoRA

This is a LoRA for Llama 2 13B, trained on a translated Alpaca dataset in an attempt to improve the Spanish performance of the Llama-2 foundation model, with a conversational focus.

The base model used was TheBloke's Llama-2-13B-fp16, trained in 4-bit precision with an added padding token.
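
To run inference, the adapter can be loaded with transformers and peft on top of the base model. The sketch below is only an illustration: the bitsandbytes 4-bit loading, the `[PAD]` token choice, and the Alpaca prompt template are assumptions rather than details taken from this card.

```python
# Minimal loading sketch (not the author's exact code). Assumptions: the adapter
# repo ID, bitsandbytes 4-bit loading, and "[PAD]" as the added padding token.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

base_id = "TheBloke/Llama-2-13B-fp16"
adapter_id = "marianbasti/Llama-2-13b-fp16-alpaca-spanish"

# Load the base model in 4-bit, mirroring the precision the LoRA was trained in.
bnb_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.float16)
tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(
    base_id, quantization_config=bnb_config, device_map="auto"
)

# The base model was trained with an added padding token; add one here so the
# tokenizer and embedding matrix stay in sync ("[PAD]" is an assumption).
tokenizer.add_special_tokens({"pad_token": "[PAD]"})
model.resize_token_embeddings(len(tokenizer))

# Attach the Spanish Alpaca LoRA on top of the quantized base model.
model = PeftModel.from_pretrained(model, adapter_id)

# Alpaca-style prompt in Spanish (assumed template; adjust to the dataset's format).
prompt = (
    "A continuación hay una instrucción que describe una tarea. "
    "Escribe una respuesta que la complete adecuadamente.\n\n"
    "### Instrucción:\n¿Qué es un modelo de lenguaje?\n\n### Respuesta:\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```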

Training parameters
- LoRA scale: 2
- Epochs: 0.75
- Learning rate: 2e-5
- Warmup steps: 100
- Loss: 1.07
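
For context, these settings roughly map onto a peft/transformers configuration like the sketch below. Only the epochs, learning rate, warmup steps, and LoRA scale come from this card; the rank, target modules, and batch sizes are illustrative assumptions.

```python
# Illustrative training configuration; values marked "assumed" are not from this card.
from peft import LoraConfig
from transformers import TrainingArguments

lora_config = LoraConfig(
    r=16,                                  # assumed rank
    lora_alpha=32,                         # lora_alpha / r = 2, i.e. LoRA scale 2
    target_modules=["q_proj", "v_proj"],   # assumed
    lora_dropout=0.05,                     # assumed
    task_type="CAUSAL_LM",
)

training_args = TrainingArguments(
    output_dir="llama2-13b-alpaca-spanish-lora",
    num_train_epochs=0.75,                 # reported above
    learning_rate=2e-5,                    # reported above
    warmup_steps=100,                      # reported above
    per_device_train_batch_size=4,         # assumed
    gradient_accumulation_steps=4,         # assumed
    fp16=True,
    logging_steps=50,
)
```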
