# llama3.1-gutenberg-8B

[VAGOsolutions/Llama-3.1-SauerkrautLM-8b-Instruct](https://huggingface.co/VAGOsolutions/Llama-3.1-SauerkrautLM-8b-Instruct) finetuned on [jondurbin/gutenberg-dpo-v0.1](https://huggingface.co/datasets/jondurbin/gutenberg-dpo-v0.1).
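
A minimal usage sketch with 🤗 Transformers, assuming the standard Llama 3.1 Instruct chat template; the prompt and sampling settings below are illustrative, not part of the original card:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "nbeerbower/llama3.1-gutenberg-8B"

# Load the tokenizer and the BF16 weights (device_map="auto" requires accelerate).
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Llama 3.1 Instruct models use a chat template; the prompt is just an example.
messages = [
    {"role": "user", "content": "Write the opening paragraph of a gothic short story."}
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256, do_sample=True, temperature=0.8)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```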

## Method

Finetuned using 2x RTX 4060 for 3 epochs, following the approach described in *Fine-tune Llama 3 with ORPO*.
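
A rough sketch of that recipe using TRL's `ORPOTrainer`. The batch size, gradient accumulation, and learning rate here are assumptions; the card only states the hardware and the epoch count:

```python
import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import ORPOConfig, ORPOTrainer

base = "VAGOsolutions/Llama-3.1-SauerkrautLM-8b-Instruct"

tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base, torch_dtype=torch.bfloat16)

# gutenberg-dpo-v0.1 provides prompt / chosen / rejected columns,
# which is the preference format ORPOTrainer expects.
dataset = load_dataset("jondurbin/gutenberg-dpo-v0.1", split="train")

config = ORPOConfig(
    output_dir="llama3.1-gutenberg-8B",
    num_train_epochs=3,              # stated in the card
    per_device_train_batch_size=1,   # assumption: small batch for 2x RTX 4060
    gradient_accumulation_steps=8,   # assumption
    learning_rate=8e-6,              # assumption
    bf16=True,
)

trainer = ORPOTrainer(
    model=model,
    args=config,
    train_dataset=dataset,
    processing_class=tokenizer,      # older TRL versions use tokenizer= instead
)
trainer.train()
```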
