
This is a model released with the preprint *SimPO: Simple Preference Optimization with a Reference-Free Reward*. Please refer to our repository for more details.
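For orientation, the preprint's title describes a reference-free preference objective. The following is a minimal sketch of that idea, assuming the reward is the length-normalized log-likelihood of a response scaled by `beta`, with a target margin `gamma`; the function name and signature are illustrative, not the repository's actual API.

```python
import math

def simpo_loss(logp_chosen: float, len_chosen: int,
               logp_rejected: float, len_rejected: int,
               beta: float = 2.0, gamma: float = 1.0) -> float:
    """Hypothetical sketch of a reference-free preference loss.

    The implicit reward for a response is its average per-token
    log-probability scaled by beta; the loss is a logistic
    (Bradley-Terry style) penalty on the chosen-vs-rejected reward
    gap minus a target margin gamma. No reference model is used.
    """
    r_chosen = beta * logp_chosen / len_chosen
    r_rejected = beta * logp_rejected / len_rejected
    margin = r_chosen - r_rejected - gamma
    # -log(sigmoid(margin)), written stably via softplus
    return math.log1p(math.exp(-margin)) if margin > 0 else \
        -margin + math.log1p(math.exp(margin))
```

A pair where the chosen response has a higher average log-probability yields a smaller loss, and the `gamma` margin keeps pushing the gap apart even once the chosen response is already preferred.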

Model size: 8.03B params
Tensor type: BF16
Format: Safetensors
