internlm2.5_7b_distill_orpo
Architecture

Base model
Datasets used for training
A preference-optimization dataset, PKU-SafeRLHF-orpo-72k, was created from PKU-SafeRLHF-single-dimension for ORPO training.
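The conversion into an ORPO-ready preference dataset could look roughly like the sketch below: each record is mapped to the (prompt, chosen, rejected) triple that preference-optimization trainers expect. The field names (`prompt`, `response_0`, `response_1`, `better_response_id`) follow the published PKU-SafeRLHF schema; the exact filtering used to arrive at the 72k split is not documented here and is an assumption.

```python
def to_orpo_example(record):
    """Map one PKU-SafeRLHF-style record to the (prompt, chosen, rejected)
    triple used by preference-optimization trainers such as ORPO.

    Assumes the PKU-SafeRLHF field layout: two candidate responses plus a
    `better_response_id` label indicating which one annotators preferred.
    """
    better = record["better_response_id"]  # 0 or 1
    return {
        "prompt": record["prompt"],
        "chosen": record[f"response_{better}"],
        "rejected": record[f"response_{1 - better}"],
    }

# Toy record in the assumed schema, for illustration only
example = {
    "prompt": "How do I stay safe online?",
    "response_0": "Use strong, unique passwords and enable 2FA.",
    "response_1": "Just reuse one password everywhere.",
    "better_response_id": 0,
}
converted = to_orpo_example(example)
print(converted["chosen"])
```

The resulting triples can then be fed to an ORPO trainer together with the base model.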
Download model
git lfs install
git clone https://huggingface.co/juneup/internlm2.5_7b_distill_orpo
If you want to clone without downloading the large files (just their pointers):
GIT_LFS_SKIP_SMUDGE=1 git clone https://huggingface.co/juneup/internlm2.5_7b_distill_orpo
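When generating with the cloned checkpoint directly (rather than via a chat template), InternLM2-family models expect the `<|im_start|>`/`<|im_end|>` chat markup. A minimal prompt builder is sketched below; the template is the standard InternLM2 format and it is an assumption that this ORPO fine-tune keeps it unchanged.

```python
def build_internlm2_prompt(messages):
    """Render a list of {"role", "content"} messages into the InternLM2
    chat format (assumed unchanged by this ORPO fine-tune)."""
    parts = []
    for m in messages:
        parts.append(f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n")
    # Open an assistant turn so the model continues as the assistant
    parts.append("<|im_start|>assistant\n")
    return "".join(parts)

prompt = build_internlm2_prompt([{"role": "user", "content": "Hi"}])
print(prompt)
```

With `transformers`, the same markup is normally produced for you by the tokenizer's `apply_chat_template`, so this builder is only needed for raw-text pipelines.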
Download at Ollama
ollama run Juneup/internlm2.5_7b_distill:orpo_q4_k_m
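Once `ollama run` has pulled the quantized model, it can also be queried over Ollama's local HTTP API (`POST /api/generate`, served on port 11434 by default). A sketch of building such a request with only the standard library; sending it of course requires a running Ollama instance:

```python
import json
import urllib.request

def make_generate_request(prompt,
                          model="Juneup/internlm2.5_7b_distill:orpo_q4_k_m",
                          host="http://localhost:11434"):
    """Build (but do not send) a request against Ollama's /api/generate
    endpoint; stream=False asks for a single JSON response instead of a
    stream of partial chunks."""
    payload = {"model": model, "prompt": prompt, "stream": False}
    return urllib.request.Request(
        f"{host}/api/generate",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

req = make_generate_request("Hello!")
print(req.full_url)
# To actually send it: urllib.request.urlopen(req) with Ollama running
```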