Yi-1.5-34B-32K fine-tuned via SFT on adamo1139/uninstruct-v1-experimental-chatml, then trained via ORPO on adamo1139/rawrr_v2-2_stage1. It is an attempt to fix the synthetic SFT contamination of the original Yi-1.5-34B-32K; a sketch of the ORPO stage is below.
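
For reference, a minimal sketch of what the ORPO stage could look like with TRL's `ORPOTrainer`. The hyperparameters (beta, learning rate, batch size, sequence lengths) and the SFT checkpoint path are assumptions for illustration, not the settings actually used, and the dataset is assumed to be in the standard prompt/chosen/rejected preference format.

```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import ORPOConfig, ORPOTrainer

# Starting point is the SFT-tuned intermediate model; this path is hypothetical.
sft_checkpoint = "path/to/yi-1.5-34b-32k-sft"
model = AutoModelForCausalLM.from_pretrained(sft_checkpoint)
tokenizer = AutoTokenizer.from_pretrained(sft_checkpoint)

# Preference dataset; assumed to provide prompt/chosen/rejected columns.
dataset = load_dataset("adamo1139/rawrr_v2-2_stage1", split="train")

# All hyperparameters below are assumed values, not the actual training config.
config = ORPOConfig(
    output_dir="yi-1.5-34b-32k-orpo",
    beta=0.1,
    learning_rate=5e-6,
    per_device_train_batch_size=1,
    gradient_accumulation_steps=16,
    max_length=2048,
    max_prompt_length=1024,
)

trainer = ORPOTrainer(
    model=model,
    args=config,
    train_dataset=dataset,
    tokenizer=tokenizer,
)
trainer.train()
```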

Next up:

- Cleaning and releasing the AEZAKMI v4 dataset.
- Training this model on it, possibly adding some toxic-dpo-natural if needed.
- Releasing it.
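
A minimal inference sketch, assuming the tokenizer ships with the ChatML chat template used during SFT; the prompt and sampling parameters are illustrative only.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "adamo1139/Yi-1.5-34B-32K-rebased-1406"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

messages = [{"role": "user", "content": "Explain what ORPO training does."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```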
