Quantization
Quantized using the default exllamav2 (0.2.9) quantization process.
Original model: https://huggingface.co/nbeerbower/Dumpling-Qwen2.5-32B-v2
exllamav2: https://github.com/turboderp-org/exllamav2
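The 4.25bpw-h8 suffix in the repository name follows the usual EXL2 convention: a target of roughly 4.25 bits per weight with 8-bit head (output) layers. Below is a minimal loading sketch using the exllamav2 Python API; the local model path, prompt, and generation length are placeholders, and the dynamic generator may need additional dependencies such as flash-attn (see the exllamav2 repository for details).

```python
# Minimal sketch: load this EXL2 quant with the exllamav2 Python API (0.2.x).
# The model directory, prompt, and token budget below are placeholders.
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2DynamicGenerator

model_dir = "/path/to/Dumpling-Qwen2.5-32B-v2-4.25bpw-h8-exl2"  # local download

config = ExLlamaV2Config(model_dir)
model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)   # allocate KV cache as layers load
model.load_autosplit(cache)                # split weights across available GPUs
tokenizer = ExLlamaV2Tokenizer(config)

generator = ExLlamaV2DynamicGenerator(model=model, cache=cache, tokenizer=tokenizer)
print(generator.generate(prompt="Write a short scene set in a rainy harbour town.",
                         max_new_tokens=200))
```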
Original model card of Dumpling-Qwen2.5-32B-v2
nbeerbower/Rombos-EVAGutenberg-TIES-Qwen2.5-32B finetuned on:
- nbeerbower/GreatFirewall-DPO
- nbeerbower/Schule-DPO
- nbeerbower/Purpura-DPO
- nbeerbower/Arkhaios-DPO
- jondurbin/truthy-dpo-v0.1
- antiven0m/physical-reasoning-dpo
- flammenai/Date-DPO-NoAsterisks
- flammenai/Prude-Phi3-DPO
- Atsunori/HelpSteer2-DPO
- jondurbin/gutenberg-dpo-v0.1
- nbeerbower/gutenberg2-dpo
- nbeerbower/gutenberg-moderne-dpo
Method
ORPO-tuned with QLoRA on 8x A100 GPUs for 2 epochs, using rank-64 LoRA adapters and a 2e-5 learning rate.
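For illustration, here is a hedged sketch of a comparable QLoRA + ORPO setup with Hugging Face TRL and PEFT. Only the rank-64 LoRA, the 2e-5 learning rate, and the 2-epoch schedule come from the card; the 4-bit base-model loading, ORPO settings, batch sizes, target modules, and the single example dataset are illustrative assumptions, not the author's actual training script.

```python
# Hedged sketch of a QLoRA + ORPO run with TRL/PEFT. Only rank-64 LoRA,
# lr=2e-5, and 2 epochs come from the model card; everything else is assumed.
import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig
from trl import ORPOConfig, ORPOTrainer

base = "nbeerbower/Rombos-EVAGutenberg-TIES-Qwen2.5-32B"

# QLoRA: load the frozen base model in 4-bit NF4.
bnb = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(base, quantization_config=bnb, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(base)

# Rank-64 LoRA adapters (alpha and target modules are assumptions).
peft_config = LoraConfig(
    r=64,
    lora_alpha=64,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)

# One of the listed preference datasets; ORPO expects prompt/chosen/rejected columns.
dataset = load_dataset("jondurbin/gutenberg-dpo-v0.1", split="train")

args = ORPOConfig(
    output_dir="dumpling-orpo",
    learning_rate=2e-5,
    num_train_epochs=2,
    per_device_train_batch_size=1,   # assumed
    gradient_accumulation_steps=8,   # assumed
    bf16=True,
)

trainer = ORPOTrainer(
    model=model,
    args=args,
    train_dataset=dataset,
    processing_class=tokenizer,      # `tokenizer=` on older TRL versions
    peft_config=peft_config,
)
trainer.train()
```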