Quantized to 4 bits per weight with a 6-bit head (the "4bpw-hb6" in the repo name) using the default exllamav3 (0.0.2) quantization process.
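For reference, a minimal sketch of the conversion step, driving exllamav3's bundled convert.py from Python. The directory names and the flag spellings (`-i`, `-o`, `-w`, `-b`, `-hb`) are assumptions based on the usual exllama conversion convention; check `python convert.py --help` for the version you have installed.

```python
# Hedged sketch: invoke exllamav3's conversion script.
# All paths and flag names below are assumptions; verify against your
# exllamav3 checkout before running.
import subprocess

subprocess.run(
    [
        "python", "convert.py",
        "-i", "EVA-Gutenberg3-Qwen2.5-32B",        # unquantized input model directory
        "-o", "EVA-Gutenberg3-Qwen2.5-32B-exl3",   # output directory for the EXL3 quant
        "-w", "work",                              # scratch/working directory
        "-b", "4",                                 # 4 bits per weight
        "-hb", "6",                                # 6-bit output head
    ],
    check=True,
)
```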



EVA-Gutenberg3-Qwen2.5-32B

EVA-UNIT-01/EVA-Qwen2.5-32B-v0.2 finetuned on jondurbin/gutenberg-dpo-v0.1, nbeerbower/gutenberg2-dpo, and nbeerbower/gutenberg-moderne-dpo.

Method

ORPO-tuned on 8x A100 GPUs for 2 epochs.
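A minimal single-process sketch of this step using TRL's ORPOTrainer. The base model and datasets are the ones named above; the hyperparameters (batch size, accumulation, learning rate) and the column selection are assumptions, and the multi-GPU setup actually used for the 8x A100 run (e.g. accelerate or DeepSpeed launch) is omitted.

```python
# Hedged sketch of ORPO preference tuning with TRL; not the authors' exact recipe.
import torch
from datasets import concatenate_datasets, load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import ORPOConfig, ORPOTrainer

base = "EVA-UNIT-01/EVA-Qwen2.5-32B-v0.2"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base, torch_dtype=torch.bfloat16)

# Combine the three Gutenberg preference datasets, keeping only the
# prompt/chosen/rejected columns that ORPOTrainer expects (assumed layout).
dataset = concatenate_datasets([
    load_dataset(name, split="train").select_columns(["prompt", "chosen", "rejected"])
    for name in (
        "jondurbin/gutenberg-dpo-v0.1",
        "nbeerbower/gutenberg2-dpo",
        "nbeerbower/gutenberg-moderne-dpo",
    )
])

args = ORPOConfig(
    output_dir="eva-gutenberg3-orpo",
    num_train_epochs=2,                # stated in the card
    per_device_train_batch_size=1,     # assumption
    gradient_accumulation_steps=8,     # assumption
    learning_rate=5e-6,                # assumption
    bf16=True,
)

trainer = ORPOTrainer(
    model=model,
    args=args,
    train_dataset=dataset,
    processing_class=tokenizer,        # "tokenizer=" on older TRL releases
)
trainer.train()
```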

