---
license: apache-2.0
library_name: transformers
base_model:
- nbeerbower/EVA-Gutenberg3-Qwen2.5-32B
base_model_relation: quantized
datasets:
- jondurbin/gutenberg-dpo-v0.1
- nbeerbower/gutenberg2-dpo
- nbeerbower/gutenberg-moderne-dpo
---
## Quantized using the default exllamav3 (0.0.2) quantization process.

- Original model: https://huggingface.co/nbeerbower/EVA-Gutenberg3-Qwen2.5-32B
- exllamav3: https://github.com/turboderp-org/exllamav3
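
For reference, a minimal generation sketch in the style of the exllamav3 example code. The local path is a placeholder, and exllamav3 is still early in development, so check the repo linked above for the current class names and arguments:

```python
# Minimal sketch based on the exllamav3 examples; the model path is a
# placeholder and the API (Config/Model/Cache/Generator) reflects the
# 0.0.x examples, so verify against the repo for your installed version.
from exllamav3 import Config, Model, Cache, Tokenizer, Generator

config = Config.from_directory("/models/EVA-Gutenberg3-Qwen2.5-32B-exl3")
model = Model.from_config(config)
cache = Cache(model, max_num_tokens=8192)
model.load()

tokenizer = Tokenizer.from_config(config)
generator = Generator(model=model, cache=cache, tokenizer=tokenizer)

output = generator.generate(
    prompt="Write the opening paragraph of a gothic novel.",
    max_new_tokens=200,
)
print(output)
```
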
---
![image/png](https://huggingface.co/nbeerbower/mistral-nemo-gutenberg3-12B/resolve/main/gutenberg3.png?download=true)

# EVA-Gutenberg3-Qwen2.5-32B

[EVA-UNIT-01/EVA-Qwen2.5-32B-v0.2](https://huggingface.co/EVA-UNIT-01/EVA-Qwen2.5-32B-v0.2) finetuned on [jondurbin/gutenberg-dpo-v0.1](https://huggingface.co/datasets/jondurbin/gutenberg-dpo-v0.1), [nbeerbower/gutenberg2-dpo](https://huggingface.co/datasets/nbeerbower/gutenberg2-dpo), and [nbeerbower/gutenberg-moderne-dpo](https://huggingface.co/datasets/nbeerbower/gutenberg-moderne-dpo).

### Method

[ORPO tuned](https://mlabonne.github.io/blog/posts/2024-04-19_Fine_tune_Llama_3_with_ORPO.html) on 8x A100s for 2 epochs.
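
The linked post covers the full recipe; below is a minimal sketch of an equivalent setup using trl's `ORPOTrainer`. Only the base model, the three datasets, and the epoch count come from this card. Every other hyperparameter is an illustrative placeholder, and multi-GPU launch details (e.g. `accelerate`) are omitted.

```python
# Minimal ORPO sketch with trl. Only the base model, datasets, and epoch
# count come from this card; all other hyperparameters are placeholders.
import torch
from datasets import concatenate_datasets, load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import ORPOConfig, ORPOTrainer

base = "EVA-UNIT-01/EVA-Qwen2.5-32B-v0.2"
model = AutoModelForCausalLM.from_pretrained(base, torch_dtype=torch.bfloat16)
tokenizer = AutoTokenizer.from_pretrained(base)

# Keep only the preference columns ORPOTrainer expects, so the three
# datasets can be concatenated even if their extra columns differ.
cols = ["prompt", "chosen", "rejected"]
train = concatenate_datasets([
    load_dataset(name, split="train").select_columns(cols)
    for name in (
        "jondurbin/gutenberg-dpo-v0.1",
        "nbeerbower/gutenberg2-dpo",
        "nbeerbower/gutenberg-moderne-dpo",
    )
])

args = ORPOConfig(
    output_dir="eva-gutenberg3-orpo",
    num_train_epochs=2,               # from the card
    per_device_train_batch_size=1,    # placeholder
    gradient_accumulation_steps=8,    # placeholder
    learning_rate=5e-6,               # placeholder
    beta=0.1,                         # the ORPO paper's lambda; placeholder
    max_length=2048,                  # placeholder
)

trainer = ORPOTrainer(
    model=model,
    args=args,
    train_dataset=train,
    processing_class=tokenizer,  # `tokenizer=` in older trl releases
)
trainer.train()
```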