Various useful datasets for preference optimization
Nicholas Beerbower (nbeerbower) • PRO
AI & ML interests
QLoRA finetuning and merging LLMs for fun
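For context, QLoRA fine-tuning loads the frozen base model in 4-bit and trains only small LoRA adapter matrices on top of it. The sketch below is a minimal illustration of that setup with transformers, bitsandbytes, and peft; the base model name, target modules, and hyperparameters are placeholder assumptions, not a record of how any model on this page was trained.

```python
# Minimal QLoRA setup sketch (illustrative; model name and hyperparameters are assumptions).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

base_model = "Qwen/Qwen3-14B"  # hypothetical base model

# Quantize the frozen base weights to 4-bit NF4 so a large model fits in limited GPU memory.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_use_double_quant=True,
)

tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(
    base_model, quantization_config=bnb_config, device_map="auto"
)
model = prepare_model_for_kbit_training(model)

# Attach small trainable LoRA adapters; only these weights are updated during fine-tuning.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```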
Recent Activity
Liked a model • 1 day ago • Menlo/Jan-nano
Liked a model • 3 days ago • MiniMaxAI/MiniMax-M1-80k
Updated a collection • 8 days ago • Qwen3
Collections (8)
Models (217)
nbeerbower/Yanfei-v2-Qwen3-32B • Text Generation • Updated • 64 downloads • 1 like
nbeerbower/Menghua-Qwen3-32B-lora • Updated • 14 downloads
nbeerbower/Zhiming-Qwen3-32B-lora • Updated • 12 downloads
nbeerbower/Mistral-Nemo-Gutenberg-Vitus-12B • Updated • 25 downloads
nbeerbower/Vitus-Qwen3-14B • Updated • 40 downloads
nbeerbower/Vitus-mistral-nemo-12B • Updated • 41 downloads
nbeerbower/Yanfei-Qwen3-32B • Text Generation • Updated • 56 downloads • 1 like
nbeerbower/Schreiber-mistral-nemo-12B • Text Generation • Updated • 154 downloads • 1 like
nbeerbower/Qwen3-Gutenberg-Encore-14B • Text Generation • Updated • 88 downloads • 4 likes
nbeerbower/Mistral-Nemo-Gutenberg-Encore-12B • Text Generation • Updated • 213 downloads • 10 likes
Datasets (14)
nbeerbower/YanfeiMix-DPO • Viewer • Updated • 22.4k rows • 113 downloads
nbeerbower/human-writing-dpo • Updated • 116 downloads • 2 likes
nbeerbower/NikuX2-DPO • Viewer • Updated • 333 rows • 116 downloads
nbeerbower/NikuX1-DPO • Viewer • Updated • 20 rows • 159 downloads
nbeerbower/cover-images • Viewer • Updated • 10 rows • 666 downloads • 1 like
nbeerbower/synthetic-fiction-dpo • Viewer • Updated • 550 rows • 162 downloads • 1 like
nbeerbower/GreatFirewall-DPO • Viewer • Updated • 492 rows • 85 downloads • 9 likes
nbeerbower/reddit-dpo • Viewer • Updated • 76.9k rows • 41 downloads • 1 like
nbeerbower/gutenberg-moderne-dpo • Viewer • Updated • 346 rows • 79 downloads • 3 likes
nbeerbower/gutenberg2-dpo • Viewer • Updated • 293 rows • 72 downloads • 20 likes
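Most of the datasets above are preference (DPO) sets, which typically expose "prompt", "chosen", and "rejected" fields and can be plugged into a preference-optimization trainer directly. Below is a minimal sketch using TRL's DPOTrainer with gutenberg2-dpo; the base model, output directory, and hyperparameters are illustrative assumptions rather than the recipe behind the models listed above.

```python
# Minimal DPO training sketch with TRL (illustrative; base model and settings are assumptions).
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

# Preference data: each row is expected to provide "prompt", "chosen", and "rejected" fields.
dataset = load_dataset("nbeerbower/gutenberg2-dpo", split="train")

model_name = "Qwen/Qwen3-14B"  # hypothetical base model, not necessarily the author's choice
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

training_args = DPOConfig(
    output_dir="gutenberg2-dpo-out",   # hypothetical output path
    beta=0.1,                          # strength of the implicit KL penalty toward the reference model
    per_device_train_batch_size=1,
    num_train_epochs=1,
)

trainer = DPOTrainer(
    model=model,
    args=training_args,
    train_dataset=dataset,
    processing_class=tokenizer,  # older TRL releases take tokenizer= instead
)
trainer.train()
```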