Various useful datasets for preference optimization
Nicholas Beerbower (nbeerbower)

AI & ML interests
QLoRA finetuning and merging LLMs for fun
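As a rough illustration of that workflow, here is a minimal QLoRA setup sketch using transformers, peft, and bitsandbytes; the base model ID, LoRA hyperparameters, and target modules are assumptions for illustration only, not a recipe actually used for the models on this page.

```python
# Minimal QLoRA-style setup: 4-bit quantized base model + trainable LoRA adapters.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

base_id = "mistralai/Mistral-Nemo-Instruct-2407"  # hypothetical choice of base model

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # quantize the frozen base weights to 4-bit
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(
    base_id, quantization_config=bnb_config, device_map="auto"
)

lora_config = LoraConfig(
    r=16,                                   # illustrative rank/alpha, not a tuned recipe
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)  # only the LoRA adapters are trainable
model.print_trainable_parameters()

# After training, the adapter is typically merged back into a full-precision
# copy of the base model (PeftModel.merge_and_unload()) before uploading.
```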
Recent Activity
Liked a model 1 day ago: hexgrad/Kokoro-82M
Liked a model 2 days ago: Triangle104/Q2.5-32B-Tatewaki-Kunou
Liked a model 2 days ago: Steelskull/L3.3-MS-Nevoria-70b
Models (142)
nbeerbower/mistral-nemo-narwhal-12B • Text Generation • Updated • 33 downloads • 1 like
nbeerbower/mistral-nemo-bophades3-12B • Text Generation • Updated • 43 downloads • 2 likes
nbeerbower/Gigaberg-Mistral-Large-123B • Text Generation • Updated • 39 downloads • 1 like
nbeerbower/mistral-nemo-kartoffel-12B • Text Generation • Updated • 50 downloads • 2 likes
nbeerbower/mistral-nemo-gutenberg3-12B • Text Generation • Updated • 101 downloads • 3 likes
nbeerbower/llama-3-gutenberg-8B • Text Generation • Updated • 105 downloads • 8 likes
nbeerbower/SmolNemo-12B-FFT-experimental • Text Generation • Updated • 11 downloads
nbeerbower/Nemo-Loony-12B-experimental • Text Generation • Updated • 11 downloads
nbeerbower/Mistral-Nemo-Moderne-12B-FFT-experimental • Text Generation • Updated • 14 downloads • 1 like
nbeerbower/Mistral-Gutenberg-Doppel-7B-FFT • Text Generation • Updated • 18 downloads • 2 likes
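For reference, a minimal sketch of loading one of the finetunes above with the transformers library follows; the specific model ID is taken from the list, while the chat-template usage and sampling settings are illustrative assumptions, not recommended settings.

```python
# Sketch: load one of the listed models and generate a short completion.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "nbeerbower/llama-3-gutenberg-8B"  # any model ID from the list above

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [
    {"role": "user", "content": "Write the opening paragraph of a gothic short story."}
]
# Assumes the repository ships a chat template (typical for instruct-style finetunes).
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=200, do_sample=True, temperature=0.8)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```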
Datasets (8)
nbeerbower/cover-images • Viewer • Updated • 5 rows • 351 downloads • 1 like
nbeerbower/reddit-dpo • Viewer • Updated • 76.9k rows • 41 downloads • 1 like
nbeerbower/gutenberg-moderne-dpo • Viewer • Updated • 346 rows • 76 downloads • 2 likes
nbeerbower/gutenberg2-dpo • Viewer • Updated • 293 rows • 75 downloads • 18 likes
nbeerbower/Schule-DPO • Viewer • Updated • 34 rows • 49 downloads • 1 like
nbeerbower/Arkhaios-DPO • Viewer • Updated • 222 rows • 101 downloads • 8 likes
nbeerbower/Purpura-DPO • Viewer • Updated • 230 rows • 58 downloads • 7 likes
nbeerbower/bible-dpo • Viewer • Updated • 31.1k rows • 39 downloads
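Since most of the datasets above are DPO-style preference sets, a minimal sketch of loading and inspecting one with the datasets library is shown below; the dataset ID comes from the list, while the prompt/chosen/rejected column layout is an assumption based on the usual DPO convention and should be verified per dataset.

```python
# Sketch: inspect one of the preference (DPO) datasets listed above.
from datasets import load_dataset

ds = load_dataset("nbeerbower/gutenberg2-dpo", split="train")
print(ds.column_names)  # expected (assumed): ['prompt', 'chosen', 'rejected', ...]
print(ds[0])            # one preference pair

# A dataset shaped this way can usually be passed more or less directly to a
# preference-optimization trainer such as TRL's DPOTrainer; check the column
# names first, since they vary between datasets.
```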