My favorite all-purpose models, from 12B to 70B, all uncensored and using ChatML.
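These models are tagged as using ChatML. As a hedged illustration (not taken from this page), the standard ChatML convention wraps each turn in `<|im_start|>` / `<|im_end|>` special tokens; a minimal prompt builder might look like:

```python
# Minimal sketch of the ChatML turn format (standard convention,
# not code from this profile): each message becomes
# <|im_start|>{role}\n{content}<|im_end|>
def to_chatml(messages):
    """Render a list of {role, content} dicts as a ChatML prompt string."""
    parts = [
        f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>"
        for m in messages
    ]
    # A trailing open tag cues the model to generate the assistant turn.
    parts.append("<|im_start|>assistant\n")
    return "\n".join(parts)

prompt = to_chatml([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello!"},
])
print(prompt)
```

In practice the exact special tokens come from each model's tokenizer config, so applying the model's own chat template is the safer route.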
Nicholas Beerbower (nbeerbower)
AI & ML interests: QLoRA finetuning and merging LLMs for fun
Recent Activity
- Liked a dataset: huihui-ai/Guilherme34_uncensor (about 8 hours ago)
- Liked a dataset: openai/mrcr (about 10 hours ago)
- Liked a model: all-hands/openhands-lm-7b-v0.1 (2 days ago)
Models (195)
- nbeerbower/Llama3-SkullGang-70B • Text Generation • 10 downloads
- nbeerbower/Llama3-Asobi-70B • Text Generation • 14 downloads
- nbeerbower/UwU-Qwen2.5-32B • Text Generation • 23 downloads • 4 likes
- nbeerbower/Llama3-Sapientia-70B • Text Generation • 17 downloads • 1 like
- nbeerbower/Shiina-Qwen2.5-32B • Text Generation • 17 downloads
- nbeerbower/BigKartoffel-mistral-nemo-20B • Text Generation • 25 downloads • 3 likes
- nbeerbower/Azura-Qwen2.5-32B • Text Generation • 9 downloads
- nbeerbower/PirateShip-ChatML-4x12B • 8 downloads
- nbeerbower/Kawaiides-llama3.1-70B • Text Generation • 5 downloads
- nbeerbower/Kartoffel-Deepfry-12B • Text Generation • 22 downloads • 1 like
Datasets (9)
- nbeerbower/cover-images • 7 rows • 372 downloads • 1 like
- nbeerbower/GreatFirewall-DPO • 492 rows • 52 downloads • 8 likes
- nbeerbower/reddit-dpo • 76.9k rows • 42 downloads • 1 like
- nbeerbower/gutenberg-moderne-dpo • 346 rows • 34 downloads • 3 likes
- nbeerbower/gutenberg2-dpo • 293 rows • 44 downloads • 19 likes
- nbeerbower/Schule-DPO • 34 rows • 30 downloads • 1 like
- nbeerbower/Arkhaios-DPO • 222 rows • 49 downloads • 8 likes
- nbeerbower/Purpura-DPO • 230 rows • 63 downloads • 8 likes
- nbeerbower/bible-dpo • 31.1k rows • 43 downloads