Enthusiast Models
Collection: Models for 16GB+ VRAM (2 items)
A 32K context requires only about 2 GB of VRAM, so Q4_K_M weights plus the full 32K context fit on a single 24 GB GPU, which is best in class for 32B dense models.
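The ~2 GB figure can be sanity-checked with a back-of-the-envelope KV-cache calculation. This is a minimal sketch; the layer count, KV-head count, and head dimension below are assumed illustrative values for a GQA-style 32B dense model, not official GLM-4 numbers, and the cache is assumed to be stored in 16-bit precision.

```python
def kv_cache_bytes(n_layers, n_kv_heads, head_dim, ctx_len, bytes_per_elem=2):
    """KV cache size: one K and one V tensor per layer,
    each of shape [ctx_len, n_kv_heads, head_dim]."""
    return 2 * n_layers * ctx_len * n_kv_heads * head_dim * bytes_per_elem

# Assumed (hypothetical) config for a 32B dense model with aggressive GQA:
gib = kv_cache_bytes(n_layers=61, n_kv_heads=2, head_dim=128,
                     ctx_len=32768) / 2**30
print(f"{gib:.2f} GiB")  # prints 1.91 GiB
```

With only a couple of KV heads, the cache stays under 2 GiB at 32K tokens; a model using full multi-head attention (KV heads equal to attention heads) would need many times more, which is why GQA-style models are the ones that fit long contexts on a single 24 GB card.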
Thanks to:
THUDM for the excellent GLM-4 base model
nyu-dice-lab for WildChat-50M
AI2 for the Tulu dataset