GLM-4 finetune with hopefully fixed context handling and some AlpacaEval maxxing. This checkpoint is SFT-only; I'm planning to follow up with APO, but I still need to generate ~10K responses for training and finish up my anti-repetition preference dataset.

The 32K context takes only about 2GB of VRAM, so a Q4_K_M quant plus the full 32K context fits on a single 24GB GPU, which is best in class for 32B dense models.
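As a minimal sketch of that setup with llama-cpp-python (assuming a Q4_K_M GGUF quant of this model; the filename below is a placeholder, not an actual release):

```python
# Sketch: load a Q4_K_M GGUF quant with the full 32K context window,
# offloading every layer to a single 24GB GPU via llama-cpp-python.
from llama_cpp import Llama

llm = Llama(
    model_path="GL-Marvin-32k-32B.Q4_K_M.gguf",  # placeholder filename
    n_ctx=32768,       # full 32K context window
    n_gpu_layers=-1,   # offload all layers to the GPU
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Hello!"}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```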

Chat Template: GLM-4.1
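
If you're using transformers, the bundled template can be applied like this (a sketch; the repo ID is taken from this card and the message content is just an example):

```python
# Sketch: format a conversation with the model's bundled GLM-4.1
# chat template via the tokenizer shipped in this repo.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("ConicCat/GL-Marvin-32k-32B")

messages = [{"role": "user", "content": "Summarize GLM-4 in one sentence."}]
prompt = tokenizer.apply_chat_template(
    messages,
    tokenize=False,              # return the formatted prompt string
    add_generation_prompt=True,  # append the assistant turn header
)
print(prompt)
```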

Major Thanks

THUDM for the excellent GLM-4 Base

nyu-dice-lab for WildChat-50M

AI2 for the Tulu dataset
