Mistral-7B with continued pretraining using Quiet-STaR (https://arxiv.org/abs/2403.09629), generating 8 thought tokens before each output token.
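A minimal usage sketch with the `transformers` library. This assumes the checkpoint ships custom Quiet-STaR modeling code (hence `trust_remote_code=True`) and otherwise follows the standard `AutoModelForCausalLM` generation API; the helper name `generate_with_thoughts` is illustrative, not part of the repository.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

REPO = "ezelikman/quietstar-8-ahead"

def generate_with_thoughts(prompt: str, max_new_tokens: int = 16) -> str:
    """Load the checkpoint and generate text.

    The 8 thought tokens per output token are produced internally by the
    model's forward pass and do not appear in the decoded string.
    """
    tokenizer = AutoTokenizer.from_pretrained(REPO)
    model = AutoModelForCausalLM.from_pretrained(
        REPO,
        torch_dtype="bfloat16",   # weights are stored in BF16
        trust_remote_code=True,   # custom Quiet-STaR modeling code (assumption)
    )
    inputs = tokenizer(prompt, return_tensors="pt")
    output = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(output[0], skip_special_tokens=True)
```

Note that downloading the ~7.3B-parameter checkpoint requires substantial disk space and memory; BF16 inference needs roughly 15 GB.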

Downloads last month: 216
Model size: 7.29B params
Tensor type: BF16
Format: Safetensors

Model tree for ezelikman/quietstar-8-ahead:
- Merges: 2 models
- Quantizations: 1 model

Dataset used to train ezelikman/quietstar-8-ahead