DeepSeek-R1-DRAFT-Qwen2.5-0.5B-GGUF

Updated to v1

This model is trained on outputs of deepseek-ai/DeepSeek-R1-Distill-Qwen-32B and is intended solely as a draft model for speculative decoding.

It is aimed specifically at users of RTX 3090/4090 GPUs, allowing you to run the Q4_K_M GGUF version of DeepSeek-R1-Distill-Qwen-32B with 16k context and speed up generation, without sacrificing context length or model quality.
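A typical way to use a GGUF draft model like this is with llama.cpp's built-in speculative decoding. The sketch below is a hypothetical invocation, not taken from this model card: the file names are placeholders for whichever quantizations you download, and the flag names (`-md`, `-ngld`, `--draft-max`, `--draft-min`) reflect recent llama.cpp builds and may differ in older versions.

```shell
# Hypothetical llama.cpp server launch with speculative decoding.
# File names are placeholders; use your local GGUF paths.
./llama-server \
  -m  DeepSeek-R1-Distill-Qwen-32B-Q4_K_M.gguf \   # target model
  -md DeepSeek-R1-DRAFT-Qwen2.5-0.5B-Q8_0.gguf \   # this draft model
  -c 16384 \          # 16k context, as described above
  -ngl 99 \           # offload all target-model layers to the GPU
  -ngld 99 \          # offload all draft-model layers to the GPU
  --draft-max 16 \    # max tokens drafted per speculation round
  --draft-min 4       # min tokens drafted per speculation round
```

The draft model proposes short token runs that the 32B target model then verifies in a single batched pass, so accepted drafts translate directly into faster generation at unchanged output quality.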

Data info

The data consists of code, math, reasoning, and general-knowledge tasks collected from various datasets. The model was trained for 2 epochs on 7k unique examples, totaling 26 million tokens per epoch.

Since data generation was done using spare GPU time, I may publish a further trained version later.

Format: GGUF
Model size: 494M params
Architecture: qwen2

Model tree for alamios/DeepSeek-R1-DRAFT-Qwen2.5-0.5B-GGUF

Base model: Qwen/Qwen2.5-0.5B
