---
license: apache-2.0
datasets:
- open-r1/Mixture-of-Thoughts
language:
- en
base_model:
- open-r1/OpenR1-Distill-7B
pipeline_tag: text-generation
library_name: transformers
tags:
- text-generation-inference
---

# **OpenR1-Distill-7B-F32-GGUF**

> OpenR1-Distill-7B-F32-GGUF provides quantized GGUF builds of OpenR1-Distill-7B, a post-trained model based on Qwen/Qwen2.5-Math-7B. The base model was further trained on Mixture-of-Thoughts, a curated dataset of 350k verified reasoning traces distilled from DeepSeek-R1. The dataset covers tasks in mathematics, coding, and science, and is designed to teach language models to reason step by step.

## Model Files

| File Name                                | Size    | Format | Notes                            |
|------------------------------------------|---------|--------|----------------------------------|
| OpenR1-Distill-7B.BF16.gguf              | 15.2 GB | GGUF   | BF16 precision model             |
| OpenR1-Distill-7B.F16.gguf               | 15.2 GB | GGUF   | FP16 precision model             |
| OpenR1-Distill-7B.F32.gguf               | 30.5 GB | GGUF   | FP32 precision model             |
| OpenR1-Distill-7B.Q2_K.gguf              | 3.02 GB | GGUF   | 2-bit quantized (Q2_K) model     |
| OpenR1-Distill-7B.Q4_K_M.gguf            | 4.68 GB | GGUF   | 4-bit quantized (Q4_K_M) model   |
| .gitattributes                           | 1.84 kB | Text   | Git LFS tracking config          |
| config.json                              | 31 B    | JSON   | Model configuration file         |
| README.md                                | 213 B   | Markdown | This readme file                |
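As a rough sanity check on the sizes above, you can estimate a quant's effective bits per weight from its file size and the model's parameter count. The helper below is a minimal sketch; the ~7.6B parameter count for the Qwen2.5-Math-7B base is an assumption, and metadata overhead in the GGUF file is ignored:

```python
def bits_per_weight(file_size_gb: float, n_params: float = 7.6e9) -> float:
    """Estimate effective bits per weight of a GGUF file.

    Assumes decimal gigabytes (1 GB = 1e9 bytes), matching the table
    above, and ignores GGUF metadata overhead.
    """
    return file_size_gb * 1e9 * 8 / n_params

# Q4_K_M at 4.68 GB comes out a bit under 5 bits per weight,
# consistent with its nominal ~4-bit K-quant mix.
print(round(bits_per_weight(4.68), 2))
```

The same arithmetic explains the table: the F32 file at 30.5 GB works out to roughly 32 bits per weight, and BF16/F16 at 15.2 GB to roughly 16.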

## Quants Usage

(Sorted by size, not necessarily by quality. IQ quants are often preferable to similarly sized non-IQ quants.)

Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)