pipeline_tag: text-generation
library_name: transformers
tags:
- text-generation-inference
---

# **OpenR1-Distill-7B-F32-GGUF**

> OpenR1-Distill-7B-F32-GGUF is a quantized version of OpenR1-Distill-7B, a post-trained model based on Qwen/Qwen2.5-Math-7B. It was further trained on Mixture-of-Thoughts, a curated dataset of 350k verified reasoning traces distilled from DeepSeek-R1. The dataset covers tasks in mathematics, coding, and science, and is designed to teach language models to reason step by step.
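
As a quick start, the sketch below loads one of the GGUF files listed in the next section with llama-cpp-python. The runtime, file path, and generation parameters are illustrative assumptions; any GGUF-compatible engine (llama.cpp, LM Studio, Ollama, etc.) works similarly.

```python
# Minimal sketch: run a GGUF quant locally with llama-cpp-python
# (an assumed runtime choice; install with `pip install llama-cpp-python`).
from llama_cpp import Llama

llm = Llama(
    model_path="OpenR1-Distill-7B.Q4_K_M.gguf",  # illustrative local path; see the table below
    n_ctx=4096,  # context window; long step-by-step traces may need more
)

response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Solve step by step: what is 17 * 24?"}],
    max_tokens=512,
)
print(response["choices"][0]["message"]["content"])
```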

## Model Files

| File Name                     | Size    | Format   | Notes                          |
|-------------------------------|---------|----------|--------------------------------|
| OpenR1-Distill-7B.BF16.gguf   | 15.2 GB | GGUF     | BF16 precision model           |
| OpenR1-Distill-7B.F16.gguf    | 15.2 GB | GGUF     | FP16 precision model           |
| OpenR1-Distill-7B.F32.gguf    | 30.5 GB | GGUF     | FP32 precision model           |
| OpenR1-Distill-7B.Q2_K.gguf   | 3.02 GB | GGUF     | 2-bit quantized (Q2_K) model   |
| OpenR1-Distill-7B.Q4_K_M.gguf | 4.68 GB | GGUF     | 4-bit quantized (Q4_K_M) model |
| .gitattributes                | 1.84 kB | Text     | Git LFS tracking config        |
| config.json                   | 31 B    | JSON     | Model configuration file       |
| README.md                     | 213 B   | Markdown | This readme file               |
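
To fetch a single quant without cloning the whole repository, something like the following works with `huggingface_hub`. The repo id below is a placeholder assumption based on the model name; substitute the actual repository path.

```python
# Sketch: download one quant file from the Hugging Face Hub.
# The repo id is a placeholder assumption; replace it with the real repository path.
from huggingface_hub import hf_hub_download

local_path = hf_hub_download(
    repo_id="your-namespace/OpenR1-Distill-7B-F32-GGUF",  # hypothetical repo id
    filename="OpenR1-Distill-7B.Q4_K_M.gguf",             # any file from the table above
)
print(local_path)  # path to the locally cached file
```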
## Quants Usage

(Sorted by size, not necessarily by quality. IQ-quants are often preferable to similarly sized non-IQ quants.)

Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):

![image.png](https://cdn-uploads.huggingface.co/production/uploads/65bb837dbfb878f46c77de4c/i9DPBtn5WJg0KmJgrl2QN.png)