---
base_model: THUDM/GLM-Z1-Rumination-32B-0414
language:
- zh
- en
library_name: transformers
license: mit
pipeline_tag: text-generation
tags:
- llama-cpp
- gguf-my-repo
---

# Triangle104/GLM-Z1-Rumination-32B-0414-Q3_K_S-GGUF
This model was converted to GGUF format from [`THUDM/GLM-Z1-Rumination-32B-0414`](https://huggingface.co/THUDM/GLM-Z1-Rumination-32B-0414) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/THUDM/GLM-Z1-Rumination-32B-0414) for more details on the model.

---
## Introduction
The GLM family welcomes a new generation of open-source models, the GLM-4-32B-0414 series, featuring 32 billion parameters. Its performance is comparable to OpenAI's GPT series and DeepSeek's V3/R1 series, and it supports user-friendly local deployment. GLM-4-32B-Base-0414 was pre-trained on 15T of high-quality data, including a large amount of reasoning-oriented synthetic data, laying the foundation for subsequent reinforcement learning extensions. In the post-training stage, in addition to human preference alignment for dialogue scenarios, we also enhanced the model's performance in instruction following, engineering code, and function calling using techniques such as rejection sampling and reinforcement learning, strengthening the atomic capabilities required for agent tasks. GLM-4-32B-0414 achieves strong results in engineering code, Artifact generation, function calling, search-based Q&A, and report generation, and on some benchmarks it even rivals larger models such as GPT-4o and DeepSeek-V3-0324 (671B).

GLM-Z1-Rumination-32B-0414 is a deep reasoning model with rumination capabilities (benchmarked against OpenAI's Deep Research). Unlike typical deep thinking models, the rumination model employs longer periods of deep thought to solve more open-ended and complex problems (e.g., writing a comparative analysis of AI development in two cities and their future development plans). It integrates search tools during its deep thinking process to handle complex tasks and is trained with multiple rule-based rewards to guide and extend its end-to-end reinforcement learning. Z1-Rumination shows significant improvements in research-style writing and complex retrieval tasks.

---
## Use with llama.cpp
Install llama.cpp through brew (works on macOS and Linux):

```bash
brew install llama.cpp
```
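To confirm the install, you can print the build info (assuming `llama-cli` is on your `PATH`):

```bash
llama-cli --version
```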
Invoke the llama.cpp server or the CLI.

### CLI:
```bash
llama-cli --hf-repo Triangle104/GLM-Z1-Rumination-32B-0414-Q3_K_S-GGUF --hf-file glm-z1-rumination-32b-0414-q3_k_s.gguf -p "The meaning to life and the universe is"
```
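For interactive chat or longer outputs you can pass additional generation flags; a rough sketch (the flag values below are illustrative, not recommendations from the model authors):

```bash
# Interactive chat with a larger context, GPU offload, and a per-response token limit.
# -cnv  : conversation (chat) mode
# -c    : context window size in tokens
# -ngl  : number of layers to offload to the GPU (if built with GPU support)
# -n    : maximum tokens to generate per response
llama-cli --hf-repo Triangle104/GLM-Z1-Rumination-32B-0414-Q3_K_S-GGUF \
  --hf-file glm-z1-rumination-32b-0414-q3_k_s.gguf \
  -cnv -c 8192 -ngl 99 -n 1024 --temp 0.7
```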

### Server:
```bash
llama-server --hf-repo Triangle104/GLM-Z1-Rumination-32B-0414-Q3_K_S-GGUF --hf-file glm-z1-rumination-32b-0414-q3_k_s.gguf -c 2048
```
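Once the server is running (it listens on port 8080 by default), you can query its OpenAI-compatible chat endpoint; a minimal sketch, assuming the default host and port:

```bash
# Send a chat request to the local llama-server instance.
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "messages": [
      {"role": "user", "content": "Write a short comparison of two approaches to local LLM inference."}
    ],
    "max_tokens": 512
  }'
```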

Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.

Step 1: Clone llama.cpp from GitHub.
```bash
git clone https://github.com/ggerganov/llama.cpp
```

Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (for example, `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```bash
cd llama.cpp && LLAMA_CURL=1 make
```
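Note that recent llama.cpp versions have moved from `make` to CMake; an approximate equivalent, assuming a current checkout (the resulting binaries land in `build/bin/`):

```bash
# Configure with libcurl support (needed for --hf-repo downloads), then build.
cmake -B build -DLLAMA_CURL=ON
cmake --build build --config Release -j
```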

Step 3: Run inference through the main binary.
```bash
./llama-cli --hf-repo Triangle104/GLM-Z1-Rumination-32B-0414-Q3_K_S-GGUF --hf-file glm-z1-rumination-32b-0414-q3_k_s.gguf -p "The meaning to life and the universe is"
```
or 
```bash
./llama-server --hf-repo Triangle104/GLM-Z1-Rumination-32B-0414-Q3_K_S-GGUF --hf-file glm-z1-rumination-32b-0414-q3_k_s.gguf -c 2048
```