Instructions for using Ex0bit/GLM-4.7-Flash-PRISM with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- Transformers
How to use Ex0bit/GLM-4.7-Flash-PRISM with Transformers:
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="Ex0bit/GLM-4.7-Flash-PRISM")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)

# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("Ex0bit/GLM-4.7-Flash-PRISM")
model = AutoModelForCausalLM.from_pretrained("Ex0bit/GLM-4.7-Flash-PRISM")
messages = [
    {"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
- llama-cpp-python
How to use Ex0bit/GLM-4.7-Flash-PRISM with llama-cpp-python:
# !pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="Ex0bit/GLM-4.7-Flash-PRISM",
    filename="GLM-4.7-Flash-PRISM-GGUFs/GLM-4.7-Flash-PRISM-IQ4_NL.gguf",
)
llm.create_chat_completion(
    messages=[
        {"role": "user", "content": "What is the capital of France?"}
    ]
)
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- llama.cpp
How to use Ex0bit/GLM-4.7-Flash-PRISM with llama.cpp:
Install from brew
brew install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf Ex0bit/GLM-4.7-Flash-PRISM:Q4_K_M

# Run inference directly in the terminal:
llama-cli -hf Ex0bit/GLM-4.7-Flash-PRISM:Q4_K_M
Install from WinGet (Windows)
winget install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf Ex0bit/GLM-4.7-Flash-PRISM:Q4_K_M

# Run inference directly in the terminal:
llama-cli -hf Ex0bit/GLM-4.7-Flash-PRISM:Q4_K_M
Use pre-built binary
# Download a pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases

# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf Ex0bit/GLM-4.7-Flash-PRISM:Q4_K_M

# Run inference directly in the terminal:
./llama-cli -hf Ex0bit/GLM-4.7-Flash-PRISM:Q4_K_M
Build from source code
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli

# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf Ex0bit/GLM-4.7-Flash-PRISM:Q4_K_M

# Run inference directly in the terminal:
./build/bin/llama-cli -hf Ex0bit/GLM-4.7-Flash-PRISM:Q4_K_M
Use Docker
docker model run hf.co/Ex0bit/GLM-4.7-Flash-PRISM:Q4_K_M
- LM Studio
- Jan
- vLLM
How to use Ex0bit/GLM-4.7-Flash-PRISM with vLLM:
Install from pip and serve model
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "Ex0bit/GLM-4.7-Flash-PRISM"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "Ex0bit/GLM-4.7-Flash-PRISM",
        "messages": [
            { "role": "user", "content": "What is the capital of France?" }
        ]
    }'
Use Docker
docker model run hf.co/Ex0bit/GLM-4.7-Flash-PRISM:Q4_K_M
- SGLang
How to use Ex0bit/GLM-4.7-Flash-PRISM with SGLang:
Install from pip and serve model
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
    --model-path "Ex0bit/GLM-4.7-Flash-PRISM" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "Ex0bit/GLM-4.7-Flash-PRISM",
        "messages": [
            { "role": "user", "content": "What is the capital of France?" }
        ]
    }'
Use Docker images
docker run --gpus all \
    --shm-size 32g \
    -p 30000:30000 \
    -v ~/.cache/huggingface:/root/.cache/huggingface \
    --env "HF_TOKEN=<secret>" \
    --ipc=host \
    lmsysorg/sglang:latest \
    python3 -m sglang.launch_server \
    --model-path "Ex0bit/GLM-4.7-Flash-PRISM" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "Ex0bit/GLM-4.7-Flash-PRISM",
        "messages": [
            { "role": "user", "content": "What is the capital of France?" }
        ]
    }'
- Ollama
How to use Ex0bit/GLM-4.7-Flash-PRISM with Ollama:
ollama run hf.co/Ex0bit/GLM-4.7-Flash-PRISM:Q4_K_M
- Unsloth Studio
How to use Ex0bit/GLM-4.7-Flash-PRISM with Unsloth Studio:
Install Unsloth Studio (macOS, Linux, WSL)
curl -fsSL https://unsloth.ai/install.sh | sh

# Run Unsloth Studio:
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for Ex0bit/GLM-4.7-Flash-PRISM to start chatting
Install Unsloth Studio (Windows)
irm https://unsloth.ai/install.ps1 | iex

# Run Unsloth Studio:
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for Ex0bit/GLM-4.7-Flash-PRISM to start chatting
Use Hugging Face Spaces for Unsloth
# No setup required
# Open https://huggingface.co/spaces/unsloth/studio in your browser
# Search for Ex0bit/GLM-4.7-Flash-PRISM to start chatting
- Pi
How to use Ex0bit/GLM-4.7-Flash-PRISM with Pi:
Start the llama.cpp server
# Install llama.cpp:
brew install llama.cpp

# Start a local OpenAI-compatible server:
llama-server -hf Ex0bit/GLM-4.7-Flash-PRISM:Q4_K_M
Configure the model in Pi
# Install Pi:
npm install -g @mariozechner/pi-coding-agent

# Add to ~/.pi/agent/models.json:
{
    "providers": {
        "llama-cpp": {
            "baseUrl": "http://localhost:8080/v1",
            "api": "openai-completions",
            "apiKey": "none",
            "models": [
                { "id": "Ex0bit/GLM-4.7-Flash-PRISM:Q4_K_M" }
            ]
        }
    }
}
Run Pi
# Start Pi in your project directory:
pi
- Hermes Agent
How to use Ex0bit/GLM-4.7-Flash-PRISM with Hermes Agent:
Start the llama.cpp server
# Install llama.cpp:
brew install llama.cpp

# Start a local OpenAI-compatible server:
llama-server -hf Ex0bit/GLM-4.7-Flash-PRISM:Q4_K_M
Configure Hermes
# Install Hermes:
curl -fsSL https://hermes-agent.nousresearch.com/install.sh | bash
hermes setup

# Point Hermes at the local server:
hermes config set model.provider custom
hermes config set model.base_url http://127.0.0.1:8080/v1
hermes config set model.default Ex0bit/GLM-4.7-Flash-PRISM:Q4_K_M
Run Hermes
hermes
- Docker Model Runner
How to use Ex0bit/GLM-4.7-Flash-PRISM with Docker Model Runner:
docker model run hf.co/Ex0bit/GLM-4.7-Flash-PRISM:Q4_K_M
- Lemonade
How to use Ex0bit/GLM-4.7-Flash-PRISM with Lemonade:
Pull the model
# Download Lemonade from https://lemonade-server.ai/
lemonade pull Ex0bit/GLM-4.7-Flash-PRISM:Q4_K_M
Run and chat with the model
lemonade run user.GLM-4.7-Flash-PRISM-Q4_K_M
List all available models
lemonade list
GLM-4.7-Flash-PRISM
An over-refusal-free, propaganda-free version of Z.AI's GLM-4.7-Flash, with refusal and bias mechanisms removed using our Advanced PRISM Pipeline.
☕ Support Our Work
If you find this model useful, consider supporting us on Ko-fi!
| Option | Description |
|---|---|
| PRISM VIP Membership | Access to all PRISM models |
| One-Time Support | Support this model |
Model Highlights
- PRISM Ablation — State-of-the-art technique that removes over-refusal behaviors while preserving model capabilities
- 30B-A3B MoE Architecture — 30 billion total parameters with ~3 billion active per token for fast, efficient inference
- 128K Context Window — Extended context for complex tasks and large codebases
- Interleaved Thinking — Multi-turn reasoning that persists across conversations with per-turn thinking control
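The PRISM pipeline itself isn't documented here, but refusal ablation of this kind is commonly implemented as directional ablation: estimate a "refusal direction" as the difference of mean hidden activations between refused and complied prompts, then project that direction out of the weights that write into the residual stream. A minimal, hypothetical sketch of the general technique, not the actual PRISM code:

import torch

def refusal_direction(h_refused, h_complied):
    # Difference of means over hidden states captured at one layer;
    # both tensors are [num_prompts, hidden_size].
    direction = h_refused.mean(dim=0) - h_complied.mean(dim=0)
    return direction / direction.norm()

def ablate(weight, direction):
    # Zero the component along the refusal direction in a weight matrix
    # that writes into the residual stream: W <- (I - d d^T) W.
    d = direction.unsqueeze(1)  # [hidden_size, 1]
    return weight - d @ (d.T @ weight)

Applied across layers, this removes the narrow direction that mediates refusals while leaving most other capabilities intact.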
Benchmarks
| Benchmark | GLM-4.7-Flash | Qwen3-30B-A3B-Thinking-2507 | GPT-OSS-20B |
|---|---|---|---|
| AIME 2025 | 91.6 | 85.0 | 91.7 |
| GPQA | 75.2 | 73.4 | 71.5 |
| LCB v6 | 64.0 | 66.0 | 61.0 |
| HLE | 14.4 | 9.8 | 10.9 |
| SWE-bench Verified | 59.2 | 22.0 | 34.0 |
| τ²-Bench | 79.5 | 49.0 | 47.7 |
| BrowseComp | 42.8 | 2.29 | 28.3 |
Usage
Transformers
Install the latest transformers from source:
pip install git+https://github.com/huggingface/transformers.git
Run inference:
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_PATH = "Ex0bit/GLM-4.7-Flash-PRISM"

# Load the tokenizer and model, sharding across available devices
tokenizer = AutoTokenizer.from_pretrained(MODEL_PATH)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_PATH,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Build the prompt with the model's chat template
messages = [{"role": "user", "content": "Hello!"}]
inputs = tokenizer.apply_chat_template(
    messages,
    tokenize=True,
    add_generation_prompt=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

# Generate, then decode only the newly produced tokens
generated_ids = model.generate(**inputs, max_new_tokens=128, do_sample=False)
output_text = tokenizer.decode(generated_ids[0][inputs.input_ids.shape[1]:])
print(output_text)
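The Interleaved Thinking highlight above implies per-turn control over the reasoning block. A hedged sketch, assuming the chat template accepts an enable_thinking flag as related GLM and Qwen templates do (verify against this model's chat template before relying on it):

# Reuses tokenizer, model, and messages from the snippet above.
inputs = tokenizer.apply_chat_template(
    messages,
    tokenize=True,
    add_generation_prompt=True,
    enable_thinking=False,  # assumed kwarg: skip the thinking block this turn
    return_dict=True,
    return_tensors="pt",
).to(model.device)
generated_ids = model.generate(**inputs, max_new_tokens=128)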
vLLM
Install vLLM nightly:
pip install -U vllm --pre --index-url https://pypi.org/simple --extra-index-url https://wheels.vllm.ai/nightly
pip install git+https://github.com/huggingface/transformers.git
Serve the model:
vllm serve Ex0bit/GLM-4.7-Flash-PRISM \
--tensor-parallel-size 4 \
--speculative-config.method mtp \
--speculative-config.num_speculative_tokens 1 \
--tool-call-parser glm47 \
--reasoning-parser glm45 \
--enable-auto-tool-choice \
--served-model-name glm-4.7-flash-prism
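Because the server is launched with --served-model-name glm-4.7-flash-prism, clients must request that name rather than the repo id. A minimal sketch using the openai Python client, assuming the default vLLM port 8000 and no API key:

from openai import OpenAI

# vLLM exposes an OpenAI-compatible endpoint; the key is a local placeholder.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="glm-4.7-flash-prism",  # must match --served-model-name
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(response.choices[0].message.content)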
SGLang
Install SGLang:
uv pip install sglang==0.3.2.dev9039+pr-17247.g90c446848 --extra-index-url https://sgl-project.github.io/whl/pr/
uv pip install git+https://github.com/huggingface/transformers.git@76732b4e7120808ff989edbd16401f61fa6a0afa
Launch the server:
python3 -m sglang.launch_server \
--model-path Ex0bit/GLM-4.7-Flash-PRISM \
--tp-size 4 \
--tool-call-parser glm47 \
--reasoning-parser glm45 \
--speculative-algorithm EAGLE \
--speculative-num-steps 3 \
--speculative-eagle-topk 1 \
--speculative-num-draft-tokens 4 \
--mem-fraction-static 0.8 \
--served-model-name glm-4.7-flash-prism \
--host 0.0.0.0 \
--port 8000
Note: For Blackwell GPUs, add `--attention-backend triton --speculative-draft-attention-backend triton` to your SGLang launch command.
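Note that this launch command serves the model as glm-4.7-flash-prism on port 8000, so the generic curl example earlier (port 30000, full repo id) won't reach it. A quick check with requests against the standard OpenAI-compatible route:

import requests

resp = requests.post(
    "http://localhost:8000/v1/chat/completions",
    json={
        "model": "glm-4.7-flash-prism",  # matches --served-model-name
        "messages": [{"role": "user", "content": "Hello!"}],
    },
)
print(resp.json()["choices"][0]["message"]["content"])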
Recommended Parameters
| Use Case | Temperature | Top-P | Max New Tokens |
|---|---|---|---|
| Default | 1.0 | 0.95 | 131072 |
| Code (SWE-bench) | 0.7 | 1.0 | 16384 |
| Agentic Tasks | 0.0 | — | 16384 |
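These are request-time sampling settings. As a sketch, the Code row maps onto transformers generate() like this (reusing model and inputs from the Usage section; the max-token values are upper bounds, trim to your budget):

# Code (SWE-bench) settings from the table above.
generated_ids = model.generate(
    **inputs,
    do_sample=True,       # sampling must be enabled for temperature/top_p
    temperature=0.7,
    top_p=1.0,
    max_new_tokens=16384,
)

For the Agentic Tasks row, temperature 0.0 is equivalent to greedy decoding (do_sample=False), which is why no Top-P value is given.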
License
This model is released under the PRISM Research License.
Citation
@misc{elbaz2025glm47flashPrism,
author = {Elbaz, Eric},
title = {Elbaz-GLM-4.7-Flash-PRISM: Unchained GLM-4.7-Flash-PRISM Model},
year = {2025},
publisher = {Hugging Face},
howpublished = {\url{https://huggingface.co/Ex0bit/Elbaz-GLM-4.7-Flash-PRISM}}
}
Acknowledgments
Based on GLM-4.7-Flash by Z.AI. See the technical report for more details on the base model.