# T-pro-it-2.0-GGUF
🚨 Users are advised to exercise caution and are responsible for any additional training and oversight required to ensure the model's responses meet acceptable ethical and safety standards. The responsibility for incorporating this model into industrial or commercial solutions lies entirely with those who choose to deploy it.
This repository contains T-pro-it-2.0 converted to the GGUF format with
llama.cpp.
See the original BF16 model here: t-tech/T-pro-it-2.0.
## 📊 Benchmarks
TBD
## Available quantisations
Recommendation: choose the highest-quality quantisation that fits your hardware (VRAM / RAM).
| Filename (+ `.gguf`) | Quant method | Bits | Size (GB) |
|---|---|---|---|
| t-pro-it-2.0-q2_k | Q2_K | 2 | 12.3 |
| t-pro-it-2.0-iq3_xs | IQ3_XS | 3 | 13.7 |
| t-pro-it-2.0-iq3_s | IQ3_S | 3 | 14.4 |
| t-pro-it-2.0-q3_k_s | Q3_K_S | 3 | 14.4 |
| t-pro-it-2.0-q3_k_m | Q3_K_M | 3 | 16.0 |
| t-pro-it-2.0-iq4_xs | IQ4_XS | 4 | 17.9 |
| t-pro-it-2.0-q4_k_s | Q4_K_S | 4 | 18.8 |
| t-pro-it-2.0-iq4_nl | IQ4_NL | 4 | 18.8 |
| t-pro-it-2.0-q4_0 | Q4_0 | 4 | 18.6 |
| t-pro-it-2.0-q4_k_m | Q4_K_M | 4 | 19.8 |
| t-pro-it-2.0-q5_k_s | Q5_K_S | 5 | 22.6 |
| t-pro-it-2.0-q5_0 | Q5_0 | 5 | 22.6 |
| t-pro-it-2.0-q5_k_m | Q5_K_M | 5 | 23.2 |
| t-pro-it-2.0-q6_k | Q6_K | 6 | 26.9 |
| t-pro-it-2.0-q8_0 | Q8_0 | 8 | 34.8 |
Size figures assume no GPU off-loading. Off-loading lowers RAM usage and uses VRAM instead.
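If you only need one quantisation, you can fetch a single file instead of the whole repository. A minimal sketch using `huggingface-cli` (the filename is assumed to be the table entry plus the `.gguf` extension):

```bash
# Download only the Q4_K_M quantisation from this repository
# (filename assumed: table entry + .gguf extension).
huggingface-cli download t-tech/T-pro-it-2.0-GGUF \
  t-pro-it-2.0-q4_k_m.gguf --local-dir .
```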
## Quickstart
### llama.cpp
Check out our llama.cpp documentation for a more detailed usage guide.

We advise you to clone llama.cpp and install it following the official guide; we track the latest version of llama.cpp. The following demonstration assumes you are running commands from the llama.cpp repository directory.
```bash
./llama-cli -hf t-tech/T-pro-it-2.0-GGUF:Q8_0 --jinja --color -ngl 99 -fa -sm row --temp 0.6 --presence-penalty 1.0 -c 40960 -n 32768 --no-context-shift
```
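The model can also be served over llama.cpp's OpenAI-compatible HTTP API with `llama-server`. A sketch reusing the options above (port 8080 is the llama-server default):

```bash
# Serve the Q8_0 quantisation; -hf pulls the file from the Hub
# exactly as in the llama-cli command above.
./llama-server -hf t-tech/T-pro-it-2.0-GGUF:Q8_0 \
  --jinja -ngl 99 -fa -c 40960 --port 8080
```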
### ollama
Check out our ollama documentation for a more detailed usage guide.

You can run T-pro-it-2.0 with one command:

```bash
ollama run hf.co/t-tech/T-pro-it-2.0-GGUF:Q8_0
```
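Inside the interactive session, sampling can be aligned with the llama.cpp settings above via ollama's `/set parameter` command (a sketch; `temperature` and `num_ctx` are ollama parameter names, and the values mirror the llama-cli flags):

```text
>>> /set parameter temperature 0.6
>>> /set parameter num_ctx 40960
```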
## Switching Between Thinking and Non-Thinking Mode
You can add `/think` and `/no_think` to user prompts or system messages to switch the model's thinking mode from turn to turn. In multi-turn conversations, the model follows the most recent instruction.
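For example, when chatting through the OpenAI-compatible endpoint of `llama-server` started as shown above, the toggle is simply appended to the user message (a sketch; host and port are assumed defaults):

```bash
# Disable thinking for this turn only by appending /no_think.
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "messages": [
          {"role": "user", "content": "Briefly explain GGUF quantisation. /no_think"}
        ]
      }'
```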