Hunyuan-MT-7B

Model creator: tencent
Original model: tencent/Hunyuan-MT-7B
GGUF quantization: provided by olegshulyakov using llama.cpp

Special thanks

πŸ™ Special thanks to Georgi Gerganov and the whole team working on llama.cpp for making all of this possible.

Use with Ollama

ollama run "hf.co/olegshulyakov/Hunyuan-MT-7B-GGUF:Q6_K"

Use with LM Studio

lms load "olegshulyakov/Hunyuan-MT-7B-GGUF"

Use with llama.cpp CLI

llama-cli --hf "olegshulyakov/Hunyuan-MT-7B-GGUF:Q6_K" -p "The meaning to life and the universe is"

Use with llama.cpp Server

llama-server --hf "olegshulyakov/Hunyuan-MT-7B-GGUF:Q6_K" -c 4096
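Once started, llama-server exposes an OpenAI-compatible HTTP API (by default on port 8080), so the model can be queried from any OpenAI-style client. A minimal sketch of building a chat request body; the `build_chat_request` helper and the translation prompt wording are illustrative assumptions, not the model's official prompt format:

```python
import json

def build_chat_request(text, target_lang="English"):
    """Build a JSON body for llama-server's /v1/chat/completions endpoint.

    The prompt wording below is a plausible translation instruction for a
    machine-translation model, not an officially documented template.
    """
    return {
        "messages": [
            {
                "role": "user",
                "content": f"Translate the following text into {target_lang}:\n\n{text}",
            }
        ],
        "temperature": 0.7,
    }

body = build_chat_request("Bonjour le monde", "English")
# Serialize for sending, e.g. via requests.post("http://localhost:8080/v1/chat/completions", ...)
print(json.dumps(body, ensure_ascii=False))
```

The same payload can be sent with curl or any HTTP library; because the endpoint follows the OpenAI chat schema, existing OpenAI SDKs pointed at `http://localhost:8080/v1` should also work.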
Model details

Format: GGUF
Model size: 7.5B params
Architecture: hunyuan-dense

Quantization: 6-bit (Q6_K)

