LlamaEdge compatible quants for SmolVLM2 models.
AI & ML interests
Run open-source LLMs locally, across CPUs and GPUs, in Rust and Wasm, without changing the binary!
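Every GGUF repo on this page is meant to be served locally with the LlamaEdge runtime on WasmEdge. A minimal sketch of the usual quickstart, using one of the Qwen2.5 quants as an example — the exact quant file name, context size, and model name below are illustrative assumptions; check the model card of the repo you pick for the recommended values:

```shell
# Install WasmEdge with the GGML (llama.cpp) plugin
curl -sSf https://raw.githubusercontent.com/WasmEdge/WasmEdge/master/utils/install.sh | \
  bash -s -- --plugin wasi_nn-ggml

# Download a quantized model file from one of the repos below
# (file name is illustrative; check the repo's file list)
curl -LO https://huggingface.co/second-state/Qwen2.5-7B-Instruct-GGUF/resolve/main/Qwen2.5-7B-Instruct-Q5_K_M.gguf

# Download the portable LlamaEdge API server: a single Wasm binary
# that runs unchanged on CPU and GPU hosts
curl -LO https://github.com/LlamaEdge/LlamaEdge/releases/latest/download/llama-api-server.wasm

# Serve an OpenAI-compatible chat API on localhost
wasmedge --dir .:. \
  --nn-preload default:GGML:AUTO:Qwen2.5-7B-Instruct-Q5_K_M.gguf \
  llama-api-server.wasm \
  --prompt-template chatml \
  --ctx-size 4096 \
  --model-name Qwen2.5-7B-Instruct
```

Because the inference binary is Wasm, the same `llama-api-server.wasm` file runs on any host where WasmEdge and the `wasi_nn-ggml` plugin are installed; only the plugin build decides whether inference runs on CPU or GPU.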
LlamaEdge compatible quants for Qwen3 models.
LlamaEdge compatible quants for EXAONE-3.5 models.
LlamaEdge compatible quants for Gemma-3-it models.
- second-state/stable-diffusion-v1-5-GGUF: Text-to-Image • 1B • Updated • 15.3k • 12
- second-state/stable-diffusion-v-1-4-GGUF: Text-to-Image • 1B • Updated • 402 • 3
- second-state/stable-diffusion-3.5-medium-GGUF: Text-to-Image • 0.7B • Updated • 4.12k • 9
- second-state/stable-diffusion-3.5-large-GGUF: Text-to-Image • 0.7B • Updated • 6.56k • 9
LlamaEdge compatible quants for Qwen2-VL models.
LlamaEdge compatible quants for tool-use models.
- second-state/Llama-3-Groq-8B-Tool-Use-GGUF: Text Generation • 8B • Updated • 280 • 2
- second-state/Llama-3-Groq-70B-Tool-Use-GGUF: Text Generation • 71B • Updated • 192 • 2
- second-state/Hermes-2-Pro-Llama-3-8B-GGUF: Text Generation • 8B • Updated • 376 • 2
- second-state/Nemotron-Mini-4B-Instruct-GGUF: 4B • Updated • 450
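Tool-use quants like the ones above are typically driven through an OpenAI-compatible chat API served locally. A minimal sketch of building such a tool-calling request payload in Python — the model name and the `get_current_weather` tool are illustrative assumptions, not part of any of these repos:

```python
import json

# Sketch of an OpenAI-style tool-calling request for a locally served
# tool-use model. Model name and tool definition are hypothetical.
payload = {
    "model": "Llama-3-Groq-8B-Tool-Use",
    "messages": [
        {"role": "user", "content": "What is the weather in Berlin today?"}
    ],
    "tools": [
        {
            "type": "function",
            "function": {
                "name": "get_current_weather",
                "description": "Get the current weather for a city.",
                "parameters": {
                    "type": "object",
                    "properties": {
                        "city": {"type": "string", "description": "City name"}
                    },
                    "required": ["city"],
                },
            },
        }
    ],
}

# POST this JSON body to the server's /v1/chat/completions endpoint;
# a tool-use model is expected to answer with a `tool_calls` entry
# naming `get_current_weather` rather than free text.
body = json.dumps(payload)
```

The point of the dedicated tool-use fine-tunes is that they reliably emit structured `tool_calls` responses for payloads shaped like this, instead of describing the tool call in prose.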
LlamaEdge compatible quants for Llama 3.2 3B and 1B Instruct models.
LlamaEdge compatible quants for Yi-1.5 chat models.
- second-state/Yi-1.5-9B-Chat-16K-GGUF: Text Generation • 9B • Updated • 383 • 5
- second-state/Yi-1.5-34B-Chat-16K-GGUF: Text Generation • 34B • Updated • 189 • 3
- second-state/Yi-1.5-9B-Chat-GGUF: Text Generation • 9B • Updated • 494 • 8
- second-state/Yi-1.5-6B-Chat-GGUF: Text Generation • 6B • Updated • 277 • 3
LlamaEdge compatible quants for Qwen2.5-VL models.
LlamaEdge compatible quants for Tessa-T1 models.
LlamaEdge compatible quants for EXAONE-Deep models.
LlamaEdge compatible quants for DeepSeek-R1 distilled models.
- second-state/DeepSeek-R1-Distill-Qwen-1.5B-GGUF: Text Generation • 2B • Updated • 284
- second-state/DeepSeek-R1-Distill-Qwen-7B-GGUF: Text Generation • 8B • Updated • 240 • 1
- second-state/DeepSeek-R1-Distill-Qwen-14B-GGUF: Text Generation • 15B • Updated • 336
- second-state/DeepSeek-R1-Distill-Qwen-32B-GGUF: Text Generation • 33B • Updated • 235
LlamaEdge compatible quants for Falcon3-Instruct models.
- second-state/Falcon3-10B-Instruct-GGUF: Text Generation • 10B • Updated • 213 • 1
- second-state/Falcon3-7B-Instruct-GGUF: Text Generation • 7B • Updated • 78 • 2
- second-state/Falcon3-3B-Instruct-GGUF: Text Generation • 3B • Updated • 141
- second-state/Falcon3-1B-Instruct-GGUF: Text Generation • 2B • Updated • 423
LlamaEdge compatible quants for Qwen2.5-Coder models.
- second-state/Qwen2.5-Coder-0.5B-Instruct-GGUF: Text Generation • 0.5B • Updated • 219
- second-state/Qwen2.5-Coder-3B-Instruct-GGUF: Text Generation • 3B • Updated • 171
- second-state/Qwen2.5-Coder-14B-Instruct-GGUF: Text Generation • 15B • Updated • 75
- second-state/Qwen2.5-Coder-32B-Instruct-GGUF: Text Generation • 33B • Updated • 398
LlamaEdge compatible quants for InternLM-2.5 models.
LlamaEdge compatible quants for Qwen 2.5 instruct and coder models.
- second-state/Qwen2.5-72B-Instruct-GGUF: Text Generation • 73B • Updated • 272 • 1
- second-state/Qwen2.5-32B-Instruct-GGUF: Text Generation • 33B • Updated • 137
- second-state/Qwen2.5-14B-Instruct-GGUF: Text Generation • 15B • Updated • 370
- second-state/Qwen2.5-7B-Instruct-GGUF: Text Generation • 8B • Updated • 167
LlamaEdge compatible quants for FLUX.1 models.
- second-state/FLUX.1-schnell-GGUF: Text-to-Image • 83.8M • Updated • 773 • 12
- second-state/FLUX.1-dev-GGUF: Text-to-Image • 0.1B • Updated • 1.3k • 11
- second-state/FLUX.1-Redux-dev-GGUF: Text-to-Image • 64.5M • Updated • 256 • 12
- second-state/FLUX.1-Canny-dev-GGUF: Text-to-Image • 12B • Updated • 248 • 13