SmolLM3-3B-Base GGUF Quantized
This is a quantized GGUF version of HuggingFaceTB/SmolLM3-3B-Base, optimized for fast, local inference with llama.cpp, llm-gguf, or Ollama.
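A minimal Python sketch of local inference using llama-cpp-python together with huggingface_hub (both assumed to be installed via pip). The GGUF filename below is hypothetical; check the repository's file listing and substitute the quantization you want.

```python
# Sketch: download one GGUF file from this repo and run a short completion locally.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# The filename is an assumption -- replace it with an actual file from the repo.
model_path = hf_hub_download(
    repo_id="yasserrmd/smollm3-gguf",
    filename="smollm3-3b-base.Q4_K_M.gguf",
)

llm = Llama(model_path=model_path, n_ctx=4096)

out = llm("The capital of France is", max_tokens=32, temperature=0.7)
print(out["choices"][0]["text"])
```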
Original Model
For training details, tokenizer, chat format, and architecture, see HuggingFaceTB/SmolLM3-3B-Base on Hugging Face.
Available quantizations: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit, and 32-bit GGUF variants.
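As a rough rule of thumb, the on-disk size of a GGUF file scales with parameter count times bits per weight. The sketch below is an approximation that ignores mixed-precision tensors, metadata, and KV-cache memory, so treat the numbers as lower bounds when judging hardware fit.

```python
# Rough GGUF size estimate: parameters * bits-per-weight / 8 bytes.
PARAMS = 3.0e9  # SmolLM3-3B has roughly 3 billion parameters

for bits in (2, 3, 4, 5, 6, 8, 32):
    size_gb = PARAMS * bits / 8 / 1e9
    print(f"{bits:>2}-bit: ~{size_gb:.1f} GB")
```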
Model tree for yasserrmd/smollm3-gguf
Base model: HuggingFaceTB/SmolLM3-3B-Base
Finetuned variant of the base model: HuggingFaceTB/SmolLM3-3B