Active filters: quantllm
codewithdark/Llama-3.2-3B-4bit • 3B • Updated • 12
codewithdark/Llama-3.2-3B-GGUF-4bit • 3B • Updated • 4
codewithdark/Llama-3.2-3B-4bit-mlx • Text Generation • 3B • Updated • 48
QuantLLM/Llama-3.2-3B-4bit-mlx • Text Generation • 3B • Updated • 20
QuantLLM/Llama-3.2-3B-2bit-mlx • Text Generation • 3B • Updated • 19
QuantLLM/Llama-3.2-3B-8bit-mlx • Text Generation • 3B • Updated • 56
QuantLLM/Llama-3.2-3B-5bit-mlx • Text Generation • 3B • Updated • 73
QuantLLM/Llama-3.2-3B-5bit-gguf • 3B • Updated • 7
QuantLLM/Llama-3.2-3B-2bit-gguf • 3B • Updated • 9
QuantLLM/functiongemma-270m-it-8bit-gguf • 0.3B • Updated • 15 • 1
QuantLLM/functiongemma-270m-it-4bit-gguf • 0.3B • Updated • 22
QuantLLM/functiongemma-270m-it-4bit-mlx • Text Generation • 0.3B • Updated • 51