Active filters: 1bit
legraphista/Qwen2.5-3B-Instruct-IMat-GGUF • Text Generation • 3B • 2.98k
legraphista/Qwen2.5-7B-Instruct-IMat-GGUF • Text Generation • 8B • 2.56k
legraphista/Qwen2.5-14B-Instruct-IMat-GGUF • Text Generation • 15B • 3.94k
legraphista/Qwen2.5-32B-Instruct-IMat-GGUF • Text Generation • 33B • 3.13k
legraphista/Qwen2.5-Coder-1.5B-Instruct-IMat-GGUF • Text Generation • 2B • 2.31k
legraphista/Qwen2.5-Math-1.5B-Instruct-IMat-GGUF • Text Generation • 2B • 2.15k
legraphista/Qwen2.5-Coder-7B-Instruct-IMat-GGUF • Text Generation • 8B • 2.03k
legraphista/Qwen2.5-Math-7B-Instruct-IMat-GGUF • Text Generation • 8B • 1.92k
legraphista/Qwen2.5-72B-Instruct-IMat-GGUF • Text Generation • 73B • 3.14k
legraphista/Llama-3.2-1B-Instruct-IMat-GGUF • Text Generation • 1B • 2.27k
legraphista/Llama-3.2-3B-Instruct-IMat-GGUF • Text Generation • 3B • 2.66k
mradermacher/Bitnet-M7-70m-GGUF • 0.1B • 133
mradermacher/Bitnet-M7-70m-i1-GGUF • 0.1B • 324
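A listing like the one above can also be retrieved programmatically with the huggingface_hub library's list_models call. The sketch below assumes the Hub exposes the GGUF format and the 1-bit quantization filter as the tags "gguf" and "1bit" (the tag names are an assumption based on the active filter shown above); author and sort parameters are only illustrative.

```python
# Minimal sketch: query the Hugging Face Hub for models matching the
# filters shown in the listing above, using huggingface_hub.list_models.
from huggingface_hub import list_models

models = list_models(
    filter=["gguf", "1bit"],  # assumed tag names for the active filters
    author="legraphista",     # one of the uploaders in the listing above
    sort="downloads",         # order by download count, as on the Hub page
    limit=20,
)

for m in models:
    print(m.id, m.downloads)
```

Swapping the author argument for "mradermacher" (or dropping it entirely) would cover the other entries in the listing; the numbers printed correspond to the per-model counts shown above.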