Active filters: vllm
MikeRoz/mistralai_Mistral-Large-Instruct-2411-6.0bpw-h6-exl2 • 11 downloads
BigHuggyD/Mistral-Large-Instruct-2411_exl2_4.0bpw_h6
wolfram/Mistral-Large-Instruct-2411-2.75bpw-h6-exl2 • 8 downloads • 2 likes
BigHuggyD/Mistral-Large-Instruct-2411_exl2_5.0bpw_h6
MikeRoz/mistralai_Mistral-Large-Instruct-2411-3.5bpw-h6-exl2
TechxGenus/Mistral-Large-Instruct-2411-GPTQ • 17B • 31 downloads • 2 likes
BigHuggyD/Mistral-Large-Instruct-2411_exl2_6.0bpw_h6
MikeRoz/mistralai_Mistral-Large-Instruct-2411-2.25bpw-h6-exl2
bullerwins/Mistral-Large-Instruct-2411-exl2_3.0bpw • 11 downloads • 1 like
bullerwins/Mistral-Large-Instruct-2411-exl2_3.5bpw • 10 downloads
bullerwins/Mistral-Large-Instruct-2411-exl2_6.0bpw • 10 downloads
bullerwins/Mistral-Large-Instruct-2411-exl2_8.0bpw • 11 downloads
bullerwins/Mistral-Large-Instruct-2411-exl2_5.5bpw
tokoin/Ministral-8B-Instruct-2410-Q4_0-GGUF • 8B • 4 downloads
RedHatAI/Sparse-Llama-3.1-8B-2of4 • Text Generation • 8B • 40 downloads • 62 likes
RedHatAI/Sparse-Llama-3.1-8B-ultrachat_200k-2of4-quantized.w4a16 • Text Generation • 2B • 5 downloads • 3 likes
BigHuggyD/Mistral-Large-Instruct-2411_exl2_7.0bpw_h8
BigHuggyD/Mistral-Large-Instruct-2411_exl2_8.0bpw_h8
ZeroXClem/L3-Aspire-Heart-Matrix-8B • Text Generation • 8B • 6 downloads • 4 likes
ZeroXClem/L3-Aspire-Heart-Matrix-8B-Q8_0-GGUF • Text Generation • 8B • 3 downloads
ZeroXClem/L3-Aspire-Heart-Matrix-8B-Q6_K-GGUF • Text Generation • 8B • 1 download
ZeroXClem/L3-Aspire-Heart-Matrix-8B-Q5_K_M-GGUF • Text Generation • 8B • 5 downloads
ZeroXClem/L3-Aspire-Heart-Matrix-8B-Q5_0-GGUF • Text Generation • 8B • 1 download • 1 like
ZeroXClem/L3-Aspire-Heart-Matrix-8B-Q4_K_M-GGUF • Text Generation • 8B • 8 downloads • 1 like
ZeroXClem/L3-Aspire-Heart-Matrix-8B-Q4_0-GGUF • Text Generation • 8B • 4 downloads
mradermacher/L3.1-Aspire-Heart-Matrix-8B-GGUF • 8B • 148 downloads • 1 like
mradermacher/L3.1-Aspire-Heart-Matrix-8B-i1-GGUF • 8B • 89 downloads • 1 like
RedHatAI/Sparse-Llama-3.1-8B-ultrachat_200k-2of4 • Text Generation • 8B • 8 downloads • 1 like
RedHatAI/Sparse-Llama-3.1-8B-evolcodealpaca-2of4 • Text Generation • 8B • 6 downloads • 1 like
RedHatAI/Sparse-Llama-3.1-8B-evolcodealpaca-2of4-quantized.w4a16 • Text Generation • 2B • 6 downloads
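
The listing above is filtered on the vllm library tag. As an illustration only, the sketch below shows how one of the listed checkpoints might be loaded for offline generation with vLLM's Python API; the choice of RedHatAI/Sparse-Llama-3.1-8B-2of4, the sampling settings, and the hardware assumptions are mine, not part of the listing, and some entries (e.g. the exl2 repositories) may require other runtimes despite the tag.

```python
# Minimal sketch: offline generation with vLLM, assuming vLLM is installed
# (pip install vllm) and the chosen checkpoint fits on the available GPU.
from vllm import LLM, SamplingParams

# Any vLLM-compatible repository from the listing could be substituted here.
llm = LLM(
    model="RedHatAI/Sparse-Llama-3.1-8B-2of4",
    # tensor_parallel_size=2,  # uncomment to shard across two GPUs
)

params = SamplingParams(temperature=0.7, max_tokens=128)
outputs = llm.generate(
    ["Explain 2:4 structured sparsity in one sentence."], params
)
print(outputs[0].outputs[0].text)
```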