Active filters: codeqwen
| Model | Task | Params | Downloads | Likes |
|---|---|---|---|---|
| DavidAU/Qwen3-42B-A3B-2507-Thinking-Abliterated-uncensored-TOTAL-RECALL-v2-Medium-MASTER-CODER | Text Generation | 42B | 281 | 13 |
| Qwen/Qwen2.5-Coder-32B-Instruct | Text Generation | 33B | 129k | 1.93k |
| Qwen/Qwen2.5-Coder-7B-Instruct | Text Generation | 8B | 924k | 542 |
| Qwen/Qwen2.5-Coder-7B-Instruct-GGUF | Text Generation | 8B | 29.2k | 136 |
| mradermacher/Qwen3-42B-A3B-2507-Thinking-Abliterated-uncensored-TOTAL-RECALL-v2-Medium-MASTER-CODER-i1-GGUF | — | 42B | 7.04k | 2 |
| Qwen/Qwen2.5-Coder-7B | Text Generation | 8B | 35.6k | 122 |
| Qwen/Qwen2.5-Coder-1.5B-Instruct | Text Generation | 2B | 167k | 84 |
| Qwen/Qwen2.5-Coder-0.5B-Instruct | Text Generation | 0.5B | 29k | 52 |
| Qwen/Qwen2.5-Coder-3B-Instruct | Text Generation | 3B | 42.3k | 76 |
| bartowski/Qwen2.5.1-Coder-7B-Instruct-GGUF | Text Generation | 8B | 4.33k | 95 |
| Qwen/Qwen2.5-Coder-3B-Instruct-GGUF | Text Generation | 3B | 6.88k | 42 |
| lmstudio-community/Qwen2.5-Coder-14B-Instruct-MLX-8bit | Text Generation | 4B | 38.7k | 1 |
| unsloth/Qwen2.5-Coder-32B-Instruct-bnb-4bit | Text Generation | 18B | 5.1k | 5 |
| unsloth/Qwen2.5-Coder-14B-Instruct-128K-GGUF | — | 15B | 2.03k | 29 |
| RedHatAI/Qwen2.5-Coder-14B-Instruct-FP8-dynamic | Text Generation | 15B | 32 | 1 |
| DavidAU/Qwen3-Zero-Coder-Reasoning-V2-0.8B-NEO-EX-GGUF | Text Generation | 0.8B | 27.2k | 6 |
| DavidAU/Qwen3-Coder-42B-A3B-Instruct-TOTAL-RECALL-MASTER-CODER-M | Text Generation | 42B | 15 | 3 |
| nightmedia/Qwen3-53B-A3B-2507-THINKING-TOTAL-RECALL-v2-MASTER-CODER-1m-dwq4-mlx | Text Generation | 53B | 110 | 1 |
| mradermacher/Qwen3-Coder-42B-A3B-Instruct-TOTAL-RECALL-MASTER-CODER-M-i1-GGUF | — | 42B | 551 | 2 |
| mradermacher/Qwen3-42B-A3B-2507-Thinking-Abliterated-uncensored-TOTAL-RECALL-v2-Medium-MASTER-CODER-GGUF | — | 42B | 7.69k | 5 |
| mradermacher/Qwen3-MOE-4x0.6B-2.4B-Writing-Thunder-V1.2-GGUF | — | 2B | 667 | 1 |
| nightmedia/Qwen3-Yoyo-V3-42B-A3B-Thinking-Total-Recall-qx64-hi-mlx | Text Generation | 42B | 70 | 1 |
| study-hjt/CodeQwen1.5-7B-Chat-GPTQ-Int4 | Text Generation | 2B | 7 | — |
| study-hjt/CodeQwen1.5-7B-Chat-GPTQ-Int8 | Text Generation | 2B | 15 | 1 |
| lmstudio-community/Qwen2.5-Coder-7B-Instruct-GGUF | Text Generation | 8B | 1.97k | 20 |
| bartowski/Qwen2.5-Coder-7B-Instruct-GGUF | Text Generation | 8B | 12.6k | 30 |
| Qwen/Qwen2.5-Coder-1.5B | Text Generation | 2B | 294k | 63 |
| Qwen/Qwen2.5-Coder-1.5B-Instruct-GGUF | Text Generation | 2B | 3.35k | 25 |
| bartowski/Qwen2.5-Coder-1.5B-Instruct-GGUF | Text Generation | 2B | 2.64k | 9 |
| lmstudio-community/Qwen2.5-Coder-1.5B-Instruct-GGUF | Text Generation | 2B | 509 | 2 |
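Each entry in the table is a Hugging Face model repository ID. As a minimal sketch (not part of the listing itself), loading one of the standard instruct checkpoints, such as Qwen/Qwen2.5-Coder-7B-Instruct, with the `transformers` library looks roughly like the following; the prompt and generation settings are illustrative assumptions:

```python
# Minimal sketch: load a model ID from the listing with transformers.
# Assumes transformers and a PyTorch backend are installed; the prompt
# and max_new_tokens value are arbitrary examples.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen2.5-Coder-7B-Instruct"  # any full-precision ID from the table

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # pick the checkpoint's native dtype
    device_map="auto",    # place weights on available GPU(s)/CPU
)

messages = [
    {"role": "system", "content": "You are a helpful coding assistant."},
    {"role": "user", "content": "Write a Python function that reverses a string."},
]
# Build the chat-formatted prompt and tokenize it in one step.
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

The GGUF and MLX repositories in the list are quantized conversions aimed at llama.cpp-style runtimes (e.g. LM Studio) or Apple's MLX rather than `transformers`, and the GPTQ, bnb-4bit, and FP8 variants require their respective quantization backends.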