QLoRA-ready Coding Models: a collection for finetuning. A GPU is needed for both quantization and finetuning. • 29 items • Updated 3 days ago
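For context, here is a minimal sketch of what "QLoRA-ready" typically means in practice: the base model is loaded in 4-bit with bitsandbytes on a GPU and wrapped with LoRA adapters via peft, so only the small adapter matrices are trained. The model id, LoRA rank, and target modules below are illustrative assumptions, not settings taken from the collection.

```python
# Minimal QLoRA setup sketch (assumed hyperparameters, not prescribed by the collection).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

model_id = "Qwen/Qwen2.5-Coder-32B-Instruct"  # example; any model from the collection works

# 4-bit NF4 quantization; this step requires a GPU.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
)

# Attach low-rank adapters; the frozen 4-bit base stays untouched,
# and only these adapter weights are updated during finetuning.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```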
Post: Qwen 2.5 Coder 32B is a dime among nickels. Amazing performance for its size, so much so that it earns a spot in the duo leaderboard. The day of small models is here. onekq-ai/WebApp1K-models-leaderboard Qwen/Qwen2.5-Coder-32B-Instruct
Ollama-ready Coding Models: a collection for inference. A CPU is enough for both quantization and inference. • 32 items • Updated 3 days ago
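For the inference side, a minimal sketch of querying one of these models through a locally running Ollama server, assuming the server is on its default port and a quantized model has already been pulled with `ollama pull`. The model tag below is an assumed placeholder.

```python
# Minimal sketch: query a local Ollama server over its REST API.
# The model tag is an assumption; substitute whichever model you pulled.
import requests

response = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "qwen2.5-coder:32b",  # assumed tag
        "prompt": "Write a Python function that reverses a linked list.",
        "stream": False,               # return one JSON object instead of a stream
    },
    timeout=300,
)
response.raise_for_status()
print(response.json()["response"])
```

Because Ollama serves GGUF-quantized weights, this request runs entirely on CPU if no GPU is available, which is the point of the collection.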