
Models to evaluate

Collecting models I want to evaluate on shadereval-task2 (https://github.com/bigcode-project/bigcode-evaluation-harness/pull/173), at fp16!!
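For context on the numbers below: "error rate" here means the share of generations that don't produce working shader code. A minimal sketch of that bookkeeping, assuming per-generation tags; only the incomplete_generation tag appears in these notes, the other tag names are made up for illustration and this is not the shadermatch implementation:

```python
# Hedged sketch: error rate = fraction of generations tagged as failures.
# "code_error" and the non-error tags are hypothetical label names.

def error_rate(tags: list[str]) -> float:
    """Fraction of generations whose shader code failed (compile/run error)."""
    if not tags:
        return 0.0
    errors = sum(1 for t in tags if t in {"code_error", "incomplete_generation"})
    return errors / len(tags)

# toy example: 2 failures out of 8 generations
tags = ["match"] * 4 + ["variation"] * 2 + ["code_error", "incomplete_generation"]
print(error_rate(tags))  # 0.25
```

With this definition, the spread below (0.353 for the best model vs 0.79 for phi-2) means the worst model fails on roughly four out of five completions.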
- (model name missing, 7B): currently #1 with an error rate of 0.353
- deepseek-ai/deepseek-coder-1.3b-base: previous #1, error rate 0.38
- stabilityai/stable-code-3b (3B)
- bigcode/starcoder2-7b (7B)
- bigcode/starcoder2-3b (3B)
- Vipitis/santacoder-finetuned-Shadertoys-fine (1B): notable difference between fp16 and fp32, will need to run bf16; likely contaminated
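A gap between fp16 and fp32 results is plausible from rounding alone: half precision has only a 10-bit mantissa, so values that fp32 still distinguishes (e.g. two nearly-tied logits) can collapse to the same number in fp16, flipping or losing a ranking. A toy illustration using the stdlib `struct` half-precision format, unrelated to the actual model weights:

```python
import struct

def to_fp16(x: float) -> float:
    # round-trip through IEEE 754 half precision ('e' format in struct)
    return struct.unpack('e', struct.pack('e', x))[0]

a, b = 1.0000, 1.0004
assert b > a  # still distinguishable at single/double precision
# fp16 spacing near 1.0 is 2**-10 ≈ 0.000977, so both round to 1.0
print(to_fp16(a) == to_fp16(b))  # True: the difference is lost at half precision
```

bf16 trades even more mantissa bits for fp32's exponent range, so it can behave differently from both, which is why running all three is a fair sanity check.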
- google/gemma-7b (9B)
- google/codegemma-2b (3B)
- Vipitis/santacoder-finetuned-Shadertoys (1B): likely contaminated
- Deci/DeciCoder-1b (1B): current result is not fully correct and needs a rerun; however, I don't know a transformers version that runs it without errors
- google/gemma-2b (3B)
- Salesforce/codegen2-1B_P: needs a rerun with the incomplete_generation tag
- Vipitis/santacoder-finetuned-the-stack-glsl (1B)
- microsoft/phi-1_5 (1B)
- microsoft/phi-1 (1B)
- microsoft/phi-2 (3B): performs the worst, with an error rate of 0.79
ShaderMatch 🚀: code completion benchmark for GLSL shader code.
Note: this space holds the evaluation metric that is used, and it also hosts a usually up-to-date leaderboard. Check for updates: https://huggingface.co/spaces/Vipitis/shadermatch/blob/main/result_preview.png
- zai-org/codegeex2-6b
- deepseek-ai/deepseek-coder-5.7bmqa-base
- deepseek-ai/deepseek-coder-6.7b-base (7B)
- bigcode/gpt_bigcode-santacoder (1B)
- bigcode/starcoderbase
- google/codegemma-7b (9B)
- aiXcoder/aixcoder-7b-base (7B)
- Qwen/CodeQwen1.5-7B (7B)
- ibm-granite/granite-3b-code-base-2k (3B)
- mistralai/Codestral-22B-v0.1 (22B)
- deepseek-ai/DeepSeek-Coder-V2-Lite-Base (16B)
- Salesforce/codet5p-2b
- facebook/llm-compiler-7b
- meta-llama/Llama-3.1-8B (8B)
- meta-llama/CodeLlama-7b-hf (7B)
- 01-ai/Yi-Coder-9B (9B)
- Qwen/Qwen2.5-Coder-1.5B (2B)
- Qwen/Qwen2.5-Coder-7B (8B)
- infly/OpenCoder-1.5B-Base (2B)
- Qwen/Qwen2.5-Coder-0.5B (0.5B)