Qwen3-30B-A3B-pruned-GGUF / scores / Qwen3-30B-A3B-pruned-F16.tqa
Generate Perplexity, KLD, ARC, HellaSwag, MMLU, Truthful QA and WinoGrande scores
commit be45168
build: 5553 (c7e0a205) with cc (GCC) 11.5.0 20240719 (Red Hat 11.5.0-5) for x86_64-amazon-linux
llama_model_load_from_file_impl: using device CUDA0 (Tesla T4) - 14810 MiB free
llama_model_load_from_file_impl: using device CUDA1 (Tesla T4) - 14810 MiB free
llama_model_load_from_file_impl: using device CUDA2 (Tesla T4) - 14810 MiB free
llama_model_load_from_file_impl: using device CUDA3 (Tesla T4) - 14810 MiB free
llama_model_loader: loaded meta data with 40 key-value pairs and 579 tensors from ./Qwen3-30B-A3B-F16.gguf (version GGUF V3 (latest))
Final result: 31.2000 +/- 1.6929
Random chance: 19.8992 +/- 1.4588
llama_perf_context_print: load time = 13704.38 ms
llama_perf_context_print: prompt eval time = 426482.94 ms / 49696 tokens ( 8.58 ms per token, 116.53 tokens per second)
llama_perf_context_print: eval time = 0.00 ms / 1 runs ( 0.00 ms per token, inf tokens per second)
llama_perf_context_print: total time = 433376.19 ms / 49697 tokens