| Model | Task | Params | Updated | Downloads | Likes |
|---|---|---|---|---|---|
| hugging-quants/Meta-Llama-3.1-405B-Instruct-AWQ-INT4 | Text Generation | 59B | Sep 13, 2024 | 8.54k | 36 |
| hugging-quants/Meta-Llama-3.1-8B-Instruct-AWQ-INT4 | Text Generation | 2B | Aug 7, 2024 | 280k | 68 |
| hugging-quants/Meta-Llama-3.1-70B-Instruct-AWQ-INT4 | Text Generation | 11B | Aug 7, 2024 | 154k | 101 |
| UCLA-EMC/Meta-Llama-3.1-8B-Instruct-AWQ-INT4-32-2.17B | Text Generation | 2B | Aug 30, 2024 | 17 | 1 |
| hugging-quants/Mixtral-8x7B-Instruct-v0.1-AWQ-INT4 | Text Generation | 6B | Oct 7, 2024 | 11.3k | |
| ibnzterrell/Nvidia-Llama-3.1-Nemotron-70B-Instruct-HF-AWQ-INT4 | Text Generation | 11B | Dec 7, 2024 | 1.77k | 5 |
| fbaldassarri/TinyLlama_TinyLlama_v1.1-autoawq-int4-gs128-asym | Text Generation | 0.3B | Nov 27, 2024 | 17 | |
| fbaldassarri/TinyLlama_TinyLlama_v1.1-autoawq-int4-gs128-sym | Text Generation | 0.3B | Nov 27, 2024 | 17 | |
| fbaldassarri/EleutherAI_pythia-14m-autoawq-int4-gs128-asym | Text Generation | 0.0B | Nov 27, 2024 | 13 | |
| fbaldassarri/EleutherAI_pythia-14m-autoawq-int4-gs128-sym | Text Generation | 0.0B | Nov 27, 2024 | 14 | |
| fbaldassarri/EleutherAI_pythia-31m-autoawq-int4-gs128-asym | Text Generation | 0.0B | Nov 27, 2024 | 13 | |
| fbaldassarri/EleutherAI_pythia-31m-autoawq-int4-gs128-sym | Text Generation | 0.0B | Nov 27, 2024 | 31 | |
| fbaldassarri/EleutherAI_pythia-70m-deduped-autoawq-int4-gs128-asym | Text Generation | 0.1B | Nov 27, 2024 | 76 | |
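As a rough illustration (not taken from any of these model cards), an AWQ-INT4 checkpoint such as hugging-quants/Meta-Llama-3.1-8B-Instruct-AWQ-INT4 can typically be loaded with 🤗 Transformers, which reads the quantization config stored in the repo. The sketch below assumes a CUDA-capable GPU and that the `transformers`, `autoawq`, and `accelerate` packages are installed; the prompt and generation settings are placeholders.

```python
# Minimal sketch: loading one of the AWQ-INT4 checkpoints listed above with 🤗 Transformers.
# Assumes `pip install transformers autoawq accelerate` and a CUDA GPU; only the repo id
# comes from the list, everything else here is illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "hugging-quants/Meta-Llama-3.1-8B-Instruct-AWQ-INT4"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # AWQ kernels run with fp16 activations
    device_map="auto",          # place layers on available GPUs automatically
)

# Build a chat prompt with the model's chat template and generate a short reply.
messages = [{"role": "user", "content": "Explain AWQ INT4 quantization in one sentence."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

The same pattern should work for the other repos in the table, with memory requirements scaling with the quantized parameter count shown in the Params column.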