| Model | Task | Size | Updated | Downloads | Likes |
|---|---|---|---|---|---|
| RedHatAI/Phi-3-medium-128k-instruct-quantized.w8a8 | Text Generation | 14B | Oct 9, 2024 | 53 | 2 |
| RedHatAI/Phi-3-mini-128k-instruct-quantized.w4a16 | Text Generation | 0.7B | Oct 9, 2024 | 7 | 1 |
| RedHatAI/Phi-3-medium-128k-instruct-quantized.w4a16 | Text Generation | 2B | Oct 9, 2024 | 1.29k | 3 |
| nm-testing/Meta-Llama-3-8B-Instruct-W8A8-FP8-Channelwise-compressed-tensors | Text Generation | 8B | Oct 9, 2024 | 3 | |
| nm-testing/Meta-Llama-3-8B-Instruct-Non-Uniform-compressed-tensors | Text Generation | 8B | Oct 9, 2024 | 3 | |
| nm-testing/Meta-Llama-3-8B-Instruct-nonuniform-test | Text Generation | 8B | Oct 9, 2024 | 6.54k | |
| RedHatAI/Mistral-7B-Instruct-v0.3-quantized.w8a8 | Text Generation | 7B | Oct 9, 2024 | 117 | 2 |
| nm-testing/Meta-Llama-3-405B-Instruct-Up-Merge-fp8 | Text Generation | 405B | Oct 9, 2024 | 4 | 4 |
| nm-testing/Llama-2-70b-chat-hf-W8A8-Dynamic-Per-Token | Text Generation | 69B | Oct 9, 2024 | 6 | |
| nm-testing/Meta-Llama-3-8B-Instruct-W4A16-ACTORDER-compressed-tensors-test | Text Generation | 2B | Oct 9, 2024 | 3 | |
| nm-testing/Meta-Llama-3-70B-Instruct-W8A8-Dynamic-Per-Token-test | Text Generation | 71B | Oct 9, 2024 | 5 | |
| nm-testing/Meta-Llama-3-70B-Instruct-W8A8-Dynamic-Per-Token | Text Generation | 71B | Oct 9, 2024 | 5 | |
| RedHatAI/Meta-Llama-3.1-70B-Instruct-FP8-dynamic | Text Generation | 71B | Oct 19, 2024 | 3.27k | 7 |
| RedHatAI/Meta-Llama-3.1-405B-Instruct-FP8-dynamic | Text Generation | 406B | Oct 19, 2024 | 3.4k | 15 |
| RedHatAI/Meta-Llama-3.1-8B-Instruct-quantized.w8a16 | Text Generation | 3B | Oct 23, 2024 | 2.45k | 10 |
| RedHatAI/Meta-Llama-3.1-8B-Instruct-quantized.w8a8 | Text Generation | 8B | 9 days ago | 45.7k | 17 |
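The checkpoints above are quantized (W8A8, W4A16, FP8) for efficient inference. As a minimal sketch of loading one of them, assuming a recent vLLM build with compressed-tensors support and picking the Llama 3.1 8B W8A8 repo from the list (the prompt and sampling settings are purely illustrative):

```python
from vllm import LLM, SamplingParams

# Load one of the quantized checkpoints listed above.
# vLLM should pick up the quantization scheme from the repo's config
# (compressed-tensors / W8A8 in this case); no extra flags are shown here.
llm = LLM(model="RedHatAI/Meta-Llama-3.1-8B-Instruct-quantized.w8a8")

# Illustrative sampling settings; tune for your workload.
params = SamplingParams(temperature=0.7, top_p=0.9, max_tokens=128)

outputs = llm.generate(
    ["What does weight-and-activation (W8A8) quantization change at inference time?"],
    params,
)
print(outputs[0].outputs[0].text)
```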