RedHatAI/Meta-Llama-3.1-8B-Instruct-FP8

Tags: Text Generation · Transformers · Safetensors · llama · fp8 · vllm · conversational · text-generation-inference · compressed-tensors
Community (6 discussions)
Not able to use it with TGI
#5 · opened 10 months ago by Alokgupta96 · 1 comment

Does this model only work on CUDA devices with compute capability >= 9.0 or 8.9, or ROCm MI300+?
#4 · opened 10 months ago by jcfasi · 1 comment (a capability-check sketch follows this listing)

How to run fast inference with FP8
#2 · opened 10 months ago by CCRss · 1 comment (see the vLLM sketch below)
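
Regarding the compute-capability question (#4): as a rule of thumb, native FP8 (W8A8) kernels target NVIDIA compute capability 8.9 (Ada Lovelace) or 9.0 (Hopper), while some engines fall back to weight-only FP8 on older GPUs. The snippet below is a minimal sketch for checking what the local GPU reports; the thresholds in the comments are assumptions drawn from the question, not an official support matrix for this model.

```python
import torch

def check_fp8_support() -> None:
    """Print this machine's GPU compute capability and a rough FP8 verdict."""
    if not torch.cuda.is_available():
        print("No CUDA device detected.")
        return
    major, minor = torch.cuda.get_device_capability()
    name = torch.cuda.get_device_name()
    print(f"{name}: compute capability {major}.{minor}")
    # Assumed thresholds: >= 8.9 (Ada) or 9.0 (Hopper) for native FP8 compute;
    # older GPUs may only get a weight-only fallback, engine permitting.
    if (major, minor) >= (8, 9):
        print("Native FP8 (W8A8) kernels should be available.")
    else:
        print("Expect a weight-only fallback or no FP8 support, "
              "depending on the inference engine and its version.")

check_fp8_support()
```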
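
On the "How to run fast inference with FP8" question (#2): the tags above list vllm and compressed-tensors, so one plausible route is serving the checkpoint with vLLM. A minimal sketch, assuming a recent vLLM build with FP8/compressed-tensors support; the sampling parameters and max_model_len are illustrative, and the model ID is the one from this page.

```python
from vllm import LLM, SamplingParams

# Load the FP8 checkpoint named on this page; vLLM reads the
# quantization config shipped in the model repo.
llm = LLM(
    model="RedHatAI/Meta-Llama-3.1-8B-Instruct-FP8",
    max_model_len=4096,  # illustrative; raise if longer contexts are needed
)

params = SamplingParams(temperature=0.7, max_tokens=128)
outputs = llm.generate(
    ["Summarize FP8 quantization in one paragraph."],
    params,
)
print(outputs[0].outputs[0].text)
```

For serving rather than offline generation, the same model ID can be passed to `vllm serve` to expose an OpenAI-compatible endpoint.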