aws-neuron/optimum-neuron-cache
AWS Inferentia and Trainium
License: apache-2.0
optimum-neuron-cache / neuronxcc-2.15.143.0+e39249ad / 0_REGISTRY / 0.0.27 / inference / llama / meta-llama (at 6b536dc)
17.4 kB
4 contributors
History: 18 commits
dacorvo (HF Staff) · Synchronizing local compiler cache. · 81238a0 · verified · 10 months ago
Llama-2-13b-hf            Synchronizing local compiler cache.    10 months ago
Llama-2-70b-chat-hf       Synchronizing local compiler cache.    10 months ago
Llama-2-7b-hf             Synchronizing local compiler cache.    10 months ago
Llama-3.1-70B-Instruct    Synchronizing local compiler cache.    10 months ago
Llama-3.2-1B              Synchronizing local compiler cache.    10 months ago
Llama-3.2-3B              Synchronizing local compiler cache.    10 months ago
Meta-Llama-3-70B          Synchronizing local compiler cache.    10 months ago
Meta-Llama-3-8B           Synchronizing local compiler cache.    10 months ago
Meta-Llama-3.1-8B         Synchronizing local compiler cache.    10 months ago