Llama-3.3-Nemotron-Super-49B-v1-FP8-Dynamic
FP8-Dynamic quantization of https://huggingface.co/nvidia/Llama-3_3-Nemotron-Super-49B-v1
Created with llmcompressor using the following code:
from transformers import AutoTokenizer, AutoModelForCausalLM
from llmcompressor.transformers import oneshot
from llmcompressor.modifiers.quantization import QuantizationModifier

MODEL_ID = "/models/Llama-3_3-Nemotron-Super-49B-v1"

# Load the original model and tokenizer.
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID, device_map="auto", torch_dtype="auto", trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)

# Configure simple PTQ: dynamic FP8 quantization of all Linear layers,
# keeping lm_head in the original precision.
recipe = QuantizationModifier(
    targets="Linear", scheme="FP8_DYNAMIC", ignore=["lm_head"]
)

# Apply the quantization algorithm.
oneshot(model=model, recipe=recipe, trust_remote_code_model=True)

# Save the quantized model and tokenizer.
SAVE_DIR = MODEL_ID + "-FP8-Dynamic"
model.save_pretrained(SAVE_DIR)
tokenizer.save_pretrained(SAVE_DIR)
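As an optional sanity check, you can run a quick generation with the in-memory quantized model before or after saving. This is a minimal sketch; the prompt and generation settings are illustrative, not part of the original workflow:

# Optional sanity check on the quantized model (illustrative prompt and settings).
prompt = "The capital of France is"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(output[0], skip_special_tokens=True))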
To run it with vLLM, use the latest version (0.8.2 as of this writing) and apply the following PR: https://github.com/vllm-project/vllm/pull/15008
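For offline inference with vLLM's Python API, a minimal sketch could look like the following; the parallelism, context length, and sampling settings are assumptions you should adjust to your hardware, not values from the original card:

from vllm import LLM, SamplingParams

# Load the FP8-Dynamic checkpoint; vLLM reads the quantization config
# from the saved model directory.
llm = LLM(
    model="Ithanil/Llama-3_3-Nemotron-Super-49B-v1-FP8-Dynamic",
    trust_remote_code=True,
    tensor_parallel_size=2,  # assumption: set to match your GPU count
    max_model_len=8192,      # assumption: lower this if you hit memory limits
)

sampling = SamplingParams(temperature=0.6, top_p=0.95, max_tokens=256)
outputs = llm.generate(["Write a short poem about quantization."], sampling)
print(outputs[0].outputs[0].text)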
Make sure to read the original model's README for further guidance.