---
base_model: nvidia/Llama-3.3-Nemotron-70B-Reward-Principle
base_model_relation: quantized
quantized_by: ArtusDev
license: other
license_name: nvidia-open-model-license
license_link: >-
  https://www.nvidia.com/en-us/agreements/enterprise-software/nvidia-open-model-license/
inference: false
fine-tuning: false
language:
  - en
tags:
  - nvidia
  - llama3.3
  - exl3
datasets:
  - nvidia/HelpSteer3
library_name: transformers
---

# ArtusDev/nvidia_Llama-3.3-Nemotron-70B-Reward-Principle-EXL3

EXL3 quants of nvidia/Llama-3.3-Nemotron-70B-Reward-Principle, produced with exllamav3.

## How to Download and Use Quants

You can download quants by targeting a specific size using the Hugging Face CLI.

<details>
<summary>Click for download commands</summary>

1. Install the Hugging Face CLI:

   ```shell
   pip install -U "huggingface_hub[cli]"
   ```

2. Download a specific quant:

   ```shell
   huggingface-cli download ArtusDev/nvidia_Llama-3.3-Nemotron-70B-Reward-Principle-EXL3 --revision "5.0bpw_H6" --local-dir ./
   ```

</details>
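The same download can also be scripted from Python with `huggingface_hub`'s `snapshot_download`, which fetches every file in a given revision. A minimal sketch, assuming `huggingface_hub` is installed; the `quant_revision` helper is illustrative, mirroring the `<bpw>bpw_<head>` branch naming used by the CLI example:

```python
def quant_revision(bpw: str, head: str = "H6") -> str:
    # Build a revision/branch name such as "5.0bpw_H6" from the
    # bits-per-weight and head-bit labels (illustrative helper).
    return f"{bpw}bpw_{head}"


def download_quant(repo_id: str, revision: str, local_dir: str = "./") -> str:
    # Network call: downloads all files of the chosen revision.
    # Imported lazily so the helpers above work without the package.
    from huggingface_hub import snapshot_download

    return snapshot_download(repo_id=repo_id, revision=revision, local_dir=local_dir)


# Example usage (requires network access):
# download_quant(
#     "ArtusDev/nvidia_Llama-3.3-Nemotron-70B-Reward-Principle-EXL3",
#     quant_revision("5.0"),
# )
```

Each quant size lives on its own branch of the repository, so selecting a size is just a matter of picking the matching revision string.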

EXL3 quants can be run with any inference client that supports the EXL3 format, such as TabbyAPI. Refer to its documentation for setup instructions.

## Quant Requests

See the EXL community hub for request guidelines.

## Acknowledgements

Made possible with cloud compute from lium.io.