Llama-3.3-70B-Instruct-abliterated AWQ 4-Bit Quantized Version
This repository provides an AWQ 4-bit quantized version of Llama-3.3-70B-Instruct-abliterated, an abliterated variant of meta-llama/Llama-3.3-70B-Instruct, which was originally developed by Meta AI.
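A minimal loading sketch using the Hugging Face transformers AWQ integration (this assumes the autoawq package is installed and that the checkpoint loads directly through transformers; it is a usage sketch, not an official example from this repository):

```python
# Sketch: load the AWQ 4-bit checkpoint and run a short chat generation.
# Assumes: pip install transformers autoawq accelerate, and enough GPU memory for a 70B AWQ model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "jiangchengchengNLP/llama3.3-70B-instruct-abliterated-awq"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # AWQ checkpoints are typically run in fp16
    device_map="auto",          # spread the 70B weights across available GPUs
)

# Build a prompt with the Llama 3.3 chat template and generate a reply.
messages = [{"role": "user", "content": "Summarize AWQ quantization in one sentence."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```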
Model tree for jiangchengchengNLP/llama3.3-70B-instruct-abliterated-awq
- Base model: meta-llama/Llama-3.1-70B
- Fine-tuned from: meta-llama/Llama-3.3-70B-Instruct