vicky4s4s committed
Commit 6065e6f · verified · 1 Parent(s): 5ea9fe2

Update README.md

Files changed (1): README.md (+0 −2)
README.md CHANGED
@@ -21,8 +21,6 @@ base_model:
 
   ## Quantized Model Information
 
- > [!IMPORTANT]
- > This repository is an AWQ 4-bit quantized version of [`meta-llama/Llama-3.3-70B-Instruct`](https://huggingface.co/meta-llama/Llama-3.3-70B-Instruct), originally released by Meta AI.
 
   This model was quantized using [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) from FP16 down to INT4 using GEMM kernels, with zero-point quantization and a group size of 128.
 
 
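
The settings named in the README (INT4 weights, zero-point quantization, group size 128, GEMM kernels) map directly onto AutoAWQ's standard `quant_config`. Below is a minimal sketch of what such a quantization run might look like; the output path and calibration data are assumptions for illustration, not details taken from this commit.

```python
# Sketch of an AutoAWQ quantization run matching the README's stated settings.
# Only the quant_config values come from the model card text; the output
# directory and default calibration dataset are assumptions.
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer

model_path = "meta-llama/Llama-3.3-70B-Instruct"  # FP16 source model
quant_path = "Llama-3.3-70B-Instruct-AWQ"         # hypothetical output directory

quant_config = {
    "zero_point": True,   # zero-point quantization
    "q_group_size": 128,  # group size 128
    "w_bit": 4,           # INT4 weights
    "version": "GEMM",    # GEMM kernels
}

# Load the FP16 model and tokenizer
model = AutoAWQForCausalLM.from_pretrained(model_path)
tokenizer = AutoTokenizer.from_pretrained(model_path)

# Quantize using AutoAWQ's default calibration dataset
model.quantize(tokenizer, quant_config=quant_config)

# Save the quantized weights and the tokenizer
model.save_quantized(quant_path)
tokenizer.save_pretrained(quant_path)
```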