
ayyoob-cis/vicuna-160m-gptq

Tags: Text Generation · Transformers · llama · 4-bit precision · gptq
Files and versions
1 contributor · History: 2 commits
ayyoob-cis
AutoGPTQ model for vicuna-160m-gptq: 4bits, gr128, desc_act=False
788d390 verified about 1 year ago
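The commit message records the quantization settings: 4-bit weights, group size 128, and desc_act=False. Below is a minimal sketch of how a checkpoint with these settings could be produced with AutoGPTQ; the base model path and the calibration text are hypothetical placeholders, not taken from this repository.

```python
# Minimal sketch: quantizing a base model with the settings named in the
# commit message (4 bits, group size 128, desc_act=False). The base model
# path and the calibration text are placeholders, not from this repo.
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig

base_model = "path/to/vicuna-160m"  # assumed FP16 source checkpoint

quantize_config = BaseQuantizeConfig(
    bits=4,          # "4bits"
    group_size=128,  # "gr128"
    desc_act=False,  # "desc_act=False"
)

tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoGPTQForCausalLM.from_pretrained(base_model, quantize_config)

# GPTQ needs a small calibration set; a single toy example stands in here.
examples = [tokenizer("A chat between a curious user and an assistant.", return_tensors="pt")]
model.quantize(examples)

model.save_quantized("vicuna-160m-gptq", use_safetensors=True)
```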
  • .gitattributes (1.52 kB) · initial commit · about 1 year ago
  • README.md (31 Bytes) · initial commit · about 1 year ago
  • config.json (1.04 kB) · AutoGPTQ model for vicuna-160m-gptq: 4bits, gr128, desc_act=False · about 1 year ago
  • gptq_model-4bit-128g.safetensors (158 MB, LFS) · AutoGPTQ model for vicuna-160m-gptq: 4bits, gr128, desc_act=False · about 1 year ago (loaded in the sketch below)
  • quantize_config.json (266 Bytes) · AutoGPTQ model for vicuna-160m-gptq: 4bits, gr128, desc_act=False · about 1 year ago
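A minimal sketch of loading the quantized checkpoint listed above with AutoGPTQ, assuming the auto-gptq and transformers packages are installed and a CUDA device is available; the prompt and generation settings are only illustrative.

```python
# Minimal sketch: loading this repository's quantized weights with AutoGPTQ.
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM

model_id = "ayyoob-cis/vicuna-160m-gptq"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoGPTQForCausalLM.from_quantized(
    model_id,
    model_basename="gptq_model-4bit-128g",  # matches the .safetensors file above
    use_safetensors=True,
    device="cuda:0",
)

prompt = "USER: What is GPTQ quantization?\nASSISTANT:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

Recent transformers releases (with optimum and auto-gptq installed) can also load GPTQ checkpoints directly through AutoModelForCausalLM.from_pretrained, picking up the quantization settings from config.json and quantize_config.json.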