SpongeEngine/QwQ-32B-abliterated-i1-GGUF

Tags: GGUF · English · SpongeQuant · i1-GGUF · imatrix · conversational
Files and versions
  • 1 contributor · History: 11 commits
Latest commit by dclipca: "Upload folder using huggingface_hub" (eff3598, verified, 2 months ago)
All files were added in that commit ("Upload folder using huggingface_hub", 2 months ago).

  • .gitattributes · 2.29 kB
  • QwQ-32B-abliterated.imatrix.dat · 15 MB · LFS
  • README.md · 2.58 kB
  • qwq-32b-abliterated-i1-IQ1_M.gguf · 7.93 GB · LFS
  • qwq-32b-abliterated-i1-IQ1_S.gguf · 7.27 GB · LFS
  • qwq-32b-abliterated-i1-IQ2_M.gguf · 11.3 GB · LFS
  • qwq-32b-abliterated-i1-IQ2_S.gguf · 10.4 GB · LFS
  • qwq-32b-abliterated-i1-IQ2_XS.gguf · 9.96 GB · LFS
  • qwq-32b-abliterated-i1-IQ2_XXS.gguf · 9.03 GB · LFS
  • qwq-32b-abliterated-i1-IQ3_XS.gguf · 13.7 GB · LFS
  • qwq-32b-abliterated-i1-IQ3_XXS.gguf · 12.8 GB · LFS
  • qwq-32b-abliterated-i1-TQ1_0.gguf · 7.67 GB · LFS
  • qwq-32b-abliterated-i1-TQ2_0.gguf · 9.13 GB · LFS
  • upload_success.txt · 18 Bytes
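Any single quantization from the listing above can be fetched on its own with the huggingface_hub Python client. Below is a minimal sketch; the repo_id and filename come from this page, while the choice of the IQ2_M variant and the local_dir path are only illustrative assumptions.

```python
# Minimal sketch: download one quantized GGUF file from this repository
# with huggingface_hub. The filename is one entry from the listing above;
# swap in any other variant as needed.
from huggingface_hub import hf_hub_download

model_path = hf_hub_download(
    repo_id="SpongeEngine/QwQ-32B-abliterated-i1-GGUF",
    filename="qwq-32b-abliterated-i1-IQ2_M.gguf",  # illustrative choice
    local_dir="./models",  # assumption: any writable directory
)
print(model_path)  # path to the downloaded .gguf file
```

The downloaded .gguf file can then be loaded by a GGUF-compatible runtime such as llama.cpp.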