---
base_model: GreenerPastures/Golden-Curry-12B
datasets:
  - Mielikki/Erebus-87k
  - PocketDoc/Dans-MemoryCore-CoreCurriculum-Small
  - NewEden/Kalo-Opus-Instruct-22k-Refusal-Murdered
  - Epiculous/Synthstruct-Gens-v1.1-Filtered-n-Cleaned
  - NewEden/Gryphe-Sonnet-3.5-35k-Subset
  - Nitral-AI/GU_Instruct-ShareGPT
  - Nitral-AI/Medical_Instruct-ShareGPT
  - AquaV/Resistance-Sharegpt
  - AquaV/US-Army-Survival-Sharegpt
  - Gryphe/Sonnet3.5-SlimOrcaDedupCleaned
  - Epiculous/SynthRP-Gens-v1.1-Filtered-n-Cleaned
  - ResplendentAI/bluemoon
  - hardlyworking/openerotica-freedomrp-sharegpt-system
  - MinervaAI/Aesir-Preview
  - anthracite-core/c2_logs_32k_v1.1
  - Nitral-AI/Creative_Writing-ShareGPT
  - PJMixers/lodrick-the-lafted_OpusStories-Story2Prompt-ShareGPT
  - NewEden/Opus-accepted-hermes-rejected-shuffled
language:
  - en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
---

## About

static quants of https://huggingface.co/GreenerPastures/Golden-Curry-12B

weighted/imatrix quants are available at https://huggingface.co/mradermacher/Golden-Curry-12B-i1-GGUF

## Usage

If you are unsure how to use GGUF files, refer to one of TheBloke's READMEs for more details, including how to concatenate multi-part files.
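As a concrete illustration, here is a minimal Python sketch for rejoining split GGUF parts into a single file. It assumes the parts are plain byte-splits that can simply be concatenated, as TheBloke's READMEs describe; the part filenames below are hypothetical. Note that shards produced by llama.cpp's gguf-split tool are a different format and must be merged with that tool instead.

```python
# Minimal sketch: rejoin a byte-split GGUF into one file.
# The part names below are hypothetical; adjust them to the
# actual files you downloaded from the repo.
import shutil

parts = [
    "Golden-Curry-12B.Q8_0.gguf.part1of2",
    "Golden-Curry-12B.Q8_0.gguf.part2of2",
]

with open("Golden-Curry-12B.Q8_0.gguf", "wb") as out:
    for part in parts:
        with open(part, "rb") as src:
            # Stream-copy so a multi-GB part is never held in memory.
            shutil.copyfileobj(src, out)
```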

## Provided Quants

(sorted by size, not necessarily quality; IQ-quants are often preferable to similar-sized non-IQ quants)

| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| GGUF | Q2_K | 4.9 | |
| GGUF | Q3_K_S | 5.6 | |
| GGUF | Q3_K_M | 6.2 | lower quality |
| GGUF | Q3_K_L | 6.7 | |
| GGUF | IQ4_XS | 6.9 | |
| GGUF | Q4_K_S | 7.2 | fast, recommended |
| GGUF | Q4_K_M | 7.6 | fast, recommended |
| GGUF | Q5_K_S | 8.6 | |
| GGUF | Q5_K_M | 8.8 | |
| GGUF | Q6_K | 10.2 | very good quality |
| GGUF | Q8_0 | 13.1 | fast, best quality |
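For example, one of the quants in the table can be fetched and run with llama-cpp-python. This is a minimal sketch, not part of this card: the repo id and filename are assumptions inferred from the usual naming convention of these quant repos, so check the repo's file list for the exact names.

```python
# Minimal sketch: download one quant from the table above and run it.
# repo_id and filename below are assumptions, not confirmed by this card.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

path = hf_hub_download(
    repo_id="mradermacher/Golden-Curry-12B-GGUF",  # assumed repo id
    filename="Golden-Curry-12B.Q4_K_M.gguf",       # assumed filename
)

llm = Llama(model_path=path, n_ctx=4096)
out = llm("Write one sentence about curry.", max_tokens=64)
print(out["choices"][0]["text"])
```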

Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):

*(graph: ikawrakow's comparison of lower-quality quant types)*

And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9

## FAQ / Model Request

See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized.

## Thanks

I thank my company, nethype GmbH, for letting me use its servers and for providing upgrades to my workstation, which enable this work in my free time.