---
base_model: LeroyDyer/SpydazWeb_AI_LCARS_Humanization_003
datasets:
- gretelai/synthetic_text_to_sql
- HuggingFaceTB/cosmopedia
- teknium/OpenHermes-2.5
- Open-Orca/SlimOrca
- Severian/Internal-Knowledge-Map
- Open-Orca/OpenOrca
- cognitivecomputations/dolphin-coder
- databricks/databricks-dolly-15k
- yahma/alpaca-cleaned
- uonlp/CulturaX
- mwitiderrick/SwahiliPlatypus
- NexusAI-tddi/OpenOrca-tr-1-million-sharegpt
- Vezora/Open-Critic-GPT
- verifiers-for-code/deepseek_plans_test
- meta-math/MetaMathQA
- KbsdJames/Omni-MATH
- swahili
- Rogendo/English-Swahili-Sentence-Pairs
- ise-uiuc/Magicoder-Evol-Instruct-110K
- meta-math/MetaMathQA
- abacusai/ARC_DPO_FewShot
- abacusai/MetaMath_DPO_FewShot
- abacusai/HellaSwag_DPO_FewShot
- HaltiaAI/Her-The-Movie-Samantha-and-Theodore-Dataset
- HuggingFaceFW/fineweb
- occiglot/occiglot-fineweb-v0.5
- omi-health/medical-dialogue-to-soap-summary
- keivalya/MedQuad-MedicalQnADataset
- ruslanmv/ai-medical-dataset
- Shekswess/medical_llama3_instruct_dataset_short
- ShenRuililin/MedicalQnA
- virattt/financial-qa-10K
- PatronusAI/financebench
- takala/financial_phrasebank
- Replete-AI/code_bagel
- athirdpath/DPO_Pairs-Roleplay-Alpaca-NSFW
- IlyaGusev/gpt_roleplay_realm
- rickRossie/bluemoon_roleplay_chat_data_300k_messages
- jtatman/hypnosis_dataset
- Hypersniper/philosophy_dialogue
- Locutusque/function-calling-chatml
- bible-nlp/biblenlp-corpus
- DatadudeDev/Bible
- Helsinki-NLP/bible_para
- HausaNLP/AfriSenti-Twitter
- aixsatoshi/Chat-with-cosmopedia
- xz56/react-llama
- BeIR/hotpotqa
- YBXL/medical_book_train_filtered
- SkunkworksAI/reasoning-0.01
- THUDM/LongWriter-6k
- WhiteRabbitNeo/WRN-Chapter-1
- WhiteRabbitNeo/Code-Functions-Level-Cyber
- WhiteRabbitNeo/Code-Functions-Level-General
language:
- en
- sw
- ig
- so
- es
- ca
- xh
- zu
- ha
- tw
- af
- hi
- bm
- su
library_name: transformers
license: mit
quantized_by: mradermacher
tags:
- mergekit
- merge
- Mistral_Star
- Mistral_Quiet
- Mistral
- Mixtral
- Question-Answer
- Token-Classification
- Sequence-Classification
- SpydazWeb-AI
- chemistry
- biology
- legal
- code
- climate
- medical
- LCARS_AI_StarTrek_Computer
- text-generation-inference
- chain-of-thought
- tree-of-knowledge
- forest-of-thoughts
- visual-spacial-sketchpad
- alpha-mind
- knowledge-graph
- entity-detection
- encyclopedia
- wikipedia
- stack-exchange
- Reddit
- Cyber-series
- MegaMind
- Cybertron
- SpydazWeb
- Spydaz
- LCARS
- star-trek
- mega-transformers
- Mulit-Mega-Merge
- Multi-Lingual
- Afro-Centric
- African-Model
- Ancient-One
---
## About

<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/LeroyDyer/SpydazWeb_AI_LCARS_Humanization_003

<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/SpydazWeb_AI_LCARS_Humanization_003-GGUF

## Usage

If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
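
For a quick start from Python, here is a minimal sketch using `huggingface_hub` and `llama-cpp-python` (neither is part of this repo's tooling; the repo id and filename are taken from the quant table below, and the context size, prompt and sampling settings are only placeholders):

```python
# Minimal sketch, not this repo's official tooling: download one of the
# quant files listed below and run it locally with llama-cpp-python.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama  # pip install llama-cpp-python

model_path = hf_hub_download(
    repo_id="mradermacher/SpydazWeb_AI_LCARS_Humanization_003-i1-GGUF",
    filename="SpydazWeb_AI_LCARS_Humanization_003.i1-Q4_K_M.gguf",  # "fast, recommended" in the table below
)

llm = Llama(model_path=model_path, n_ctx=4096)  # set n_gpu_layers=-1 to offload all layers to the GPU
out = llm("Q: What does an imatrix quant trade off?\nA:", max_tokens=128)
print(out["choices"][0]["text"])
```

The quants listed below are single .gguf files, so no concatenation step should be needed for them.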

## Provided Quants

(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)

| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/SpydazWeb_AI_LCARS_Humanization_003-i1-GGUF/resolve/main/SpydazWeb_AI_LCARS_Humanization_003.i1-IQ1_S.gguf) | i1-IQ1_S | 1.7 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/SpydazWeb_AI_LCARS_Humanization_003-i1-GGUF/resolve/main/SpydazWeb_AI_LCARS_Humanization_003.i1-IQ1_M.gguf) | i1-IQ1_M | 1.9 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/SpydazWeb_AI_LCARS_Humanization_003-i1-GGUF/resolve/main/SpydazWeb_AI_LCARS_Humanization_003.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.1 | |
| [GGUF](https://huggingface.co/mradermacher/SpydazWeb_AI_LCARS_Humanization_003-i1-GGUF/resolve/main/SpydazWeb_AI_LCARS_Humanization_003.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/SpydazWeb_AI_LCARS_Humanization_003-i1-GGUF/resolve/main/SpydazWeb_AI_LCARS_Humanization_003.i1-IQ2_S.gguf) | i1-IQ2_S | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/SpydazWeb_AI_LCARS_Humanization_003-i1-GGUF/resolve/main/SpydazWeb_AI_LCARS_Humanization_003.i1-IQ2_M.gguf) | i1-IQ2_M | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/SpydazWeb_AI_LCARS_Humanization_003-i1-GGUF/resolve/main/SpydazWeb_AI_LCARS_Humanization_003.i1-Q2_K_S.gguf) | i1-Q2_K_S | 2.6 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/SpydazWeb_AI_LCARS_Humanization_003-i1-GGUF/resolve/main/SpydazWeb_AI_LCARS_Humanization_003.i1-Q2_K.gguf) | i1-Q2_K | 2.8 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/SpydazWeb_AI_LCARS_Humanization_003-i1-GGUF/resolve/main/SpydazWeb_AI_LCARS_Humanization_003.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 2.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/SpydazWeb_AI_LCARS_Humanization_003-i1-GGUF/resolve/main/SpydazWeb_AI_LCARS_Humanization_003.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/SpydazWeb_AI_LCARS_Humanization_003-i1-GGUF/resolve/main/SpydazWeb_AI_LCARS_Humanization_003.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.3 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/SpydazWeb_AI_LCARS_Humanization_003-i1-GGUF/resolve/main/SpydazWeb_AI_LCARS_Humanization_003.i1-IQ3_S.gguf) | i1-IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/SpydazWeb_AI_LCARS_Humanization_003-i1-GGUF/resolve/main/SpydazWeb_AI_LCARS_Humanization_003.i1-IQ3_M.gguf) | i1-IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/SpydazWeb_AI_LCARS_Humanization_003-i1-GGUF/resolve/main/SpydazWeb_AI_LCARS_Humanization_003.i1-Q3_K_M.gguf) | i1-Q3_K_M | 3.6 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/SpydazWeb_AI_LCARS_Humanization_003-i1-GGUF/resolve/main/SpydazWeb_AI_LCARS_Humanization_003.i1-Q3_K_L.gguf) | i1-Q3_K_L | 3.9 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/SpydazWeb_AI_LCARS_Humanization_003-i1-GGUF/resolve/main/SpydazWeb_AI_LCARS_Humanization_003.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/SpydazWeb_AI_LCARS_Humanization_003-i1-GGUF/resolve/main/SpydazWeb_AI_LCARS_Humanization_003.i1-Q4_0.gguf) | i1-Q4_0 | 4.2 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/SpydazWeb_AI_LCARS_Humanization_003-i1-GGUF/resolve/main/SpydazWeb_AI_LCARS_Humanization_003.i1-IQ4_NL.gguf) | i1-IQ4_NL | 4.2 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/SpydazWeb_AI_LCARS_Humanization_003-i1-GGUF/resolve/main/SpydazWeb_AI_LCARS_Humanization_003.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.2 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/SpydazWeb_AI_LCARS_Humanization_003-i1-GGUF/resolve/main/SpydazWeb_AI_LCARS_Humanization_003.i1-Q4_K_M.gguf) | i1-Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/SpydazWeb_AI_LCARS_Humanization_003-i1-GGUF/resolve/main/SpydazWeb_AI_LCARS_Humanization_003.i1-Q4_1.gguf) | i1-Q4_1 | 4.7 | |
| [GGUF](https://huggingface.co/mradermacher/SpydazWeb_AI_LCARS_Humanization_003-i1-GGUF/resolve/main/SpydazWeb_AI_LCARS_Humanization_003.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/SpydazWeb_AI_LCARS_Humanization_003-i1-GGUF/resolve/main/SpydazWeb_AI_LCARS_Humanization_003.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/SpydazWeb_AI_LCARS_Humanization_003-i1-GGUF/resolve/main/SpydazWeb_AI_LCARS_Humanization_003.i1-Q6_K.gguf) | i1-Q6_K | 6.0 | practically like static Q6_K |
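
If you would rather choose a file programmatically (for example, the largest quant that fits a given memory budget), a sketch along these lines works against the Hub file metadata; the repo id is the one used in the links above, and the 6 GB budget is only an example:

```python
# Sketch only: list the imatrix quants in this repo with their on-disk
# sizes, then pick the largest one under an example memory budget.
from huggingface_hub import HfApi

repo_id = "mradermacher/SpydazWeb_AI_LCARS_Humanization_003-i1-GGUF"
budget_gb = 6.0  # example budget; adjust to your RAM/VRAM

info = HfApi().model_info(repo_id, files_metadata=True)
ggufs = [
    (s.rfilename, s.size / 1e9)
    for s in info.siblings
    if s.rfilename.endswith(".gguf") and s.size is not None
]

for name, size in sorted(ggufs, key=lambda g: g[1]):
    print(f"{size:5.1f} GB  {name}")

fitting = [g for g in ggufs if g[1] <= budget_gb]
if fitting:
    print("largest under budget:", max(fitting, key=lambda g: g[1])[0])
```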

Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9

## FAQ / Model Request

See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.

## Thanks

I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.

<!-- end -->