
Thinking / Reasoning Models - Regular and MOEs.
QwQ, DeepSeek, EXAONE, DeepHermes, and other "thinking/reasoning" AIs / LLMs in regular, MOE (Mixture of Experts), and hybrid model formats.
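Most of the models below ship as GGUF quants, so they can be run locally with llama.cpp-based tools. A minimal sketch, assuming the llama-cpp-python package and an already-downloaded quant file (the path below is a placeholder, not a file in any specific repo):

```python
# Minimal sketch: run a GGUF "thinking" model locally with llama-cpp-python.
# Assumptions: llama-cpp-python is installed and a .gguf quant has been
# downloaded from one of the repos in this collection; the path is a placeholder.
from llama_cpp import Llama

llm = Llama(
    model_path="./model-q4_k_m.gguf",  # placeholder quant file
    n_ctx=8192,                        # context window; raise for long-context builds
    n_gpu_layers=-1,                   # offload all layers to GPU if available
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Briefly explain why the sky is blue."}],
    temperature=0.6,
    max_tokens=1024,
)
print(out["choices"][0]["message"]["content"])
```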
DavidAU/Qwen3-30B-A6B-16-Extreme
Text Generation • 31B • Updated • 859 • 55
DavidAU/Reka-Flash-3-21B-Reasoning-Uncensored-MAX-NEO-Imatrix-GGUF
Text Generation • 21B • Updated • 1.84k • 51
DavidAU/DeepSeek-R1-Distill-Llama-3.1-16.5B-Brainstorm-gguf
Text Generation • 17B • Updated • 535 • 23
DavidAU/Qwen3-128k-30B-A3B-NEO-MAX-Imatrix-gguf
Text Generation • 31B • Updated • 12.3k • 23
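A note on the naming used throughout this collection: in Qwen3's routed MOE, "A3B" means roughly 3B parameters are active per token (the stock Qwen3-30B-A3B routes each token through 8 of its 128 experts), and the "16-Extreme" / "12-Cooks" builds above and below raise the active expert count to 16 or 12, which the A6B / A4.5B tags approximate. A rough back-of-the-envelope sketch; the per-expert and shared figures are solved from Qwen3-30B-A3B's published ~30.5B total / ~3.3B active numbers and are illustrative, not an official spec:

```python
# Illustrative estimate of active parameters when more experts are routed per token.
# Figures are derived from Qwen3-30B-A3B's published ~30.5B total / ~3.3B active
# (8 of 128 experts); treat them as an approximation, not an official spec.
TOTAL, ACTIVE_8, N_EXPERTS, N_ACTIVE = 30.5e9, 3.3e9, 128, 8

per_expert = (TOTAL - ACTIVE_8) / (N_EXPERTS - N_ACTIVE)   # ~0.23B per expert
shared     = ACTIVE_8 - N_ACTIVE * per_expert              # ~1.5B always-on

for n in (8, 12, 16):   # stock routing, "12-Cooks", "16-Extreme"
    print(f"{n:>2} active experts -> ~{(shared + n * per_expert) / 1e9:.1f}B active")
```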
DavidAU/DeepSeek-MOE-4X8B-R1-Distill-Llama-3.1-Deep-Thinker-Uncensored-24B-GGUF
Text Generation • 25B • Updated • 1.44k • 17
Note MOE - Mixture of Experts version combining four 8B experts. It will have deeper thinking/reasoning and more complex prose than a standard 8B model.
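For the "N X 8B" MOE builds in this collection, the listed size (~25B here for a 4X8B merge) is smaller than a naive 4 × 8B = 32B because in a mergekit-style MOE only the MLP / feed-forward blocks are duplicated per expert, while attention, embeddings and norms stay shared. A rough sketch using Llama 3.1 8B's architecture numbers; this is an approximation for illustration, not the exact merge recipe:

```python
# Approximate parameter count of a mergekit-style 4x8B Llama MOE:
# attention/embeddings are shared, only the MLP blocks are per-expert.
# Architecture numbers below are Llama 3.1 8B's published config values.
layers, hidden, inter, vocab = 32, 4096, 14336, 128256
heads, kv_heads, head_dim = 32, 8, 128

attn_per_layer  = hidden * (heads * head_dim)           # q_proj
attn_per_layer += 2 * hidden * (kv_heads * head_dim)    # k_proj + v_proj
attn_per_layer += (heads * head_dim) * hidden           # o_proj
mlp_per_layer   = 3 * hidden * inter                    # gate, up, down projections

shared = layers * attn_per_layer + 2 * vocab * hidden   # attention + embed + lm_head

def moe_total(n_experts: int) -> float:
    experts = n_experts * layers * mlp_per_layer
    router  = layers * hidden * n_experts               # tiny gating weights
    return (shared + experts + router) / 1e9

print(f"1 expert (dense 8B):  ~{moe_total(1):.1f}B")
print(f"4 experts (4x8B MOE): ~{moe_total(4):.1f}B")
```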
DavidAU/Mistral-Grand-R1-Dolphin-3.0-Deep-Reasoning-Brainstorm-45B-GGUF
Text Generation • 45B • Updated • 226 • 11
DavidAU/DeepSeek-V2-Grand-Horror-SMB-R1-Distill-Llama-3.1-Uncensored-16.5B-GGUF
Text Generation • 17B • Updated • 496 • 12
DavidAU/L3.1-MOE-2X8B-Deepseek-DeepHermes-e32-uncensored-abliterated-13.7B-gguf
Text Generation • 14B • Updated • 749 • 10
DavidAU/Llama-3.1-DeepHermes-R1-Reasoning-8B-DarkIdol-Instruct-1.2-Uncensored-GGUF
Text Generation • 8B • Updated • 1.07k • 16
DavidAU/DeepSeek-BlackRoot-R1-Distill-Llama-3.1-8B-GGUF
Text Generation • 8B • Updated • 171 • 9
DavidAU/DeepSeek-Grand-Horror-SMB-R1-Distill-Llama-3.1-16B-GGUF
Text Generation • 16B • Updated • 234 • 11
DavidAU/DeepThought-MOE-8X3B-R1-Llama-3.2-Reasoning-18B-gguf
Text Generation • 18B • Updated • 140 • 8
Note MOE - Mixture of Experts version combining eight 3B experts. It will have deeper thinking/reasoning and more complex prose than a standard 3B model.
DavidAU/Qwen2.5-QwQ-35B-Eureka-Cubed-gguf
35B • Updated • 49 • 5
DavidAU/Llama-3.1-DeepSeek-8B-DarkIdol-Instruct-1.2-Uncensored-GGUF
Text Generation • 8B • Updated • 429 • 6
DavidAU/Qwen2.5-MOE-6x1.5B-DeepSeek-Reasoning-e32-8.71B-gguf
Text Generation • 9B • Updated • 33 • 5
Note MOE - Mixture of Experts version combining six 1.5B experts. It will have deeper thinking/reasoning and more complex prose than a standard 1.5B model.
DavidAU/Qwen2.5-MOE-2X1.5B-DeepSeek-Uncensored-Censored-4B-gguf
Text Generation • 4B • Updated • 1.33k • 6
Note MOE - Mixture of Experts version combining two 1.5B experts. It will have deeper thinking/reasoning and more complex prose than a standard 1.5B model.
DavidAU/Qwen2.5-MOE-2X7B-DeepSeek-Abliterated-Censored-19B-gguf
Text Generation • 19B • Updated • 209 • 4
Note MOE - Mixture of Experts version combining two 7B experts. It will have deeper thinking/reasoning and more complex prose than a standard 7B model.
DavidAU/DeepSeek-MOE-4X8B-R1-Distill-Llama-3.1-Mad-Scientist-24B-GGUF
Text Generation • 25B • Updated • 33 • 3
Note MOE - Mixture of Experts version combining four 8B experts. It will have deeper thinking/reasoning and more complex prose than a standard 8B model.
DavidAU/DeepHermes-3-Llama-3-8B-Preview-16.5B-Brainstorm-gguf
Text Generation • 17B • Updated • 51 • 3
DavidAU/DeepSeek-R1-Distill-Qwen-25.5B-Brainstorm-gguf
Text Generation • 26B • Updated • 85 • 3
DavidAU/Deep-Reasoning-Llama-3.2-10pack-f16-gguf
Text Generation • 3B • Updated • 76 • 1
Note Links to all 10 models in GGUF (regular and Imatrix) format are also on this page.
DavidAU/L3.1-MOE-2X8B-Deepseek-DeepHermes-e32-13.7B-gguf
Text Generation • 14B • Updated • 13 • 1
DavidAU/Deep-Reasoning-Llama-3.2-Hermes-3-3B
Text Generation • 3B • Updated • 47 • 1
DavidAU/Deep-Reasoning-Llama-3.2-JametMini-3B-MK.III
Text Generation • 3B • Updated • 4 • 1
DavidAU/Deep-Reasoning-Llama-3.2-Korean-Bllossom-3B
Text Generation • 3B • Updated • 10 • 2
DavidAU/Deep-Reasoning-Llama-3.2-Instruct-uncensored-3B
Text Generation • 3B • Updated • 914 • 1
DavidAU/Llama3.2-DeepHermes-3-3B-Preview-Reasoning-MAX-NEO-Imatrix-GGUF
Text Generation • 3B • Updated • 289 • 3
DavidAU/Deep-Reasoning-Llama-3.2-Overthinker-3B
Text Generation • 3B • Updated • 7 • 1
DavidAU/Mistral-Grand-R1-Dolphin-3.0-Deep-Reasoning-Brainstorm-45B
Text Generation • 45B • Updated • 7 • 2
DavidAU/Deep-Reasoning-Llama-3.2-COT-3B
Text Generation • 3B • Updated • 6
DavidAU/Deep-Reasoning-Llama-3.2-Dolphin3.0-3B
Text Generation • 3B • Updated • 6
DavidAU/Deep-Reasoning-Llama-3.2-Enigma-3B
Text Generation • 3B • Updated • 7
DavidAU/Deep-Reasoning-Llama-3.2-ShiningValiant2-3B
Text Generation • 3B • Updated • 7
DavidAU/Deep-Reasoning-Llama-3.2-BlackSheep-3B
Text Generation • 3B • Updated • 7 • 1
DavidAU/Llama3.2-DeepHermes-3-3B-Preview-Reasoning-MAX-HORROR-Imatrix-GGUF
Text Generation • 3B • Updated • 683 • 1
DavidAU/EXAONE-Deep-2.4B-Reasoning-MAX-NEO-Imatrix-GGUF
Text Generation • 3B • Updated • 88 • 3
DavidAU/L3.1-Dark-Reasoning-Dark-Planet-Hermes-R1-Uncensored-8B-GGUF
Text Generation • 8B • Updated • 155 • 1
DavidAU/L3.1-Dark-Reasoning-Dark-Planet-Hermes-R1-Uncensored-Horror-Imatrix-MAX-8B-GGUF
Text Generation • 8B • Updated • 310 • 3
DavidAU/L3.1-Evil-Reasoning-Dark-Planet-Hermes-R1-Uncensored-8B-GGUF
Text Generation • 8B • Updated • 151
DavidAU/L3.1-MOE-6X8B-Dark-Reasoning-Dantes-Peak-Hermes-R1-Uncensored-36B
Text Generation • 36B • Updated • 1.01k
Note MOE - Mixture of Experts version combining six 8B experts. It will have deeper thinking/reasoning and more complex prose than a standard 8B model. Links to GGUF / Imatrix GGUFs are also on this page.
DavidAU/L3.1-MOE-4X8B-Dark-Reasoning-Super-Nova-RP-Hermes-R1-Uncensored-25B-GGUF
Text Generation • 25B • Updated • 387
Note MOE - Mixture of Experts version combining four 8B experts. It will have deeper thinking/reasoning and more complex prose than a standard 8B model.
mradermacher/L3.1-MOE-4X8B-Dark-Reasoning-Dark-Planet-Hermes-R1-Uncensored-e32-25B-i1-GGUF
25B • Updated • 378 • 1
Note MOE - Mixture of Experts version combining four 8B experts. It will have deeper thinking/reasoning and more complex prose than a standard 8B model. Imatrix GGUF Quant version of my model by Team "mradermacher".
DavidAU/L3.1-MOE-4X8B-Dark-Reasoning-Dark-Planet-Hermes-R1-Uncensored-e32-25B-GGUF
Text Generation • 25B • Updated • 67
Note MOE - Mixture of Experts version combining four 8B experts. It will have deeper thinking/reasoning and more complex prose than a standard 8B model.
mradermacher/L3.1-MOE-4X8B-Dark-Reasoning-Super-Nova-RP-Hermes-R1-Uncensored-25B-i1-GGUF
25B • Updated • 318 • 1
Note MOE - Mixture of Experts version combining four 8B experts. It will have deeper thinking/reasoning and more complex prose than a standard 8B model. Imatrix GGUF Quant version of my model by Team "mradermacher".
mradermacher/L3.1-Evil-Reasoning-Dark-Planet-Hermes-R1-Uncensored-8B-i1-GGUF
8B • Updated • 550 • 2
Note Imatrix GGUF Quant version of my model by Team "mradermacher".
mradermacher/L3.1-Dark-Reasoning-Dark-Planet-Hermes-R1-Uncensored-8B-i1-GGUF
8B • Updated • 277 • 1
Note Imatrix GGUF Quant version of my model by Team "mradermacher".
DavidAU/L3.1-Dark-Reasoning-Halu-Blackroot-Hermes-R1-Uncensored-8B
Text Generation • 8B • Updated • 6 • 1
DavidAU/L3.1-Dark-Reasoning-Super-Nova-RP-Hermes-R1-Uncensored-8B
Text Generation • 8B • Updated • 13 • 3
Note Links to GGUF / Imatrix GGUFs are also on this page.
DavidAU/L3.1-Dark-Reasoning-Jamet-8B-MK.I-Hermes-R1-Uncensored-8B
Text Generation • 8B • Updated • 6 • 1
Note Links to GGUF / Imatrix GGUFs are also on this page.
DavidAU/L3.1-Dark-Reasoning-Anjir-Hermes-R1-Uncensored-8B
Text Generation • 8B • Updated • 10 • 2
Note Links to GGUF / Imatrix GGUFs are also on this page.
DavidAU/L3.1-Dark-Reasoning-Celeste-V1.2-Hermes-R1-Uncensored-8B
Text Generation • 8B • Updated • 4 • 1
Note Links to GGUF / Imatrix GGUFs are also on this page.
DavidAU/How-To-Use-Reasoning-Thinking-Models-and-Create-Them
Text Generation • Updated • 9
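The How-To repo above covers prompting and output handling for thinking models; one recurring task is separating the reasoning block from the final answer. A minimal sketch, assuming the model emits its reasoning between <think> and </think> tags (the convention used by DeepSeek-R1 / QwQ-style models; some models use different tags):

```python
import re

def split_reasoning(text: str) -> tuple[str, str]:
    """Split model output into (reasoning, answer), assuming <think>...</think> tags."""
    match = re.search(r"<think>(.*?)</think>", text, flags=re.DOTALL)
    if not match:
        return "", text.strip()          # no reasoning block emitted
    reasoning = match.group(1).strip()
    answer = text[match.end():].strip()  # everything after the closing tag
    return reasoning, answer

raw = "<think>2+2 is basic arithmetic; the sum is 4.</think>The answer is 4."
thoughts, answer = split_reasoning(raw)
print("REASONING:", thoughts)
print("ANSWER:", answer)
```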
DavidAU/L3.1-MOE-6X8B-Dark-Reasoning-Dantes-Peak-HORROR-R1-Uncensored-36B-GGUF
Text Generation • 36B • Updated • 786 • 3
DavidAU/Llama3.1-MOE-4X8B-Gated-IQ-Multi-Tier-Deep-Reasoning-32B-GGUF
Text Generation • 25B • Updated • 365 • 7
DavidAU/Llama-3.1-1-million-ctx-DeepHermes-Deep-Reasoning-8B-GGUF
Text Generation • 8B • Updated • 306 • 1
DavidAU/Llama3.1-MOE-4X8B-Gated-IQ-Multi-Tier-COGITO-Deep-Reasoning-32B-GGUF
Text Generation • 25B • Updated • 370 • 3
DavidAU/Qwen3-0.6B-NEO-Imatrix-Max-GGUF
Text Generation • 0.8B • Updated • 143
DavidAU/Qwen3-0.6B-HORROR-Imatrix-Max-GGUF
Text Generation • 0.8B • Updated • 84
DavidAU/Qwen3-1.7B-HORROR-Imatrix-Max-GGUF
Text Generation • 2B • Updated • 99 • 1
DavidAU/Qwen3-1.7B-NEO-Imatrix-Max-GGUF
Text Generation • 2B • Updated • 143 • 1
DavidAU/Qwen3-4B-HORROR-Imatrix-Max-GGUF
Text Generation • 4B • Updated • 76
DavidAU/Qwen3-4B-NEO-Imatrix-Max-GGUF
Text Generation • 4B • Updated • 133 • 6
DavidAU/Qwen3-8B-HORROR-Imatrix-Max-GGUF
Text Generation • 8B • Updated • 77
DavidAU/Qwen3-8B-NEO-Imatrix-Max-GGUF
Text Generation • 8B • Updated • 35 • 1
DavidAU/Qwen3-4B-Q8_0-64k-128k-256k-context-GGUF
Text Generation • 4B • Updated • 166 • 3
DavidAU/Qwen3-14B-HORROR-Imatrix-Max-GGUF
Text Generation • 15B • Updated • 39 • 3
DavidAU/Qwen3-14B-NEO-Imatrix-Max-GGUF
Text Generation • 15B • Updated • 43
DavidAU/Qwen3-8B-Q8_0-64k-128k-256k-context-GGUF
Text Generation • 8B • Updated • 51
DavidAU/Qwen3-4B-Mishima-Imatrix-GGUF
Text Generation • 4B • Updated • 5 • 3
DavidAU/Qwen3-32B-128k-HORROR-Imatrix-Max-GGUF
Text Generation • 33B • Updated • 51 • 2
DavidAU/Qwen3-32B-128k-NEO-Imatrix-Max-GGUF
Text Generation • 33B • Updated • 41 • 2
DavidAU/Qwen3-30B-A4.5B-12-Cooks
Text Generation • 31B • Updated • 5 • 5
DavidAU/Qwen3-30B-A6B-16-Extreme-128k-context
Text Generation • 31B • Updated • 12 • 8
DavidAU/Qwen3-8B-256k-Context-8X-Grand
Text Generation • 8B • Updated • 20
DavidAU/Qwen3-8B-192k-Context-6X-Larger
Text Generation • 8B • Updated • 10
DavidAU/Qwen3-8B-128k-Context-4X-Large
Text Generation • 8B • Updated • 10
DavidAU/Qwen3-8B-96k-Context-3X-Medium-Plus
Text Generation • 8B • Updated • 7
DavidAU/Qwen3-8B-64k-Context-2X-Medium
Text Generation • 8B • Updated • 8 • 1
DavidAU/Qwen3-8B-320k-Context-10X-Massive
Text Generation • 8B • Updated • 42
DavidAU/Qwen3-8B-64k-Context-2X-Josiefied-Uncensored
Text Generation • 8B • Updated • 1.41k • 3
DavidAU/Qwen3-8B-64k-Josiefied-Uncensored-NEO-Max-GGUF
Text Generation • 8B • Updated • 571 • 6
DavidAU/Qwen3-8B-64k-Josiefied-Uncensored-HORROR-Max-GGUF
Text Generation • 8B • Updated • 73 • 6
DavidAU/Qwen3-8B-192k-Josiefied-Uncensored-NEO-Max-GGUF
Text Generation • 8B • Updated • 1.96k • 22
DavidAU/Qwen3-8B-192k-Josiefied-Uncensored-HORROR-Max-GGUF
Text Generation • 8B • Updated • 34 • 2
DavidAU/Qwen3-30B-A1.5B-64K-High-Speed-NEO-Imatrix-MAX-gguf
Text Generation • 31B • Updated • 753 • 13
DavidAU/Llama-3.2-8X3B-GATED-MOE-Reasoning-Dark-Champion-Instruct-uncensored-abliterated-18.4B-GGUF
Text Generation • 18B • Updated • 2.75k • 8
DavidAU/Llama-3.2-8X3B-GATED-MOE-NEO-Reasoning-Dark-Champion-uncensored-18.4B-IMAT-GGUF
Text Generation • 18B • Updated • 1.65k • 6
DavidAU/Llama-3.2-8X3B-GATED-MOE-Horror-Reasoning-Dark-Champion-uncensored-18.4B-IMAT-GGUF
Text Generation • 18B • Updated • 1.15k • 2
DavidAU/Maximizing-Model-Performance-All-Quants-Types-And-Full-Precision-by-Samplers_Parameters
Updated • 128
Note Document detailing all parameters, settings, samplers and advanced samplers for getting the most not only out of my models, but out of all models (and quants) online, regardless of the repo. Includes a quick start, detailed notes, coverage of AI/LLM apps, and other critical information and references. A must read if you are using any AI/LLM right now.
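As a concrete illustration of the kind of parameter/sampler tuning that document covers, here is a minimal sketch using llama-cpp-python; the specific values are generic starting points for illustration, not the document's recommended settings:

```python
# Illustrative sampler settings applied via llama-cpp-python.
# The values are generic starting points, not DavidAU's recommended settings;
# see the document above for per-model / per-quant guidance.
from llama_cpp import Llama

llm = Llama(model_path="./model-q4_k_m.gguf", n_ctx=8192)  # placeholder path

out = llm.create_completion(
    prompt="Write the opening line of a horror story.",
    max_tokens=256,
    temperature=0.8,     # higher = more creative, less deterministic
    top_k=40,            # sample only from the 40 most likely tokens
    top_p=0.95,          # nucleus sampling cutoff
    min_p=0.05,          # drop tokens below 5% of the top token's probability
    repeat_penalty=1.1,  # discourage verbatim repetition
)
print(out["choices"][0]["text"])
```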
DavidAU/AI_Autocorrect__Auto-Creative-Enhancement__Auto-Low-Quant-Optimization__gguf-exl2-hqq-SOFTWARE
Text Generation • Updated • 57
Note SOFTWARE patch (by me) for SillyTavern (a front end that connects to multiple AI apps and APIs, such as KoboldCpp, LM Studio, and Text Generation Web UI) to control and improve output generation of ANY AI model. Also designed to control/wrangle some of my more "creative" models and make them perform well with little to no parameter/sampler adjustment.
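For reference, front ends like SillyTavern talk to these back ends over plain HTTP APIs. A minimal sketch of the same kind of call against a locally running KoboldCpp instance, assuming its KoboldAI-compatible endpoint on the default port; host, port and sampler values are assumptions to adjust for your setup:

```python
# Minimal sketch: send a generation request to a local KoboldCpp instance
# over its KoboldAI-compatible HTTP API (the same back end SillyTavern targets).
# Host/port and sampler values are assumptions; adjust to your setup.
import requests

payload = {
    "prompt": "Continue the story: The lighthouse had been dark for years, until",
    "max_length": 200,     # tokens to generate
    "temperature": 0.8,
    "top_p": 0.95,
    "rep_pen": 1.1,        # repetition penalty
}
resp = requests.post("http://localhost:5001/api/v1/generate", json=payload, timeout=120)
resp.raise_for_status()
print(resp.json()["results"][0]["text"])
```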
DavidAU/Qwen3-The-Josiefied-Omega-Directive-22B-uncensored-abliterated-GGUF
Text Generation • 22B • Updated • 611 • 10
DavidAU/Qwen3-The-Xiaolong-Omega-Directive-22B-uncensored-abliterated-GGUF
Text Generation • 22B • Updated • 366 • 3
DavidAU/Qwen3-The-Xiaolong-Josiefied-Omega-Directive-22B-uncensored-abliterated-GGUF
Text Generation • 22B • Updated • 941 • 10
DavidAU/Magistral-Small-2506-Reasoning-24B-NEO-MAX-Imatrix-GGUF
Text Generation • 24B • Updated • 808 • 3
DavidAU/Qwen3-33B-A3B-Stranger-Thoughts-GGUF
Text Generation • 33B • Updated • 1.31k • 7
DavidAU/Qwen3-18B-A3B-Stranger-Thoughts-GGUF
Text Generation • 17B • Updated • 850 • 1
DavidAU/Qwen3-18B-A3B-Stranger-Thoughts-Abliterated-Uncensored-GGUF
Text Generation • 17B • Updated • 3.47k • 10
DavidAU/Qwen3-33B-A3B-Stranger-Thoughts-Abliterated-Uncensored
Text Generation • 33B • Updated • 5 • 1
DavidAU/Mistral-Small-3.2-46B-The-Brilliant-Raconteur-II-Instruct-2506
Text Generation • 46B • Updated • 15 • 6
DavidAU/Qwen3-33B-A3B-Stranger-Thoughts-128k
Text Generation • 33B • Updated • 9
DavidAU/Mistral-Small-3.2-46B-The-Brilliant-Raconteur-II-Instruct-2506-GGUF
Text Generation • 45B • Updated • 1.36k • 2
DavidAU/Mistral-Small-3.2-46B-The-Brilliant-Raconteur-Instruct-2506-GGUF
Text Generation • 45B • Updated • 257
DavidAU/Qwen2.5-OpenCodeReasoning-Nemotron-1.1-7B-NEO-imatix-gguf
Text Generation • 8B • Updated • 1.11k
DavidAU/Mistral-2x24B-MOE-Power-CODER-Magistral-Devstral-Reasoning-Ultimate-NEO-MAX-44B-gguf
Text Generation • 44B • Updated • 5.73k
DavidAU/Qwen3-Zero-Coder-Reasoning-0.8B-NEO-EX-GGUF
Text Generation • 0.8B • Updated • 14.5k • 11
DavidAU/Qwen3-Instruct-6B-Brainstorm20x
Text Generation • 6B • Updated • 16
DavidAU/Qwen3-Polaris-Preview-128k-6B-Brainstorm20x
Text Generation • 6B • Updated • 19 • 1
DavidAU/Qwen3-Blitzar-Coder-F1-6B-Brainstorm20x
Text Generation • 6B • Updated • 22 • 2
DavidAU/Qwen3-Instruct-F16-6B-Brainstorm20x
Text Generation • 6B • Updated • 15
DavidAU/Qwen3-Instruct-6B-Brainstorm20x-128k-ctx
Text Generation • 6B • Updated • 39 • 1
DavidAU/Qwen3-Code-Reasoning-Instruct-6B-Brainstorm20x
Text Generation • 6B • Updated • 80 • 1
DavidAU/Qwen3-Instruct-F16-6B-Brainstorm20x-128k-ctx
Text Generation • 6B • Updated • 22
DavidAU/Qwen3-Esper3-Reasoning-CODER-Instruct-6B-Brainstorm20x
Text Generation • 6B • Updated • 16
DavidAU/Qwen3-Esper3-Reasoning-Instruct-6B-Brainstorm20x-Enhanced-E32
Text Generation • 6B • Updated • 17 • 1
DavidAU/Qwen3-Esper3-Reasoning-Instruct-6B-Brainstorm20x-Enhanced-E32-128k-ctx
Text Generation • 6B • Updated • 15
DavidAU/Qwen3-Esper3-Reasoning-Instruct-6B-Brainstorm20x-Enhanced-E32-192k-ctx
Text Generation • 6B • Updated • 15
DavidAU/Qwen3-Esper3-Reasoning-CODER-Instruct-12B-Brainstorm20x
Text Generation • 12B • Updated • 40
DavidAU/Qwen3-Esper3-Reasoning-CODER-Instruct-12B-Brainstorm20x-128k-ctx
Text Generation • 12B • Updated • 120
DavidAU/Qwen3-Zero-Coder-Reasoning-V2-0.8B
Text Generation • 0.8B • Updated • 129
DavidAU/Qwen3-Zero-Coder-Reasoning-V2-0.8B-NEO-EX-GGUF
Text Generation • 0.8B • Updated • 3.62k • 1
DavidAU/Qwen3-Esper3-Reasoning-CODER-Instruct-21B-Brainstorm20x
Text Generation • 21B • Updated • 18 • 2
DavidAU/Qwen3-Shining-Lucy-CODER-2.4B-e32
Text Generation • 2B • Updated • 14
DavidAU/Qwen3-Shining-Lucy-CODER-2.4B
Text Generation • 2B • Updated • 20
DavidAU/Qwen3-Shining-Lucy-CODER-2.4B-mix2
Text Generation • 2B • Updated • 13
DavidAU/Qwen3-Shining-Lucy-CODER-2.4B-e32-mix2
Text Generation • 2B • Updated • 13
DavidAU/Qwen3-Shining-Lucy-CODER-3.4B-Brainstorm20x-e32
Text Generation • 3B • Updated • 9
DavidAU/Qwen3-Shining-Valiant-Instruct-CODER-Reasoning-2.7B
Text Generation • 3B • Updated • 19
DavidAU/Qwen3-Shining-Valiant-Instruct-Fast-CODER-Reasoning-2.4B
Text Generation • 2B • Updated • 34 • 1
DavidAU/Mistral-Magistral-Devstral-Instruct-FUSED-CODER-Reasoning-36B
Text Generation • 36B • Updated • 11
DavidAU/Qwen3-Esper3-Reasoning-CODER-Instruct-21B-Brainstorm20x-128k-ctx
Text Generation • 21B • Updated • 13
DavidAU/Qwen3-53B-A3B-2507-THINKING-TOTAL-RECALL-v2-MASTER-CODER
Text Generation • 53B • Updated • 33 • 3
DavidAU/Openai_gpt-oss-20b-CODER-NEO-CODE-DI-MATRIX-GGUF
Text Generation • 21B • Updated • 3.45k • 5
DavidAU/Openai_gpt-oss-20b-NEO-GGUF
Text Generation • 21B • Updated • 7.43k • 9
DavidAU/Openai_gpt-oss-120b-NEO-Imatrix-GGUF
Text Generation • 117B • Updated • 5.69k
DavidAU/OpenAi-GPT-oss-20b-abliterated-uncensored-NEO-Imatrix-gguf
Text Generation • 21B • Updated • 40.1k • 28
DavidAU/OpenAi-GPT-oss-20b-MODERATE-uncensored-NEO-Imatrix-gguf
Text Generation • 21B • Updated • 2.21k • 4
DavidAU/OpenAi-GPT-oss-20b-LIGHT-uncensored-NEO-Imatrix-gguf
Text Generation • 21B • Updated • 3k