Qwen3-8B-64k-Context-2X-Josiefied-Uncensored
This repo contains the full-precision source model in "safetensors" format, which can be used to generate GGUF, GPTQ, EXL2, AWQ, HQQ and other formats. The source model can also be used directly.
This repo is for Goekdeniz-Guelmez's excellent "Josiefied-Qwen3-8B-abliterated-v1", modified from 32k (32768) context to 64k (65536) context using YaRN, as per the technical notes in the Qwen repo.
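For reference, the Qwen technical notes describe enabling YaRN by adding a `rope_scaling` block to the model config. The sketch below is a hedged illustration only (not this repo's exact files; the factor of 2.0 is simply 65536 / 32768) of how that kind of 2x extension is expressed with transformers. In this repo the change is already applied, so loading the model as-is gives the 64k window.

```python
# Sketch: how a 32k -> 64k YaRN extension is expressed with transformers.
# This repo's config.json already carries the change; the values here are
# illustrative (factor 2.0 = 65536 / 32768) and follow the Qwen3 model card.
from transformers import AutoConfig, AutoModelForCausalLM

model_id = "DavidAU/Qwen3-8B-64k-Context-2X-Josiefied-Uncensored"  # assumed id of this repo

config = AutoConfig.from_pretrained(model_id)
config.rope_scaling = {
    "rope_type": "yarn",
    "factor": 2.0,                              # 2x the original context
    "original_max_position_embeddings": 32768,  # Qwen3-8B's native context
}
config.max_position_embeddings = 65536

model = AutoModelForCausalLM.from_pretrained(
    model_id,
    config=config,
    torch_dtype="auto",
    device_map="auto",
)
```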
NEO Imatrix dataset GGUF quants, with the output tensor maxed at 16-bit, are here:
[ https://huggingface.co/DavidAU/Qwen3-8B-64k-Josiefied-Uncensored-NEO-Max-GGUF ]
Original model repo for this fine-tune:
[ https://huggingface.co/Goekdeniz-Guelmez/Josiefied-Qwen3-8B-abliterated-v1 ]
Max context on this version is: 64k (65536).
Suggested minimum context limit: 8k to 16k for "thinking" / "output".
Use the Jinja template or the ChatML template.
Please refer to the Qwen model card for details, benchmarks, how to use, settings, turning reasoning on/off, system roles, etc.:
[ https://huggingface.co/Qwen/Qwen3-8B ]
OPTIONAL SYSTEM ROLE:
You may or may not need this, as Qwen3 models usually generate their own reasoning/thinking blocks.
You are a deep thinking AI, you may use extremely long chains of thought to deeply consider the problem and deliberate with yourself via systematic reasoning processes to help come to a correct solution prior to answering. You should enclose your thoughts and internal monologue inside <think> </think> tags, and then provide your solution or response to the problem.
See the document "Maximizing-Model-Performance-All..." below for how to set the system role in various LLM/AI apps.
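As a hedged sketch (assuming a standard transformers workflow; the user prompt is a placeholder and the system text is trimmed), the optional system role above plugs into the normal chat-template path, and the `enable_thinking` switch documented on the Qwen model card toggles the reasoning blocks:

```python
# Sketch: applying the chat (ChatML/Jinja) template with the optional system role.
# Assumes a recent transformers release; the model id is the assumed id of this repo.
from transformers import AutoTokenizer

model_id = "DavidAU/Qwen3-8B-64k-Context-2X-Josiefied-Uncensored"
tokenizer = AutoTokenizer.from_pretrained(model_id)

system_prompt = (
    "You are a deep thinking AI, you may use extremely long chains of thought ..."
)  # the optional system role shown above (trimmed here)

messages = [
    {"role": "system", "content": system_prompt},  # optional - often not needed
    {"role": "user", "content": "Explain YaRN context extension in two sentences."},
]

# enable_thinking toggles Qwen3's reasoning blocks (see the Qwen model card)
prompt = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
    enable_thinking=True,
)
print(prompt)  # ChatML-formatted text ready for generation
```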
IMPORTANT: Highest Quality Settings / Optimal Operation Guide / Parameters and Samplers
If you are going to use this model (source, GGUF or a different quant), please review this document for critical parameter, sampler and advanced sampler settings (for multiple AI/LLM apps).
This is a "Class 1" model (settings will enhance operation):
For all settings used for this model (including specifics for its "class"), example generation(s), and the advanced settings guide (which often addresses model issues and covers methods to improve performance for all use cases, including chat, roleplay, and use cases beyond the model's design), please see:
REASON:
Regardless of "model class", this document details methods to enhance operation.
If the model is a Class 3/4 model, the default settings (parameters, samplers, advanced samplers) must be set correctly for the intended use case(s). Some AI/LLM apps DO NOT have consistent default settings, which results in sub-par model operation. Likewise, Class 3/4 models (which operate somewhat to very differently than standard models) require additional sampler and advanced sampler settings to "smooth out" operation and/or allow full operation for use cases the model was not designed for.
BONUS - Use these settings for ANY model, ANY repo, ANY quant (including source/full precision):
This document also details parameters, samplers and advanced samplers that can be used FOR ANY MODEL, FROM ANY REPO - all quants, and of course source/full-precision operation too - to enhance the operation of any model.
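As a rough starting point only (the linked settings document is the authority; the values below are the Qwen3 card's suggested thinking-mode samplers, assumed here as sensible defaults), a generation call might look like:

```python
# Sketch: a generation call with Qwen3's suggested thinking-mode samplers
# (temperature 0.6, top_p 0.95, top_k 20, min_p 0) as a starting point.
# The settings document referenced above is the authority for class-specific tuning.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "DavidAU/Qwen3-8B-64k-Context-2X-Josiefied-Uncensored"  # assumed id of this repo
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)

messages = [{"role": "user", "content": "Summarize the benefits of a 64k context window."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(
    inputs,
    max_new_tokens=8192,   # leave generous room for "thinking" + "output"
    do_sample=True,
    temperature=0.6,
    top_p=0.95,
    top_k=20,
    min_p=0.0,             # requires a transformers version with min_p support
)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```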
NOTE:
I strongly suggest you also visit the DavidAU GGUF repo (below) for more details on using this model, especially if it is "Class 3" or "Class 4", to get maximum performance from the model.
For full information about this model, including:
- Details about this model and its use case(s).
- Context limits
- Special usage notes / settings.
- Any model(s) used to create this model.
- Template(s) used to access/use this model.
- Example generation(s)
- GGUF quants of this model
Please go to:
https://huggingface.co/DavidAU/Qwen3-8B-64k-Josiefied-Uncensored-NEO-Max-GGUF
[ Also see LEFT MENU under "Quantizations" ]
[[ model card updates to follow || GGUF repo(s) pending ... ]]