Caution: The training for this model is intense enough to alter "real world facts" and bring them, in part, into the ST/TNG universe.
Qwen3-MOE-2x6B-ST-The-Next-Generation-II-FreakStorm-12B

This repo contains the full-precision source files, in "safetensors" format, used to generate GGUF, GPTQ, EXL2, AWQ, HQQ and other quant formats. The source files can also be used directly.
This model is specifically for TNG / Star Trek, science fiction, and story generation (all genres), but it also handles coding and general tasks.
This version has been joined with "Freakstorm" (a horror fine-tune) in a MOE (mixture of experts) configuration, in this case 2x6B, for 12B parameters. Because the two experts share their common (non-expert) layers, the merged model comes to 10.4B: all the power of 12B in a 10.4B package.
This MOE drastically upscales the TNG components of the model.
This model can also be used for Role play.
Example generations at the bottom of this page.
This is a far stronger fine-tune than version 1 and the 32-bit-precision version of II, taking you deeper into the ST-TNG universe.
This is a Star Trek: The Next Generation fine-tune (special thanks to "progs2002" for this FANTASTIC dataset), covering 11% of the model (close to 700 million parameters), trained for 6 epochs on this model (a 4B model + Brainstorm 20x adapter):
https://huggingface.co/DavidAU/Qwen3-Jan-v1-256k-ctx-6B-Brainstorm20x
Then the modified "Freakstorm Horror" model was used as a base to further enhance the model [see model tree; benchmarks available].
Below you will find information on the original Jan V1 4B, followed by the Brainstorm 20x adapter (by DavidAU), and then a complete help section for running LLM / AI models.
This model has 55 layers, and 667 tensors [moe config].
The Brainstorm adapter improves creativity, code generation, and unique code-solving abilities.
The fine-tuning steers prose generation and general creative abilities toward the "TNG" universe.
The fine-tuning (done using Unsloth for Win 11) also affects the Brainstorm adapter.
The model's thinking / reasoning abilities are not affected; they are fully intact.
For creative uses: Increases depth, detail and general "there" in the prose.
Example for creative at bottom of the page.
This model requires:
- Jinja (embedded) or ChatML template (see the example below)
- Max context of 256k.
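If your front end needs a manual prompt format rather than the embedded template, ChatML wraps each turn in <|im_start|> / <|im_end|> markers; the content below is just an illustration:

<|im_start|>system
You are a helpful assistant.<|im_end|>
<|im_start|>user
Write a short scene set in Ten Forward.<|im_end|>
<|im_start|>assistant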
Settings used for testing (suggested; an example command follows the list):
- Temp .3 to .7 (but .8 to 1.5 for creative)
- Rep pen 1.05 to 1.1
- Top_p .8, min_p .05
- Top_k 20
- Min context of 8k for thinking / output.
- No system prompt.
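As a concrete example, here is one way to apply these settings with llama.cpp's llama-cli (a recent build is assumed, and the GGUF filename is just a placeholder):

llama-cli --model Qwen3-MOE-2x6B-ST-TNG-II-FreakStorm-Q6_K.gguf \
--ctx-size 8192 \
--temp 0.7 \
--top-p 0.8 \
--min-p 0.05 \
--top-k 20 \
--repeat-penalty 1.05 \
--jinja -cnv

The --jinja flag uses the embedded chat template; -cnv starts an interactive chat. For creative work, raise --temp toward .8 to 1.5 as noted above.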
This model will respond well both to detailed instructions and to step-by-step refinement and additions to code.
Likewise for creative use cases.
As this is an instruct model, it will also benefit from a detailed system prompt.
For simpler coding problems, lower quants will work well; but for complex / multi-step problem solving, Q6 or Q8 is suggested.
QUANTS:
GGUF, GGUF Imatrix, and other quant formats are available.
Special thanks to Team Mradermacher, Team Nightmedia and other quanters!
See under "model tree" (upper right) and click on "quantizations".
New quants will automatically appear.
About Jan V1
Jan-v1: Advanced Agentic Language Model
Overview
Jan-v1 is the first release in the Jan Family, designed for agentic reasoning and problem-solving within the Jan App. Based on our Lucy model, Jan-v1 achieves improved performance through model scaling.
Jan-v1 uses the Qwen3-4B-thinking model to provide enhanced reasoning capabilities and tool utilization. This architecture delivers better performance on complex agentic tasks.
Performance
Question Answering (SimpleQA)
For question answering, Jan-v1 shows a significant performance gain from model scaling, achieving 91.1% accuracy on SimpleQA.
This is a notable milestone in factual question answering for a model of this scale, and demonstrates the effectiveness of our scaling and fine-tuning approach.
Chat Benchmarks
These benchmarks evaluate the model's conversational and instructional capabilities.
Quick Start
Integration with Jan App
Jan-v1 is optimized for direct integration with the Jan App. Simply select the model from the Jan App interface for immediate access to its full capabilities.
Local Deployment
Using vLLM:
vllm serve janhq/Jan-v1-4B \
--host 0.0.0.0 \
--port 1234 \
--enable-auto-tool-choice \
--tool-call-parser hermes
Using llama.cpp:
llama-server --model jan-v1.gguf \
--host 0.0.0.0 \
--port 1234 \
--jinja \
--no-context-shift
Recommended Parameters (an example request follows the list):
temperature: 0.6
top_p: 0.95
top_k: 20
min_p: 0.0
max_tokens: 2048
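Both servers above expose an OpenAI-compatible endpoint on port 1234, so a request using the recommended parameters looks roughly like this (the prompt is just an illustration; top_k and min_p are non-standard OpenAI fields that vLLM and llama-server both accept, though support can vary by version):

curl http://localhost:1234/v1/chat/completions \
-H "Content-Type: application/json" \
-d '{
  "model": "janhq/Jan-v1-4B",
  "messages": [{"role": "user", "content": "Summarize this project in three bullet points."}],
  "temperature": 0.6,
  "top_p": 0.95,
  "top_k": 20,
  "min_p": 0.0,
  "max_tokens": 2048
}'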
🤝 Community & Support
- Discussions: HuggingFace Community
- Jan App: Learn more about the Jan App at jan.ai
(*) Note
By default, the chat template includes a system prompt; this is to make sure the model delivers the same performance as the benchmark results. You can also use the vanilla chat template, without the system prompt, in the file chat_template_raw.jinja.
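For example, with vLLM you can point the server at that file (assuming you have downloaded chat_template_raw.jinja from the repo into the working directory), since vllm serve accepts a chat-template file path:

vllm serve janhq/Jan-v1-4B \
--chat-template ./chat_template_raw.jinja \
--host 0.0.0.0 \
--port 1234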
See more here:
https://huggingface.co/janhq/Jan-v1-4B-GGUF
What is Brainstorm?
Brainstorm 20x
The BRAINSTORM process was developed by David_AU.
Some of the core principles behind this process are discussed in this scientific paper: Progressive LLaMA with Block Expansion.
However, I went in a completely different direction from what was outlined in this paper.
What is "Brainstorm" ?
The reasoning center of an LLM is taken apart, reassembled, and expanded.
In this case for this model: 20 times
Then these centers are individually calibrated. These "centers" also interact with each other. This introduces subtle changes into the reasoning process. The calibrations further adjust - dial up or down - these "changes". The number of centers (5x, 10x, etc.) allows more "tuning points" to further customize how the model reasons, so to speak.
The core aim of this process is to increase the model's detail, concept and connection to the "world", general concept connections, prose quality and prose length without affecting instruction following.
This will also enhance any creative use case(s) of any kind, including "brainstorming", creative art form(s), and similar use cases.
Here are some of the enhancements this process brings to the model's performance:
- Prose generation seems more focused on the moment to moment.
- Sometimes there will be "preamble" and/or foreshadowing present.
- Fewer or no "cliches"
- Better overall prose and/or more complex / nuanced prose.
- A greater sense of nuance on all levels.
- Coherence is stronger.
- Description is more detailed, and connected closer to the content.
- Simile and Metaphors are stronger and better connected to the prose, story, and character.
- Sense of "there" / in the moment is enhanced.
- Details are more vivid, and there are more of them.
- Prose generation length can be long to extreme.
- Emotional engagement is stronger.
- The model will take FEWER liberties vs a normal model: It will follow directives more closely but will "guess" less.
- The MORE instructions and/or details you provide the more strongly the model will respond.
- Depending on the model "voice" may be more "human" vs original model's "voice".
Other "lab" observations:
- This process does not, in my opinion, make the model 5x or 10x "smarter" - if only that were true!
- However, a change in "IQ" was not an issue / a priority, and was not tested or calibrated for, so to speak.
- From lab testing, it seems to ponder and consider more carefully, roughly speaking.
- You could say this process sharpens the model's focus on its task(s) at a deeper level.
The process to modify the model occurs at the root level - the source-files level. The model can then be quantized as GGUF, EXL2, AWQ, etc.
For more information / other Qwen/Mistral Coders / additional settings see:
[ https://huggingface.co/DavidAU/Qwen2.5-MOE-2x-4x-6x-8x__7B__Power-CODER__19B-30B-42B-53B-gguf ]
Help, Adjustments, Samplers, Parameters and More
CHANGE THE NUMBER OF ACTIVE EXPERTS:
See this document:
https://huggingface.co/DavidAU/How-To-Set-and-Manage-MOE-Mix-of-Experts-Model-Activation-of-Experts
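With llama.cpp, one way to change the active expert count at load time is the --override-kv flag. A sketch, assuming the GGUF uses the qwen3moe architecture key (verify the exact key name for your quant, e.g. with the gguf-dump script; the filename is a placeholder):

llama-server --model Qwen3-MOE-2x6B-ST-TNG-II-FreakStorm-Q6_K.gguf \
--override-kv qwen3moe.expert_used_count=int:2 \
--jinja

For this 2x6B MOE, valid values are 1 or 2 experts.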
Settings: CHAT / ROLEPLAY and/or SMOOTHER operation of this model:
In "KoboldCpp" or "oobabooga/text-generation-webui" or "Silly Tavern" ;
Set the "Smoothing_factor" to 1.5
: in KoboldCpp -> Settings->Samplers->Advanced-> "Smooth_F"
: in text-generation-webui -> parameters -> lower right.
: In Silly Tavern this is called: "Smoothing"
NOTE: For "text-generation-webui"
-> if using GGUFs you need to use "llama_HF" (which involves downloading some config files from the SOURCE version of this model)
Source versions (and config files) of my models are here:
OTHER OPTIONS:
Increase rep pen to 1.1 to 1.15 (you don't need to do this if you use "smoothing_factor")
If the interface/program you are using to run AI MODELS supports "Quadratic Sampling" ("smoothing") just make the adjustment as noted.
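For reference, here is a sketch of setting this over KoboldCpp's local API; it assumes a recent KoboldCpp build whose generate endpoint accepts a smoothing_factor field (older builds may ignore or reject it):

curl http://localhost:5001/api/v1/generate \
-H "Content-Type: application/json" \
-d '{
  "prompt": "Continue the scene aboard the Enterprise:",
  "max_length": 300,
  "temperature": 0.8,
  "rep_pen": 1.05,
  "smoothing_factor": 1.5
}'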
Highest Quality Settings / Optimal Operation Guide / Parameters and Samplers
This a "Class 1" model:
For all settings used for this model (including specifics for its "class"), including example generation(s) and for advanced settings guide (which many times addresses any model issue(s)), including methods to improve model performance for all use case(s) as well as chat, roleplay and other use case(s) please see:
You can see all parameters used for generation, in addition to advanced parameters and samplers to get the most out of this model here:
Examples, Q4_K_S, Temp .8
These will be low to mid-range quality; expect better output at higher quants / imatrix quants.
Some formatting will be lost on copy/paste; also note the model prefers single spacing.