huihui-ai/QwQ-32B-Coder-Fusion-9010

Overview

QwQ-32B-Coder-Fusion-9010 is a mixed model that combines the strengths of two powerful Qwen-based models: huihui-ai/QwQ-32B-Preview-abliterated and huihui-ai/Qwen2.5-Coder-32B-Instruct-abliterated.
The weights are blended in a 9:1 ratio, with 90% coming from the abliterated QwQ-32B-Preview model and 10% from the abliterated Qwen2.5-Coder-32B-Instruct model. Although it is a simple mix, the model remains usable and produces no gibberish. This is an experiment: the 9:1, 8:2, and 7:3 ratios were each tested to see how much the mixing proportion affects the model.
These three ratios are the effective ones; more balanced ratios (e.g. 6:4 or 5:5) produce mixed or unclear output.

Please refer to the source code used for the mix.
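As an illustration only, the following is a minimal sketch of what a 9:1 linear weight blend could look like with Hugging Face Transformers. It is an assumption about the approach, not the author's actual merge script (see the source code referenced above), and loading both 32B models this way requires substantial memory.

```python
# Hypothetical sketch of a 9:1 linear blend of two same-architecture models.
import torch
from transformers import AutoModelForCausalLM

RATIO = 0.9  # 90% QwQ-32B-Preview-abliterated, 10% Qwen2.5-Coder-32B-Instruct-abliterated

base = AutoModelForCausalLM.from_pretrained(
    "huihui-ai/QwQ-32B-Preview-abliterated", torch_dtype=torch.bfloat16)
coder = AutoModelForCausalLM.from_pretrained(
    "huihui-ai/Qwen2.5-Coder-32B-Instruct-abliterated", torch_dtype=torch.bfloat16)

coder_state = coder.state_dict()
with torch.no_grad():
    for name, param in base.named_parameters():
        # Element-wise linear interpolation of matching tensors.
        param.copy_(RATIO * param + (1.0 - RATIO) * coder_state[name])

base.save_pretrained("QwQ-32B-Coder-Fusion-9010")
```

Changing RATIO to 0.8 or 0.7 would give the 8:2 and 7:3 variants mentioned above.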

Model Details

ollama

You can use huihui_ai/qwq-fusion directly:

ollama run huihui_ai/qwq-fusion

The other proportions are also available from huihui_ai/qwq-fusion.

Model size: 32.8B parameters (Safetensors, BF16)

Base model: Qwen/Qwen2.5-32B
