
This is a merge of ByteDance Seed-OSS-36B Base and Instruct, using the Karcher mean method in mergekit, with the goal of getting ByteDance's Instruct model to 'feel' and write more like a raw continuation model.

The Karcher mean was tried because it and SLERP are seemingly the only viable ways to merge an instruct model with its base model.
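
For intuition: the Karcher mean is the Riemannian barycenter of the weight tensors treated as points on a hypersphere, and for two equally weighted points it lands on the same geodesic that SLERP interpolates along. Below is a minimal NumPy sketch of the iteration, illustrative only; mergekit's actual karcher implementation may differ in details such as weighting and rescaling.

import numpy as np

def karcher_mean_sphere(points, weights, iters=50, tol=1e-10):
    # Weighted Karcher mean of unit vectors on a hypersphere (illustrative sketch).
    pts = [p / np.linalg.norm(p) for p in points]
    mu = sum(w * p for w, p in zip(weights, pts))
    mu /= np.linalg.norm(mu)                      # initial guess: normalized weighted average
    for _ in range(iters):
        tangent = np.zeros_like(mu)
        for w, p in zip(weights, pts):
            cos_t = np.clip(mu @ p, -1.0, 1.0)
            theta = np.arccos(cos_t)
            if theta > 1e-12:                     # log map of p at mu
                tangent += w * (theta / np.sin(theta)) * (p - cos_t * mu)
        step = np.linalg.norm(tangent)
        if step < tol:
            break
        mu = np.cos(step) * mu + np.sin(step) * (tangent / step)  # exp map back onto the sphere
    return mu

# Two flattened weight tensors (stand-ins for Base and Instruct), merged with equal weight:
a, b = np.random.randn(8), np.random.randn(8)
merged = karcher_mean_sphere([a, b], [0.5, 0.5])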

Quantized, this merge scores 11853/14042 = 84.41% correct (80.41% prob.) on MMLU, as measured with the exllamav3 eval script.

For reference, ByteDance's Instruct model (with the exact same quantization settings) gets 11680/14042 = 83.18% correct (80.96% prob.), and the Base model by itself gets 11851/14042 = 84.40% correct (76.96% prob.).

This upload is a custom ~4.22 bpw exl3 quantization, with attention tensors at 5 bpw and MLP layers at 4 bpw. If you want a different quantization size, just ask.
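
The ~4.22 bpw average is roughly what you would expect from that mix, since MLP weights dominate the parameter count. A back-of-the-envelope check (the attention/MLP split below is an assumed ballpark for a dense ~36B transformer, not measured from Seed-OSS-36B):

# Rough average-bpw arithmetic for a mixed-precision exl3 quant (illustrative only).
attn_fraction = 0.25   # assumed share of quantized weights in attention tensors
mlp_fraction = 0.75    # assumed share of quantized weights in MLP tensors
attn_bpw, mlp_bpw = 5.0, 4.0

avg_bpw = attn_fraction * attn_bpw + mlp_fraction * mlp_bpw
print(f"~{avg_bpw:.2f} bpw average")  # ~4.25 bpw, in the ballpark of the reported ~4.22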

Merge Details

Merge Method

This model was merged using the Karcher Mean merge method, with /home/alpha/Models/Raw/ByteDance-Seed_Seed-OSS-36B-Instruct as the base.

Models Merged

The following models were included in the merge:

  • /home/alpha/Models/Raw/ByteDance-Seed_Seed-OSS-36B-Base

Configuration

The following YAML configuration was used to produce this model:

models:
  - model: /home/alpha/Models/Raw/ByteDance-Seed_Seed-OSS-36B-Base
  - model: /home/alpha/Models/Raw/ByteDance-Seed_Seed-OSS-36B-Instruct
merge_method: karcher
tokenizer:
  source: "base"
base_model: /home/alpha/Models/Raw/ByteDance-Seed_Seed-OSS-36B-Instruct
parameters:
  int8_mask: true
dtype: bfloat16
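
To reproduce the merge, saving the YAML above as config.yaml and running mergekit's standard CLI (with the model paths adjusted to wherever the two models live locally) should regenerate it:

mergekit-yaml config.yaml ./Seed-OSS-36B-Base-Instruct-Karcher-Merge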