---
library_name: transformers
license: apache-2.0
base_model: THUDM/GLM-4-32B-0414
tags:
  - roleplay
  - conversational
  - axolotl
  - qwen
---

# Remnant GLM4 32B (series 1)

English | 简体中文

*A wisp of dust drifts through the air. It feels as though it came from a bygone era, but you cannot trace it. It lands on your tongue. It tastes wonderful.*


Remnant is a series of finetuned LLMs focused on SFW and NSFW roleplay and conversation.

## Quantized Versions

GGUF:

- To be added!

EXL3:

- To be added!

EXL2:

- To be added!

Other formats:

- To be added!

## Recommended Settings

Chat template: GLM4

Samplers:

- Temperature 1.0
- Min-p 0.1
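
For reference, here is a minimal inference sketch that applies these settings with Hugging Face Transformers. The repository id and generation length below are illustrative assumptions (they are not confirmed by this card), and `min_p` sampling requires a reasonably recent transformers release.

```python
# Minimal sketch: load the model and generate with the recommended samplers.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "allura-org/remnant-glm4-32b"  # assumption: adjust to the actual repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are a narrator in a quiet, dusty attic."},
    {"role": "user", "content": "Describe what I taste on my tongue."},
]

# The GLM4 chat template ships with the tokenizer.
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

# Recommended samplers from this card: temperature 1.0, min_p 0.1.
output = model.generate(
    input_ids,
    max_new_tokens=512,
    do_sample=True,
    temperature=1.0,
    min_p=0.1,
)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```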

## Credits

Special thanks to Allura and ilya <3

Heartfelt thanks as well to the people behind:

- Axolotl (the training framework)
- Zhipu AI (the base model)
- Prime Intellect (the compute)
- and my bank (the funding)

## Other Information

Built with Axolotl

See axolotl config

axolotl version: `0.10.0.dev0`

# === Model Configuration ===
base_model: THUDM/GLM-4-32B-0414  # e.g. "mistralai/Mistral-Small-24B-Instruct-2501"
load_in_8bit: false
load_in_4bit: true

# === Training Settings ===
num_epochs: 2
micro_batch_size: 3
gradient_accumulation_steps: 2
sequence_len: 8192
sample_packing: true
pad_to_sequence_len: true

# === Hyperparameter Configuration ===
optimizer: adamw_8bit
# Apollo-mini configuration:
#optim_args: "proj=random,rank=1,scale=128.0,scale_type=tensor,update_proj_gap=200"
# Standard Apollo configuration:
# optim_args: 
#optim_target_modules: all_linear
learning_rate: 1e-5
lr_scheduler: rex
weight_decay: 0.01
warmup_ratio: 0.05

# === LoRA ===
adapter: qlora
lora_r: 16
lora_alpha: 32
lora_dropout: 0.25
lora_target_modules:
lora_target_linear: true

# === Data Configuration ===
datasets:
  - path: allura-org/inkmix-v3.0
    type: chat_template
    split: train
    field_messages: conversations
    message_field_role: from
    message_field_content: value
    train_on_eos: all

dataset_prepared_path: last_run_prepared
chat_template: jinja
chat_template_jinja: |
  [gMASK]<sop>{%- for msg in messages %}{%- if msg.role == 'system' %}<|system|>
  {{ msg.content }}{%- elif msg.role == 'user' %}<|user|>
  {{ msg.content }}{%- elif msg.role == 'assistant' %}<|assistant|>
  {{ msg.content }}{%- endif %}{%- endfor %}{% if add_generation_prompt %}<|assistant|>{% else %}<|user|>{% endif %}

# === Plugins ===
plugins:
  - axolotl.integrations.cut_cross_entropy.CutCrossEntropyPlugin

# === Hardware Optimization ===
gradient_checkpointing: offload
gradient_checkpointing_kwargs:
  use_reentrant: false
cut_cross_entropy: true

# === Wandb Tracking ===
wandb_project: glm4-32b-inkmix-v3

# === Checkpointing ===
saves_per_epoch: 2
save_total_limit: 3

# === Advanced Settings ===
output_dir: /workspace/ckpts
bf16: auto
flash_attention: true
train_on_inputs: false
group_by_length: false
logging_steps: 1
trust_remote_code: true
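
For a quick sanity check of the GLM4 prompt format, the snippet below renders the `chat_template_jinja` string from the config above with the stock jinja2 package. The sample conversation is invented for illustration; during training, axolotl reads the dataset's `from`/`value` fields as configured above before rendering.

```python
# Preview how the chat_template_jinja above formats a conversation.
from jinja2 import Template

CHAT_TEMPLATE = (
    "[gMASK]<sop>{%- for msg in messages %}{%- if msg.role == 'system' %}<|system|>\n"
    "{{ msg.content }}{%- elif msg.role == 'user' %}<|user|>\n"
    "{{ msg.content }}{%- elif msg.role == 'assistant' %}<|assistant|>\n"
    "{{ msg.content }}{%- endif %}{%- endfor %}"
    "{% if add_generation_prompt %}<|assistant|>{% else %}<|user|>{% endif %}"
)

# Made-up example messages.
messages = [
    {"role": "system", "content": "You are a narrator."},
    {"role": "user", "content": "A wisp of dust drifts through the air."},
]

rendered = Template(CHAT_TEMPLATE).render(
    messages=messages, add_generation_prompt=True
)
print(rendered)
# [gMASK]<sop><|system|>
# You are a narrator.<|user|>
# A wisp of dust drifts through the air.<|assistant|>
```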