best_bvv_zh

best_bvv_zh is a conceptual bilingual (English + Chinese) transformer language model trained from scratch on a small ~9B-token corpus. It serves as a demonstration of the frozen-embedding hypothesis: that robust, language-agnostic, and easily combinable language models can be trained on top of a fixed, non-trainable embedding layer.

  • The embedding matrix is frozen after visual-based (Unicode-morpheme) initialization.
  • All transformer layers and the output head are trainable (a minimal training-setup sketch follows this list).
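
A minimal PyTorch sketch of this setup, using the dimensions listed in this card; the module choices, hyperparameters, and placeholder vectors below are assumptions for illustration, not the released training code.

import torch
import torch.nn as nn

# Frozen embedding table (placeholder for the precomputed visual Unicode-morpheme
# vectors) feeding trainable transformer layers and a trainable output head.
vocab_size, hidden_dim, n_layers, n_heads = 131072, 1024, 16, 32

visual_vectors = torch.randn(vocab_size, hidden_dim)  # stand-in for the real visual vectors
embedding = nn.Embedding.from_pretrained(visual_vectors, freeze=True)  # never updated

blocks = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=hidden_dim, nhead=n_heads, batch_first=True),
    num_layers=n_layers,
)
lm_head = nn.Linear(hidden_dim, vocab_size)

# Only the transformer blocks and the output head receive gradient updates.
optimizer = torch.optim.AdamW(
    list(blocks.parameters()) + list(lm_head.parameters()), lr=3e-4
)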

Key features

  • Trained on a small English + Chinese corpus (~9B tokens).

  • Vocabulary: 131,072 tokens (Unicode/visual units plus frequent n-grams).

  • 16-layer transformer, 1024 hidden dim, 32 heads.

  • Demonstrates that frozen, compositional, language-agnostic embeddings support stable representation learning and allow separately trained models to be combined directly into Mixture-of-Experts (MoE) models (see the sketch after this list).

  • Provides a direct comparison against the "unfrozen" baseline Bochkov/best_bvv_unfrozen_zh.
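
Because the experts share one frozen, language-agnostic embedding table, separately trained models can in principle be placed behind a small router. The class and gating scheme below are hypothetical and shown only to illustrate the idea; see the cited papers for the actual composition method.

import torch
import torch.nn as nn

class TwoExpertMoE(nn.Module):
    """Hypothetical token-level mixture of two experts sharing one frozen embedding."""

    def __init__(self, embedding: nn.Embedding, expert_a: nn.Module,
                 expert_b: nn.Module, hidden_dim: int):
        super().__init__()
        self.embedding = embedding                            # shared, frozen substrate
        self.experts = nn.ModuleList([expert_a, expert_b])    # e.g. two language experts
        self.gate = nn.Linear(hidden_dim, len(self.experts))  # small trainable router

    def forward(self, input_ids: torch.Tensor) -> torch.Tensor:
        h = self.embedding(input_ids)                          # (batch, seq, hidden)
        weights = torch.softmax(self.gate(h), dim=-1)          # per-token expert weights
        expert_out = torch.stack([e(h) for e in self.experts], dim=-1)  # (batch, seq, hidden, 2)
        return (expert_out * weights.unsqueeze(2)).sum(dim=-1)  # weighted sum over experts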

Intended use

  • Academic and engineering demonstration.
  • Proof-of-concept for multilingual/fusion/frozen-embedding MoE research.
  • NOT intended or suitable for production text generation or as a source of factual knowledge (the training corpus is only ~9B tokens).

Model comparison (vs unfrozen baseline)

| Model | Total Params | MMLU avg (%) | BLEU en-zh (%) | BLEU zh-en (%) |
|---|---|---|---|---|
| Bochkov/best_bvv_zh (frozen) | 0.5B | 19.4 | 1.41 | 7.78 |
| Bochkov/best_bvv_unfrozen_zh (baseline) | 0.5B | 14.0 | 1.65 | 5.93 |

πŸ§‘β€πŸ”¬ Citation & Concept

If you use or build upon this demo, please cite:

@misc{bochkov2025emergentsemanticstokenembeddings,
      title={Emergent Semantics Beyond Token Embeddings: Transformer LMs with Frozen Visual Unicode Representations}, 
      author={A. Bochkov},
      year={2025},
      eprint={2507.04886},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2507.04886}, 
}

@misc{bochkov2025growingtransformersmodularcomposition,
      title={Growing Transformers: Modular Composition and Layer-wise Expansion on a Frozen Substrate}, 
      author={A. Bochkov},
      year={2025},
      eprint={2507.07129},
      archivePrefix={arXiv},
      primaryClass={cs.LG},
      url={https://arxiv.org/abs/2507.07129}, 
}

This work demonstrates that transformer blocks, not token embeddings, carry the semantic burden in LLMs: a step toward modular, fusable, multilingual LMs.

Example Usage

from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

# Load the model and tokenizer for this card
model = AutoModelForCausalLM.from_pretrained('Bochkov/best_bvv_zh', trust_remote_code=True).to('cuda')
tokenizer = AutoTokenizer.from_pretrained('Bochkov/best_bvv_zh')

# Bilingual (English + Chinese) prompt
inputs = tokenizer("Hello, 世界! ", return_tensors="pt").to('cuda')

outputs = model.generate(
    **inputs,
    max_new_tokens=100,
    temperature=0.8,
    top_k=50,
    top_p=0.95,
    do_sample=True
)
print(tokenizer.decode(outputs[0]))