BVV-MoE: Mixture-of-Experts LLM with Frozen Shared Embeddings (Russian + Chinese, Demo-Scale)
Model size: ~0.9B parameters
Languages: Russian, Chinese, some English
Model Summary
best_bvv_moe is a demonstration-scale Mixture-of-Experts (MoE) decoder-only causal language model combining two independently trained models (Russian and Chinese) with strictly frozen, shared visual/Unicode-based token embeddings.
- Each "expert" was pre-trained on a small subordinate corpus (English-Russian, English-Chinese) with ~9B total tokens, mixing 10% SFT-like samples, using the same, fully frozen embedding matrix for all languages.
- After separate training, the two models were merged at the transformer-block level using a "mean logits" MoE fusion approach (see the sketch after this list); thanks to the shared frozen token embeddings, no retraining or re-alignment of the embeddings was needed.
- This model is a conceptual/research artifact, designed to illustrate that frozen, non-semantic embeddings enable combining multilingual LMs into a working MoE model without catastrophic loss of performance.
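The merge can be pictured in a few lines of PyTorch. The sketch below is illustrative only and is not the repository's merge code: the expert checkpoint paths and the moe_mean_logits helper are hypothetical placeholders. It simply shows why a shared frozen vocabulary and embedding matrix make element-wise logit averaging well-defined.

# Illustrative sketch (hypothetical paths, not published checkpoints): averaging the
# next-token logits of two decoder-only experts that share one frozen embedding
# matrix and tokenizer.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

ru_expert = AutoModelForCausalLM.from_pretrained("path/to/ru_expert", trust_remote_code=True).eval()
zh_expert = AutoModelForCausalLM.from_pretrained("path/to/zh_expert", trust_remote_code=True).eval()
tokenizer = AutoTokenizer.from_pretrained("Bochkov/best_bvv_moe")

@torch.no_grad()
def moe_mean_logits(input_ids):
    # Both experts use the identical frozen vocabulary, so their output
    # distributions live in the same space and can be combined element-wise.
    logits_ru = ru_expert(input_ids=input_ids).logits
    logits_zh = zh_expert(input_ids=input_ids).logits
    return (logits_ru + logits_zh) / 2.0

ids = tokenizer("Привет, 你好!", return_tensors="pt").input_ids
next_id = moe_mean_logits(ids)[0, -1].argmax().item()
print(tokenizer.decode([next_id]))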
Key Features
- Frozen, Unicode/visual token embeddings: All tokens (for all supported languages) share the same frozen embedding matrix, derived from Unicode code points and visual forms rather than statistical co-occurrence (a minimal sketch of the frozen-table mechanics follows this list).
- Direct Mixture-of-Experts merge: Two language models (Russian-, Chinese-oriented) are combined without retraining via simple logits averaging, made possible by the strictly-shared embeddings.
- Demo-scale: Trained on a modest dataset (9B tokens), with a small SFT fraction (~10%), intended to illustrate the principle, not to maximize absolute scores.
- Comparison available: Standard models with unfrozen (learnable) embeddings are released separately for direct comparison of convergence and generalization.
- Extremely "clean" codebase: No reliance on exotic pipeline tricks; clear transformer architecture, easy to review and experiment with.
Use Case / Intended Purpose
This model is not an end-user chatbot solution.
Its purpose is:
- To demonstrate new possibilities in LM architecture:
  - Multilingual/multimodal MoE with frozen, shared embeddings
  - Modular, "plug-and-play" scaling and mixing of LMs
  - Comparison of convergence behavior between frozen and unfrozen/learnable embeddings
- As a reference implementation for research communities investigating model unification, low-resource language mixing, or studying where "meaning" emerges inside LLM architectures.
Evaluation
Benchmark results (test set, mean ± std):
MMLU: 23.44% ± 0.28%
ARC-e: 23.74% ± 1.02%
ARC-c: 25.28% ± 2.07%
C-SENSE: 19.69% ± 1.13%
SQUAD: 19.73% ± 1.45%
BLEU:
en-ru: 6.52% ± 0.62%
ru-en: 6.22% ± 0.38%
en-zh: 2.93% ± 0.34%
zh-en: 4.95% ± 0.59%
🧑‍🔬 Citation & Concept
If you use or build upon this demo, please cite:
@misc{bochkov2025emergentsemanticstokenembeddings,
  title={Emergent Semantics Beyond Token Embeddings: Transformer LMs with Frozen Visual Unicode Representations},
  author={A. Bochkov},
  year={2025},
  eprint={2507.04886},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  url={https://arxiv.org/abs/2507.04886},
}

@misc{bochkov2025growingtransformersmodularcomposition,
  title={Growing Transformers: Modular Composition and Layer-wise Expansion on a Frozen Substrate},
  author={A. Bochkov},
  year={2025},
  eprint={2507.07129},
  archivePrefix={arXiv},
  primaryClass={cs.LG},
  url={https://arxiv.org/abs/2507.07129},
}
This work demonstrates that transformer blocks, not token embeddings, carry the semantic burden in LLMs, a step toward modular, fusable, multilingual LMs.
Example Usage
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model = AutoModelForCausalLM.from_pretrained('Bochkov/best_bvv_moe', trust_remote_code=True).to('cuda')
tokenizer = AutoTokenizer.from_pretrained('Bochkov/best_bvv_moe')

inputs = tokenizer("Hello, мир! ", return_tensors="pt").to('cuda')
outputs = model.generate(
    **inputs,
    max_new_tokens=100,
    temperature=0.8,
    top_k=50,
    top_p=0.95,
    do_sample=True
)
print(tokenizer.decode(outputs[0]))