nemo_bvv_moe

nemo_bvv_moe is a multilingual Mixture-of-Experts (MoE) model built by combining nemo_bvv_ru and nemo_bvv_zh. Because both source models use the same fully shared, frozen token embeddings, they can be fused directly without re-training the embeddings.

Model Details

  • Parameters: ~800M (MoE with two 12x1024 transformer branches, i.e. 12 layers with hidden size 1024 each)
  • Languages: Russian, Chinese
  • Tokenizer: Mistral Nemo (SOTA tokenizer; embeddings precomputed and frozen)
  • Training: small research corpus, 10% SFT/pretraining mix
  • MoE Fusion: the two branches are joined through the shared embedding table and their output logits are averaged (see the sketch after this list).
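
To make the fusion step concrete, here is a minimal PyTorch sketch of logit averaging over a shared, frozen embedding table. It is illustrative only, not the released implementation: the names LogitAveragingMoE, branch_ru, and branch_zh are hypothetical, and each branch is assumed to be a module that maps embedded tokens to vocabulary logits.

import torch
import torch.nn as nn

class LogitAveragingMoE(nn.Module):
    """Illustrative sketch: two transformer branches share one frozen
    embedding table, and their output logits are averaged."""

    def __init__(self, shared_embedding: nn.Embedding,
                 branch_ru: nn.Module, branch_zh: nn.Module):
        super().__init__()
        self.embed = shared_embedding
        self.embed.weight.requires_grad_(False)  # embeddings stay frozen
        self.branch_ru = branch_ru               # hypothetical 12x1024 branch from nemo_bvv_ru
        self.branch_zh = branch_zh               # hypothetical 12x1024 branch from nemo_bvv_zh

    def forward(self, input_ids: torch.Tensor) -> torch.Tensor:
        h = self.embed(input_ids)                # identical token representations for both branches
        logits_ru = self.branch_ru(h)            # each branch returns (batch, seq, vocab) logits
        logits_zh = self.branch_zh(h)
        return (logits_ru + logits_zh) / 2       # simple average of the two experts' logits

Because the embedding table is identical and frozen in both source models, no alignment or re-training step is needed before the logits are averaged.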

Key Results (Selected)

  • MMLU (average): 8.99%
  • ARC-e: 22.44%
  • ARC-c: 23.75%
  • Commonsense-QA: 19.90%
  • SQUAD: 7.70%
  • BLEU [en-ru]: 4.13%
  • BLEU [ru-en]: 3.37%
  • BLEU [en-zh]: 0.88%
  • BLEU [zh-en]: 2.26% (a BLEU scoring sketch follows this list)
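
The sketch below shows one way to compute corpus-level translation BLEU with sacrebleu. It is a hedged illustration: the prompt template, test sentences, and decoding settings are assumptions, not the exact setup used to produce the numbers above.

import sacrebleu
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('Bochkov/nemo_bvv_moe', trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained('Bochkov/nemo_bvv_moe', trust_remote_code=True)

def translate(text: str, target_lang: str) -> str:
    # Hypothetical prompt format; the model card does not document the one used.
    prompt = f"Translate to {target_lang}: {text}\n"
    inputs = tokenizer(prompt, return_tensors="pt")
    with torch.no_grad():
        out = model.generate(**inputs, max_new_tokens=64)
    # Keep only the generated continuation, dropping the prompt tokens.
    return tokenizer.decode(out[0][inputs['input_ids'].shape[1]:], skip_special_tokens=True)

sources = ["The weather is nice today."]          # English source sentences
references = [["Сегодня хорошая погода."]]        # one reference stream for en-ru
hypotheses = [translate(s, "Russian") for s in sources]
print(sacrebleu.corpus_bleu(hypotheses, references).score)  # corpus-level BLEU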

Limitations

  • Trained on a tiny research corpus for demonstration only.
  • Accuracy is far below large-scale production models.
  • Not suitable for commercial or mission-critical tasks.
  • No safety or fairness tuning.
  • First open MoE fusion for Russian/Chinese with precomputed, frozen SOTA embeddings.
  • Intended only for research and demonstration of model fusion, not for high-accuracy tasks.
  • Demonstrates that SOTA tokenizers can supply compatible precomputed embeddings; the approach can be repeated with other tokenizers (a minimal embedding-freezing sketch follows this list).
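
The "precomputed, frozen embeddings" recipe can be reproduced with any tokenizer. The sketch below is a generic illustration, not the author's exact procedure: the vocabulary size, embedding dimension, and random placeholder vectors are assumptions standing in for a real precomputed embedding matrix.

import torch
import torch.nn as nn

# Assumed sizes for illustration only; replace with your tokenizer's vocab
# size and your actual precomputed embedding vectors.
vocab_size, dim = 131072, 1024
precomputed = torch.randn(vocab_size, dim)  # placeholder for precomputed embeddings

# Load the precomputed vectors into an embedding layer and freeze it,
# so only the transformer blocks above it are trained.
embedding = nn.Embedding.from_pretrained(precomputed, freeze=True)

token_ids = torch.tensor([[1, 2, 3]])
print(embedding(token_ids).shape)            # torch.Size([1, 3, 1024])
assert not embedding.weight.requires_grad    # embeddings will not be updated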

🧑‍🔬 Citation & Concept

If you use this model or the underlying concepts in your research, please cite our work:

@misc{bochkov2025emergentsemanticstokenembeddings,
      title={Emergent Semantics Beyond Token Embeddings: Transformer LMs with Frozen Visual Unicode Representations}, 
      author={A. Bochkov},
      year={2025},
      eprint={2507.04886},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2507.04886}, 
}

@misc{bochkov2025growingtransformersmodularcomposition,
      title={Growing Transformers: Modular Composition and Layer-wise Expansion on a Frozen Substrate}, 
      author={A. Bochkov},
      year={2025},
      eprint={2507.07129},
      archivePrefix={arXiv},
      primaryClass={cs.LG},
      url={https://arxiv.org/abs/2507.07129}, 
}

This work demonstrates that transformer blocks, not token embeddings, carry the semantic burden in LLMs: a step toward modular, fusable, multilingual LMs.

Usage Example

from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

# Fall back to CPU if no GPU is available.
device = 'cuda' if torch.cuda.is_available() else 'cpu'

# trust_remote_code is required for the custom MoE architecture.
tokenizer = AutoTokenizer.from_pretrained('Bochkov/nemo_bvv_moe', trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained('Bochkov/nemo_bvv_moe', trust_remote_code=True).to(device)

# Prompts can be in Russian or Chinese.
inputs = tokenizer("Input prompt in Russian or Chinese...", return_tensors="pt").to(device)
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))