best_bvv_unfrozen_ru

Model summary

best_bvv_unfrozen_ru is a 500M-parameter causal language model (LM) for Russian (and some English), trained as an open proof of concept for the "frozen embeddings" paradigm. This version uses fully trainable token embeddings – the standard setup – and serves as the baseline for direct comparison with the corresponding frozen-embedding model Bochkov/best_bvv_ru.

  • Architecture: Transformer, rotary positional encoding
  • Vocabulary: Custom Unicode-based, 131,072 tokens
  • Embedding: Unfrozen (trainable, classic)
  • Pretraining data: 9B tokens, predominantly Russian (Wikipedia, SQuAD2.0, TriviaQA, NQ, etc.), with 10% SFT data (instruction/factual Q&A) mixed in
  • Purpose: Compare the learning capacity and generalization of fully trainable vs. frozen-embedding LMs on small data (see the sketch after this list for how the two setups differ)
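
For reference, the only intended difference between this checkpoint and Bochkov/best_bvv_ru is whether the token-embedding matrix receives gradient updates during pretraining. Below is a minimal sketch of that distinction using the standard Hugging Face API; it is illustrative only, not the exact training code used for either model.

import torch
from transformers import AutoModelForCausalLM

# Baseline (this model): every parameter, including the token embeddings, is trainable.
model = AutoModelForCausalLM.from_pretrained('Bochkov/best_bvv_unfrozen_ru', trust_remote_code=True)

# Frozen-embedding setup (illustrative): exclude the embedding matrix from gradient updates.
model.get_input_embeddings().weight.requires_grad_(False)

# Count which parameters an optimizer would actually update.
trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
total = sum(p.numel() for p in model.parameters())
print(f"trainable parameters: {trainable:,} / {total:,}")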

Key results

  • MMLU (avg): 11.37% (±0.18%)
  • ARC-e: 20.56%
  • ARC-c: 24.18%
  • C-Sense: 18.79%
  • SQuAD: 13.55%
  • BLEU [en-ru]: 8.40%

Intended use

  • Research & benchmarking: Designed to benchmark the new paradigm of "frozen" vs. traditional embedding LMs under realistic, small-data conditions.
  • Comparison: Use alongside Bochkov/best_bvv_ru for ablation studies, transfer/interlingua research, and MoE fusion experiments (a minimal comparison sketch follows this list).
  • NOT for production! This model is for research and experimentation only. Text quality is moderate, and factual hallucinations are possible.
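
A minimal side-by-side comparison sketch, prompting both checkpoints with the same input. It assumes both repositories ship the same custom tokenizer; the prompt and generation settings are placeholders, not recommended values.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

device = 'cuda' if torch.cuda.is_available() else 'cpu'
prompt = "Вопрос: какая столица России? Ответ:"  # "Question: what is the capital of Russia? Answer:"

for name in ['Bochkov/best_bvv_unfrozen_ru', 'Bochkov/best_bvv_ru']:
    tokenizer = AutoTokenizer.from_pretrained(name)
    model = AutoModelForCausalLM.from_pretrained(name, trust_remote_code=True).to(device)
    inputs = tokenizer(prompt, return_tensors='pt').to(device)
    with torch.no_grad():
        # Greedy decoding keeps the comparison deterministic across the two models.
        outputs = model.generate(**inputs, max_new_tokens=50, do_sample=False)
    print(name, '->', tokenizer.decode(outputs[0]))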

πŸ§‘β€πŸ”¬ Citation & Concept

If you use or build upon this demo, please cite:

@misc{bochkov2025emergentsemanticstokenembeddings,
      title={Emergent Semantics Beyond Token Embeddings: Transformer LMs with Frozen Visual Unicode Representations}, 
      author={A. Bochkov},
      year={2025},
      eprint={2507.04886},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2507.04886}, 
}

@misc{bochkov2025growingtransformersmodularcomposition,
      title={Growing Transformers: Modular Composition and Layer-wise Expansion on a Frozen Substrate}, 
      author={A. Bochkov},
      year={2025},
      eprint={2507.07129},
      archivePrefix={arXiv},
      primaryClass={cs.LG},
      url={https://arxiv.org/abs/2507.07129}, 
}

This work demonstrates that transformer blocks, not token embeddings, carry the semantic burden in LLMs, a step toward modular, fusable, multilingual LMs.

Example Usage

from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

# Load the model and its custom Unicode-based tokenizer.
# trust_remote_code=True is required because the repository ships custom model code.
model = AutoModelForCausalLM.from_pretrained('Bochkov/best_bvv_unfrozen_ru', trust_remote_code=True).to('cuda')
tokenizer = AutoTokenizer.from_pretrained('Bochkov/best_bvv_unfrozen_ru')

# Tokenize a mixed English/Russian prompt and move it to the same device as the model.
inputs = tokenizer("Hello, мир! ", return_tensors="pt").to('cuda')

# Sample up to 100 new tokens with temperature, top-k, and nucleus sampling.
outputs = model.generate(
    **inputs,
    max_new_tokens=100,
    temperature=0.8,
    top_k=50,
    top_p=0.95,
    do_sample=True
)
print(tokenizer.decode(outputs[0]))