---
license: apache-2.0
tags:
- bvv
- frozen-embeddings
- language-model
- Russian
- English
- conceptual-demo
- toy-model
- academic
model-index:
- name: Bochkov/best_bvv_unfrozen_ru
results:
- task:
type: text-generation
metrics:
- name: MMLU (average)
type: mmlu
value: 11.37
---
# best_bvv_unfrozen_ru
## Model summary
**best_bvv_unfrozen_ru** is a 500M parameter Causal Language Model (LM) for Russian (and some English), trained as an open proof-of-concept for the "frozen embeddings" paradigm. This version uses **fully trainable token embeddings** – a standard setup – and serves as a baseline for direct comparison with the corresponding "frozen-embedding" model [`Bochkov/best_bvv_ru`](https://huggingface.co/Bochkov/best_bvv_ru).
- **Architecture:** Transformer, rotary positional encoding
- **Vocabulary:** Custom Unicode-based, 131072 tokens
- **Embedding:** *Unfrozen* (trainable, classic)
- **Pretraining data:** 9B tokens, predominantly Russian (Wikipedia, SQuAD2.0, TriviaQA, NQ, etc.), with 10% SFT data (instruction/factual Q&A) mixed in
- **Purpose:** Compare the learning capacity and generalization of trainable vs. frozen-embedding LMs on small data (see the sketch after this list)
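For intuition, below is a minimal sketch of how the frozen-embedding counterpart differs from this fully trainable baseline. It assumes the custom model class exposes the standard `get_input_embeddings()` accessor from 🤗 Transformers, which may not hold exactly for this repository's remote code.

```python
from transformers import AutoModelForCausalLM

# Illustrative only: this model (best_bvv_unfrozen_ru) trains its token
# embeddings; in the frozen-embedding paradigm they would be excluded
# from optimization, roughly like this:
model = AutoModelForCausalLM.from_pretrained(
    "Bochkov/best_bvv_unfrozen_ru", trust_remote_code=True
)

for param in model.get_input_embeddings().parameters():  # assumes the standard accessor
    param.requires_grad = False  # what the frozen variant does; NOT done for this model

# Count trainable vs. total parameters to see the effect of freezing
trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
total = sum(p.numel() for p in model.parameters())
print(f"trainable parameters: {trainable:,} / {total:,}")
```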
## Key results
- **MMLU (avg):** 11.37% (±0.18%)
- **ARC-e:** 20.56%
- **ARC-c:** 24.18%
- **C-Sense:** 18.79%
- **SQuAD:** 13.55%
- **BLEU [en-ru]:** 8.40%
## Intended use
- **Research & benchmarking:** Designed to benchmark the new paradigm of "frozen" vs. traditional embedding LMs under realistic, small-data conditions.
- **Comparison:** Use alongside [`Bochkov/best_bvv_ru`](https://huggingface.co/Bochkov/best_bvv_ru) for ablation studies, transfer/interlingua research, and MoE fusion experiments (a comparison sketch follows this list).
- **NOT for production!** This model is for research and experimentation only. Text quality is moderate and factual hallucinations are possible.
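As a starting point for such comparisons, the sketch below scores both checkpoints on the same text via per-token perplexity. The `perplexity` helper is illustrative only and assumes the custom model class returns a standard causal-LM output with `logits`.

```python
import torch
import torch.nn.functional as F
from transformers import AutoModelForCausalLM, AutoTokenizer

@torch.no_grad()
def perplexity(model_id: str, text: str) -> float:
    """Rough per-token perplexity of `text` under `model_id` (illustrative helper)."""
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True).eval()
    ids = tokenizer(text, return_tensors="pt").input_ids
    logits = model(input_ids=ids).logits  # assumes a standard causal-LM output object
    # Shift so each position predicts the next token, then average the cross-entropy
    loss = F.cross_entropy(logits[:, :-1].flatten(0, 1), ids[:, 1:].flatten())
    return float(loss.exp())

text = "Москва - столица России."  # "Moscow is the capital of Russia."
for model_id in ("Bochkov/best_bvv_unfrozen_ru", "Bochkov/best_bvv_ru"):
    print(model_id, perplexity(model_id, text))
```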
## 🧑🔬 Citation & Concept
If you use or build upon this demo, please cite:
```bibtex
@misc{bochkov2025emergentsemanticstokenembeddings,
title={Emergent Semantics Beyond Token Embeddings: Transformer LMs with Frozen Visual Unicode Representations},
author={A. Bochkov},
year={2025},
eprint={2507.04886},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2507.04886},
}
@misc{bochkov2025growingtransformersmodularcomposition,
title={Growing Transformers: Modular Composition and Layer-wise Expansion on a Frozen Substrate},
author={A. Bochkov},
year={2025},
eprint={2507.07129},
archivePrefix={arXiv},
primaryClass={cs.LG},
url={https://arxiv.org/abs/2507.07129},
}
```
This work demonstrates that transformer blocks, not token embeddings, carry the semantic burden in LLMs — a step toward modular, fusable, multilingual LMs.
## Example Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

# Load the model and tokenizer (custom architecture, hence trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    'Bochkov/best_bvv_unfrozen_ru',
    trust_remote_code=True
).to('cuda')
tokenizer = AutoTokenizer.from_pretrained('Bochkov/best_bvv_unfrozen_ru')

# Encode a mixed English/Russian prompt and move it to the same device as the model
inputs = tokenizer("Hello, мир! ", return_tensors="pt").to('cuda')

# Sample up to 100 new tokens with temperature + top-k/top-p sampling
outputs = model.generate(
    **inputs,
    max_new_tokens=100,
    temperature=0.8,
    top_k=50,
    top_p=0.95,
    do_sample=True
)
print(tokenizer.decode(outputs[0]))
```