Model Card for abs-bvv-4
Model Description
abs-bvv-4 is a 1.9-billion-parameter decoder-only Transformer model. It is the fourth model in the Progressive Growth Transformers (PGT) series, designed to explore how linguistic and reasoning capabilities emerge as a function of model depth.
This model was not trained monolithically. Instead, it was "grown" constructively, one layer at a time, upon a foundation of frozen, non-semantic visual embeddings, as introduced in the paper "Emergent Semantics Beyond Token Embeddings: Transformer LMs with Frozen Visual Unicode Representations".
The core idea is to demonstrate an alternative, more modular and resource-efficient paradigm for building LLMs. The PGT series shows that:
- Semantic understanding can emerge without trainable embeddings.
- Complex reasoning abilities are a direct result of compositional depth.
- Models can be built incrementally, much like a living organism grows, rather than being forged all at once.
abs-bvv-4 represents the state of the model after 4 layers of progressive training. It has 4 Transformer blocks, a hidden dimension of 4096, and uses the bvv241 tokenizer family.
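For intuition, here is a minimal, illustrative sketch of how frozen, non-semantic embeddings can be derived from visual renderings of Unicode glyphs. The rendering size, font file, and toy vocabulary below are placeholder assumptions, not the exact procedure behind the bvv241 tokenizers.

# Illustrative sketch only: build a frozen embedding matrix from glyph bitmaps.
# Font path, bitmap size, and vocabulary are assumptions for this example.
import numpy as np
import torch
from PIL import Image, ImageDraw, ImageFont

def glyph_embedding(char: str, font: ImageFont.FreeTypeFont, size: int = 16) -> np.ndarray:
    """Render one character to a small grayscale bitmap and flatten it to a vector."""
    img = Image.new("L", (size, size), color=0)
    draw = ImageDraw.Draw(img)
    draw.text((0, 0), char, fill=255, font=font)
    return np.asarray(img, dtype=np.float32).flatten() / 255.0

font = ImageFont.truetype("DejaVuSans.ttf", 14)   # any Unicode-capable font (assumption)
vocab = [chr(cp) for cp in range(32, 127)]        # toy ASCII-only vocabulary for illustration
matrix = np.stack([glyph_embedding(c, font) for c in vocab])

# The embedding layer is created once from the glyph bitmaps and never trained.
embedding = torch.nn.Embedding.from_pretrained(torch.tensor(matrix), freeze=True)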
Intended Use
This model is primarily an artifact for research into emergent capabilities, constructive learning, and the role of embeddings in LLMs. It can be used for text generation, but it is not fine-tuned for specific downstream tasks and may produce unpredictable outputs. It is suitable for exploring the raw capabilities of a model trained under this novel paradigm.
Training Details
Architecture: 4-layer Decoder-Only Transformer (n_layer=4, d_model=4096, n_head=32).
Embeddings: The token embedding layer is frozen and derived from visual representations of Unicode glyphs. It is never updated during training.
Training Method: Progressive Layer-Wise Growth. The model was built by training one layer at a time: layer 1 was trained to convergence and then frozen, layer 2 was added and trained, and so on. For the deeper models in the series (layers 5 and 6), LoRA was used to fine-tune all existing layers together with the new layer to ensure global coherence. A minimal sketch of this procedure follows the fields below.
Parameters: 1.9B total.
Data: A ~9B-token mix of Wikipedia and SFT datasets (the SFT portion is roughly 10%).
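To make the growth procedure concrete, the following is a minimal sketch under simplifying assumptions: a generic PyTorch encoder layer stands in for the actual decoder block, and train_to_convergence is a hypothetical callback. This is not the training code used for the PGT series.

# Sketch of progressive layer-wise growth: each new block is trained while
# everything built so far, including the visual token embeddings, stays frozen.
import torch
import torch.nn as nn

d_model, n_head, vocab_size = 4096, 32, 65536       # vocab_size is a placeholder

embed = nn.Embedding(vocab_size, d_model)
embed.weight.requires_grad = False                   # frozen visual embeddings, never updated
blocks = nn.ModuleList()                             # the stack is grown one block at a time
lm_head = nn.Linear(d_model, vocab_size)

def grow_one_layer(train_to_convergence):
    """Append a new block, freeze everything trained so far, train only the new parts."""
    for blk in blocks:
        blk.requires_grad_(False)                    # earlier layers stay fixed
    new_block = nn.TransformerEncoderLayer(d_model, n_head, batch_first=True)
    blocks.append(new_block)
    trainable = list(new_block.parameters()) + list(lm_head.parameters())
    optimizer = torch.optim.AdamW(trainable, lr=1e-4)
    train_to_convergence(embed, blocks, lm_head, optimizer)

# Layers 1..4 are added and trained sequentially in this fashion; for the deeper
# models in the series (layers 5 and 6), LoRA adapters over the frozen stack are
# trained jointly with the new block to preserve global coherence.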
Limitations and Bias
This model is a research prototype and has several limitations:
Not Instruction-Tuned: It is a base model and will not follow instructions or engage in dialogue reliably.
Potential for Hallucinations: Like all LLMs, it can generate factually incorrect or nonsensical text.
Data Bias: Trained primarily on Wikipedia, it will reflect the biases present in that corpus.
Limited Scope: The model was trained on a relatively small dataset (9B tokens) compared to state-of-the-art models. Its performance is intended to be evaluated relative to its own baseline (trainable embeddings) and shallower versions, not against giant commercial models.
🧑‍🔬 Citation & Concept
If you use this model or the underlying concepts in your research, please cite our work:
@misc{bochkov2025emergentsemanticstokenembeddings,
title={Emergent Semantics Beyond Token Embeddings: Transformer LMs with Frozen Visual Unicode Representations},
author={A. Bochkov},
year={2025},
eprint={2507.04886},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2507.04886},
}
@misc{bochkov2025growingtransformersmodularcomposition,
title={Growing Transformers: Modular Composition and Layer-wise Expansion on a Frozen Substrate},
author={A. Bochkov},
year={2025},
eprint={2507.07129},
archivePrefix={arXiv},
primaryClass={cs.LG},
url={https://arxiv.org/abs/2507.07129},
}
This work demonstrates that transformer blocks, not token embeddings, carry the semantic burden in LLMs, a step toward modular, fusable, multilingual LMs.
How to Use
The model can be loaded using the transformers library. Note that trust_remote_code=True is required because it uses a custom model architecture.
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

# Load the tokenizer and the custom-architecture model (bfloat16, on GPU)
tokenizer = AutoTokenizer.from_pretrained('Bochkov/abs-bvv-4')
model = AutoModelForCausalLM.from_pretrained(
    'Bochkov/abs-bvv-4',
    trust_remote_code=True,
    torch_dtype=torch.bfloat16
).to('cuda')

# Encode a prompt and move it to the same device as the model
inputs = tokenizer("Hello, I am a language model ", return_tensors="pt").to('cuda')

# Generate text with sampling
outputs = model.generate(
    **inputs,
    max_new_tokens=100,
    temperature=0.8,
    top_k=50,
    top_p=0.95,
    do_sample=True
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
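As a quick sanity check, and assuming the remote code exposes the token embedding through the standard get_input_embeddings() accessor, you can confirm that the visual embedding matrix is frozen:

# Verify that the token embedding layer is frozen (assumes the custom
# architecture implements the standard get_input_embeddings() accessor).
emb = model.get_input_embeddings()
print(emb.weight.requires_grad)   # expected: False for the frozen visual embeddings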