Model Card for abs-bvv-5

Model Description

abs-bvv-5 is a 2.1-billion-parameter decoder-only Transformer model. It is the fifth model in the Progressive Growth Transformers (PGT) series, designed to explore how linguistic and reasoning capabilities emerge as a function of model depth.

This model was not trained monolithically. Instead, it was "grown" constructively, one layer at a time, upon a foundation of frozen, non-semantic visual embeddings, as introduced in the paper "Emergent Semantics Beyond Token Embeddings: Transformer LMs with Frozen Visual Unicode Representations".

The core idea is to demonstrate an alternative, more modular and resource-efficient paradigm for building LLMs. The PGT series shows that:

  1. Semantic understanding can emerge without trainable embeddings.
  2. Complex reasoning abilities are a direct result of compositional depth.
  3. Models can be built incrementally, much like a living organism grows, rather than being forged all at once.

abs-bvv-5 represents the state of the model after 5 layers of progressive training. It has 5 Transformer blocks, a hidden dimension of 4096, and uses the bvv241 tokenizer family.
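
To make the frozen-substrate idea concrete, below is a minimal PyTorch sketch of the architecture described above. The class names (FrozenVisualEmbedding, DecoderOnlyStack), the use of nn.TransformerEncoderLayer as a stand-in for the real blocks, and the tied output projection are illustrative assumptions, not the released implementation.

import torch
import torch.nn as nn

class FrozenVisualEmbedding(nn.Module):
    """Token embeddings precomputed offline from rendered Unicode glyphs.
    Registered as a buffer, so they receive no gradient updates (sketch only)."""
    def __init__(self, glyph_features: torch.Tensor):  # (vocab_size, d_model)
        super().__init__()
        self.register_buffer("weight", glyph_features)

    def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
        return self.weight[token_ids]

class DecoderOnlyStack(nn.Module):
    """Causal Transformer blocks stacked on the frozen embedding substrate."""
    def __init__(self, glyph_features, n_layer=5, d_model=4096, n_head=32):
        super().__init__()
        self.embed = FrozenVisualEmbedding(glyph_features)
        self.blocks = nn.ModuleList(
            nn.TransformerEncoderLayer(d_model, n_head, batch_first=True)
            for _ in range(n_layer)
        )

    def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
        x = self.embed(token_ids)
        T = token_ids.size(1)
        causal_mask = torch.triu(
            torch.full((T, T), float("-inf"), device=token_ids.device), diagonal=1
        )
        for block in self.blocks:
            x = block(x, src_mask=causal_mask)
        return x @ self.embed.weight.T  # logits via the frozen embedding matrix

In this sketch only the Transformer blocks carry trainable parameters; the embedding matrix stays fixed throughout training.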

Intended Use

This model is primarily an artifact for research into emergent capabilities, constructive learning, and the role of embeddings in LLMs. It can be used for text generation, but it is not fine-tuned for specific downstream tasks and may produce unpredictable outputs. It is suitable for exploring the raw capabilities of a model trained under this novel paradigm.

Performance

The model was evaluated on several standard benchmarks; scores reflect performance on held-out test sets.

Benchmark   Score (%)   σ (%)
MMLU        20.33       0.34
ARC-e       20.42       0.90
ARC-c       23.24       1.30
C-SENSE     19.80       1.00
SQuAD        2.30       0.73
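
The card does not state how the σ column is computed; assuming it is the binomial standard error of the accuracy estimate, the MMLU figure is consistent with that reading (MMLU's test split has roughly 14,000 questions). A small sanity-check sketch:

import math

def accuracy_standard_error(accuracy: float, n_examples: int) -> float:
    """Binomial standard error of an accuracy estimate: sqrt(p * (1 - p) / n)."""
    return math.sqrt(accuracy * (1 - accuracy) / n_examples)

# 20.33% accuracy on ~14k MMLU test questions gives roughly the reported 0.34%.
print(f"{100 * accuracy_standard_error(0.2033, 14042):.2f}%")  # -> 0.34%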

A key finding from the PGT series is that extractive question-answering ability (SQuAD) emerges only in the deeper models of the series.

Training Details

Architecture: 5-layer Decoder-Only Transformer (n_layer=5, d_model=4096, n_head=32).

Embeddings: The token embedding layer is frozen and derived from visual representations of Unicode glyphs. It is never updated during training.

Training Method: Progressive Layer-Wise Growth. The model was built by training one layer at a time: Layer 1 was trained to convergence and then frozen, Layer 2 was added and trained, and so on. For the deeper layers of the series (layers 5 and 6), LoRA was used to fine-tune all existing layers simultaneously with the new layer to ensure global coherence. A schematic sketch of this loop appears below, after the data description.

Parameters: 2.1B total.

Data: A ~9B-token mix of Wikipedia and SFT datasets (the SFT portion making up roughly 10%).
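
As referenced above, the growth loop can be summarised schematically. The sketch below is a hypothetical outline of progressive layer-wise growth, reusing the DecoderOnlyStack sketch from the Model Description; grow_and_train, make_block, and train_one_stage are placeholder names, and the LoRA adapters used in the deeper stages are only indicated in a comment rather than implemented.

import torch

def grow_and_train(model, make_block, train_one_stage, n_stages=5):
    """Progressive layer-wise growth (schematic sketch, not the released code)."""
    for stage in range(n_stages):
        # Freeze everything trained in earlier stages; the embedding is frozen from the start.
        for p in model.parameters():
            p.requires_grad = False

        # Append a fresh, trainable Transformer block.
        model.blocks.append(make_block())

        # (In the deeper stages, LoRA adapters would also be attached to the frozen
        #  blocks here so they can be fine-tuned jointly with the new block.)

        trainable = [p for p in model.parameters() if p.requires_grad]
        optimizer = torch.optim.AdamW(trainable, lr=3e-4)
        train_one_stage(model, optimizer)  # user-supplied training loop for this stage
    return model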

Limitations and Bias

This model is a research prototype and has several limitations:

Not Instruction-Tuned: It is a base model and will not follow instructions or engage in dialogue reliably.

Potential for Hallucinations: Like all LLMs, it can generate factually incorrect or nonsensical text.

Data Bias: Trained primarily on Wikipedia, it will reflect the biases present in that corpus.

Limited Scope: The model was trained on a relatively small dataset (9B tokens) compared to state-of-the-art models. Its performance is intended to be evaluated relative to its own baseline (trainable embeddings) and shallower versions, not against giant commercial models.

πŸ§‘β€πŸ”¬ Citation & Concept

If you use this model or the underlying concepts in your research, please cite our work:

@misc{bochkov2025emergentsemanticstokenembeddings,
      title={Emergent Semantics Beyond Token Embeddings: Transformer LMs with Frozen Visual Unicode Representations}, 
      author={A. Bochkov},
      year={2025},
      eprint={2507.04886},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2507.04886}, 
}

@misc{bochkov2025growingtransformersmodularcomposition,
      title={Growing Transformers: Modular Composition and Layer-wise Expansion on a Frozen Substrate}, 
      author={A. Bochkov},
      year={2025},
      eprint={2507.07129},
      archivePrefix={arXiv},
      primaryClass={cs.LG},
      url={https://arxiv.org/abs/2507.07129}, 
}

This work demonstrates that transformer blocks, not token embeddings, carry the semantic burden in LLMs: a step toward modular, fusable, multilingual LMs.

How to Use

The model can be loaded using the transformers library. Note that trust_remote_code=True is required as it uses a custom model architecture.

from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

# Load the tokenizer and the custom-architecture model (hence trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained('Bochkov/abs-bvv-5')
model = AutoModelForCausalLM.from_pretrained(
    'Bochkov/abs-bvv-5',
    trust_remote_code=True,
    torch_dtype=torch.bfloat16,
).to('cuda')

inputs = tokenizer("Hello, I am a language model ", return_tensors="pt").to('cuda')

# Generate text
outputs = model.generate(
    **inputs, 
    max_new_tokens=100, 
    temperature=0.8, 
    top_k=50, 
    top_p=0.95, 
    do_sample=True
)

print(tokenizer.decode(outputs[0], skip_special_tokens=True))