BitMar 100M Token Model
This model was trained for the BabyLM Challenge on a dataset of exactly 100 million tokens.
Key Features
- BitNet Quantization: 1.58-bit quantized text encoder and decoder for efficient inference (see the sketch after this list).
- Vision Processing: quantized vision transformer operating on pre-computed DINOv2 features.
- Episodic Memory: human brain-inspired memory mechanism for learning without fine-tuning.
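In a 1.58-bit scheme, each weight takes one of three values, -1, 0, or +1 (log2(3) ≈ 1.58 bits). Below is a minimal PyTorch sketch of the absmean ternary quantizer popularized by BitNet b1.58; whether this repository uses exactly this variant is an assumption, and the function name is illustrative.

import torch

def absmean_ternary_quantize(w: torch.Tensor, eps: float = 1e-5):
    # Scale the weight matrix by its mean absolute value (the "absmean" scale),
    # then round every entry to the nearest value in {-1, 0, +1}.
    scale = w.abs().mean().clamp(min=eps)
    w_q = (w / scale).round().clamp(-1, 1)
    # The floating-point scale is kept and re-applied after the ternary matmul.
    return w_q, scale

w = torch.randn(128, 128)
w_q, scale = absmean_ternary_quantize(w)
assert set(w_q.unique().tolist()) <= {-1.0, 0.0, 1.0}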
Training Details
- Total tokens: 100,000,000
- Epochs completed: 10
- Tokens processed: 996,862,184
- Cross-modal similarity: 0.4552 (see the sketch after this list)
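The cross-modal similarity is reported as a single scalar; a plausible reading, used here only as an assumption rather than the repository's actual evaluation code, is the mean cosine similarity between paired text and vision embeddings.

import torch
import torch.nn.functional as F

def mean_crossmodal_similarity(text_emb: torch.Tensor, vision_emb: torch.Tensor) -> float:
    # text_emb and vision_emb hold (batch, dim) pooled embeddings for aligned caption/image pairs.
    text_emb = F.normalize(text_emb, dim=-1)
    vision_emb = F.normalize(vision_emb, dim=-1)
    # Cosine similarity of each aligned pair, averaged over the batch.
    return (text_emb * vision_emb).sum(dim=-1).mean().item()

# Toy call with random 128-dimensional embeddings (the model's hidden size).
print(mean_crossmodal_similarity(torch.randn(8, 128), torch.randn(8, 128)))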
Model Architecture
- Text encoder: 4 layers, hidden size 128
- Vision encoder: pre-computed DINOv2 features compressed to 128 dimensions
- Episodic memory: 32 slots (these sizes are collected in the config sketch after this list)
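For reference, the sizes above can be collected into a single configuration object. The field names below are illustrative and do not necessarily match the keys used in the repository's actual config.

from dataclasses import dataclass

@dataclass
class BitMarConfig:
    # Hypothetical configuration mirroring the architecture described above.
    text_encoder_layers: int = 4       # transformer layers in the text encoder
    hidden_size: int = 128             # text encoder hidden size
    vision_projection_dim: int = 128   # DINOv2 features are compressed to this width
    memory_slots: int = 32             # episodic memory slots

config = BitMarConfig()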
Usage
from transformers import AutoModel, AutoTokenizer

# Load the pretrained BitMar checkpoint and its tokenizer from the Hugging Face Hub.
model = AutoModel.from_pretrained("euhidaman/bitmar-attention-multimodal")
tokenizer = AutoTokenizer.from_pretrained("euhidaman/bitmar-attention-multimodal")
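Continuing from the snippet above, a minimal inference sketch is shown below. It assumes the checkpoint works through the generic AutoModel forward pass; because the architecture is custom, trust_remote_code=True may be needed in the from_pretrained calls, and the exact output structure is not documented here.

import torch

# Hypothetical text-only forward pass; treat the output handling as a sketch.
inputs = tokenizer("A child points at a red balloon.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
print(type(outputs))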
Training Status
- Status: Completed