AHN: Artificial Hippocampus Networks for Efficient Long-Context Modeling
Introduction
Artificial Hippocampus Networks (AHNs) transform lossless memory into fixed-size compressed representations for long-context modeling. Lossless memory (e.g., attention's key-value (KV) cache) stores exact input information but grows with sequence length, making it inefficient for long sequences. In contrast, compressed memory (e.g., an RNN's hidden state) maintains a constant size and a fixed computational cost per input token, but at the cost of information loss. To harness the benefits of both memory types, AHNs continually convert lossless memory that falls outside the sliding attention window into compressed form; an AHN can be instantiated with any RNN-like architecture. The model then integrates both memory types to make predictions across long contexts.
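The mechanism above can be illustrated with a toy sketch: exact keys and values are kept for a bounded sliding window, and each token evicted from the window is folded into a fixed-size recurrent state instead of being discarded. The class name, the additive outer-product update, and the linear read of the compressed state below are illustrative assumptions for clarity, not the paper's actual AHN, Mamba2, or DeltaNet formulations.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

class ToyAHN:
    """Toy combination of lossless sliding-window memory and a
    fixed-size compressed memory (names and update rules are
    illustrative, not the paper's exact method)."""

    def __init__(self, d, window):
        self.window = window
        self.keys, self.vals = [], []   # lossless memory: bounded KV cache
        self.S = np.zeros((d, d))       # compressed memory: fixed size

    def step(self, k, v, q):
        self.keys.append(k)
        self.vals.append(v)
        if len(self.keys) > self.window:
            # Token leaving the window is compressed, not dropped:
            # a simple additive outer-product update stands in for
            # an RNN-like AHN module.
            k_old, v_old = self.keys.pop(0), self.vals.pop(0)
            self.S += np.outer(k_old, v_old)
        # Read: exact attention over the window plus a linear read
        # of the compressed state.
        attn = softmax(np.array([q @ k_i for k_i in self.keys]))
        window_out = sum(a * v_i for a, v_i in zip(attn, self.vals))
        return window_out + q @ self.S
```

Because the window and the state `S` are both fixed size, per-token memory and compute stay constant no matter how long the sequence grows.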
This repository hosts the model weights for AHN. For installation, usage instructions, and further documentation, please visit our GitHub repository.
Method
Model Zoo
| base model | AHN module | #params | checkpoint (AHN only) |
|---|---|---|---|
| Qwen2.5-3B-Instruct | Mamba2 | 119M | 🤗 model |
| Qwen2.5-3B-Instruct | DeltaNet | 118M | 🤗 model |
| Qwen2.5-3B-Instruct | GatedDeltaNet | 130M | 🤗 model |
| Qwen2.5-7B-Instruct | Mamba2 | 186M | 🤗 model |
| Qwen2.5-7B-Instruct | DeltaNet | 185M | 🤗 model |
| Qwen2.5-7B-Instruct | GatedDeltaNet | 213M | 🤗 model |
| Qwen2.5-14B-Instruct | Mamba2 | 514M | 🤗 model |
| Qwen2.5-14B-Instruct | DeltaNet | 511M | 🤗 model |
| Qwen2.5-14B-Instruct | GatedDeltaNet | 610M | 🤗 model |
Evaluation
LV-Eval & InfiniteBench Results
LongBench Results
Contact
- Yunhao Fang: [email protected]
- Weihao Yu (corresponding author): [email protected]
Citation
BibTeX:
@article{fang2025artificial,
  title={Artificial hippocampus networks for efficient long-context modeling},
  author={Fang, Yunhao and Yu, Weihao and Zhong, Shu and Ye, Qinghao and Xiong, Xuehan and Wei, Lai},
  journal={arXiv preprint arXiv:2510.07318},
  year={2025}
}