AHN: Artificial Hippocampus Networks for Efficient Long-Context Modeling

Introduction

Artificial Hippocampus Networks (AHNs) transform lossless memory into fixed-size compressed representations for long-context modeling. Lossless memory (e.g., attention’s key-value (KV) cache) stores exact input information but grows with sequence length, making it inefficient for long sequences. In contrast, compressed memory (e.g., an RNN’s hidden state) maintains a constant size and a fixed computational cost per input token, but at the price of information loss. To harness the benefits of both memory types, AHNs continually convert lossless memory that falls outside the sliding attention window into compressed form. AHNs can be instantiated with any RNN-like architecture. The model then integrates both memory types to make predictions across long contexts.
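The following is a minimal sketch of this memory layout, not the actual implementation: the window length, dimensions, and the simple outer-product update rule are all illustrative stand-ins for a real AHN module (e.g., Mamba2 or DeltaNet).

```python
import torch

# Illustrative sketch: tokens inside the sliding window stay in a lossless KV
# cache; tokens that slide out of the window are folded into a fixed-size
# recurrent state by an RNN-like module (the "AHN").

WINDOW = 3                  # sliding window length (illustrative)
D = 16                      # head dimension (illustrative)

kv_cache = []               # lossless memory: at most WINDOW entries
state = torch.zeros(D, D)   # compressed memory: constant size

def ahn_update(state, k, v):
    # Stand-in recurrent update; a DeltaNet/Mamba2-style rule would go here.
    return state + torch.outer(k, v)

def step(k, v):
    global state
    kv_cache.append((k, v))
    if len(kv_cache) > WINDOW:
        old_k, old_v = kv_cache.pop(0)            # token leaves the window...
        state = ahn_update(state, old_k, old_v)   # ...and is compressed into the AHN state
    # Next-token prediction would attend over kv_cache (lossless memory) and
    # additionally read from `state` (compressed memory).

for t in range(10):
    step(torch.randn(D), torch.randn(D))

print(len(kv_cache), state.shape)  # -> 3 torch.Size([16, 16])
```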

This repository hosts the model weights for AHN. For installation, usage instructions, and further documentation, please visit our GitHub repository.

Method

**(a)** Illustration of the model augmented with Artificial Hippocampus Networks (AHNs). In this example, the sliding window length is 3. When the input sequence length is less than or equal to the window length, the model operates identically to a standard Transformer. For longer sequences, AHNs continually compress the tokens that fall outside the window into a compact memory representation. The model then uses both the lossless information within the window and the compressed memory to generate the next token. **(b)** Self-distillation training framework of AHNs based on an open-weight LLM. During training, the base LLM's weights are frozen, and only the AHNs' parameters are trained.
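A hypothetical sketch of the optimizer setup implied by (b) is shown below; `base_llm` and `ahn_modules` are placeholders rather than the released code, and the module classes are illustrative stand-ins.

```python
import torch

# Self-distillation setup sketch (illustrative): the base LLM's weights are
# frozen and only the AHN parameters receive gradients.

base_llm = torch.nn.Linear(32, 32)       # placeholder for the frozen open-weight LLM
ahn_modules = torch.nn.GRUCell(32, 32)   # placeholder for the trainable AHN modules

for p in base_llm.parameters():
    p.requires_grad_(False)              # freeze the base model

# Only the AHN parameters are passed to the optimizer.
optimizer = torch.optim.AdamW(ahn_modules.parameters(), lr=1e-4)
```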

Model Zoo

| base model | AHN module | #params | checkpoint (AHN only) |
|---|---|---|---|
| Qwen2.5-3B-Instruct | Mamba2 | 119M | 🤗 model |
| Qwen2.5-3B-Instruct | DeltaNet | 118M | 🤗 model |
| Qwen2.5-3B-Instruct | GatedDeltaNet | 130M | 🤗 model |
| Qwen2.5-7B-Instruct | Mamba2 | 186M | 🤗 model |
| Qwen2.5-7B-Instruct | DeltaNet | 185M | 🤗 model |
| Qwen2.5-7B-Instruct | GatedDeltaNet | 213M | 🤗 model |
| Qwen2.5-14B-Instruct | Mamba2 | 514M | 🤗 model |
| Qwen2.5-14B-Instruct | DeltaNet | 511M | 🤗 model |
| Qwen2.5-14B-Instruct | GatedDeltaNet | 610M | 🤗 model |
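As a minimal example, one of the AHN-only checkpoints above can be fetched from the Hugging Face Hub as shown below (the repo id is the one this model card belongs to); see the GitHub repository for how to load and attach the weights to the base model.

```python
from huggingface_hub import snapshot_download

# Download the AHN (DeltaNet) checkpoint for Qwen2.5-3B-Instruct to a local directory.
local_dir = snapshot_download("ByteDance-Seed/AHN-DN-for-Qwen-2.5-Instruct-3B")
print(local_dir)
```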

Evaluation

LV-Eval & InfiniteBench Results

LongBench Results

Contact

Citation

BibTeX:

@article{fang2025artificial,
  title={Artificial hippocampus networks for efficient long-context modeling},
  author={Fang, Yunhao and Yu, Weihao and Zhong, Shu and Ye, Qinghao and Xiong, Xuehan and Wei, Lai},
  journal={arXiv preprint arXiv:2510.07318},
  year={2025}
}