---
dataset_info:
  features:
  - name: text
    dtype: string
  - name: meta
    struct:
    - name: pile_set_name
      dtype: string
  - name: input_ids
    sequence: int32
  - name: index
    dtype: int64
  splits:
  - name: train
    num_bytes: 9180928
    num_examples: 32
  download_size: 3922400
  dataset_size: 9180928
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---

# LADM: Long-context Training Data Selection with Attention-based Dependency Measurement for LLMs

<p align="center">
<a href="https://arxiv.org/abs/2503.02502" target="_blank">Paper</a> • 🤗 <a href="https://huggingface.co/collections/UltraRonin/ladm-68466cbccb652c8d828ca17e" target="_blank">HF Repo</a>
</p>

## Table of Contents

- [Overview](#overview)
- [Preparation](#preparation)
- [Data Selection](#data_selection)
- [Training](#training)
- [Citation](#citation)

<a name="overview"></a>

## Overview

Long-context modeling has drawn increasing attention in the field of Large Language Models (LLMs). Continual training on long-context data has become the de facto method for equipping LLMs with the ability to process long inputs. However, measuring the quality of long-context training data remains an open challenge. To address this issue, we propose a **L**ong-context data selection framework with **A**ttention-based **D**ependency **M**easurement (**LADM**), which can efficiently identify high-quality long-context data from a large-scale, multi-domain pre-training corpus. LADM leverages the retrieval capabilities of the attention mechanism to capture contextual dependencies, ensuring a comprehensive quality measurement of long-context data. Experimental results show that our LADM framework significantly boosts the performance of LLMs on multiple long-context tasks with only 1B tokens for continual training.
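
The CDS (contextual dependency) scores themselves are produced by the scripts in the [Data Selection](#data_selection) section. As a rough, hypothetical illustration of the underlying idea only, the sketch below scores a document by the attention mass its tokens place on distant context; the model name is reused from this card, while `LOCAL_WINDOW`, the layer/head averaging, and the truncation length are our own simplifications rather than the paper's formula.

```python
# Rough illustration only -- NOT the exact CDS computation from the paper (the
# real scoring is done by the launch scripts below). Idea: use a long-context
# LM's attention weights to measure how much each token depends on distant
# context rather than on its local neighborhood.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "UltraRonin/Long-Attn-Calculator"   # any long-context causal LM works
LOCAL_WINDOW = 256                          # illustrative "local" span in tokens

tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(
    MODEL,
    output_attentions=True,
    attn_implementation="eager",  # eager attention is required to return weights
)
model.eval()

def long_range_dependency(text: str) -> float:
    """Average attention mass that tokens place beyond a local window."""
    ids = tokenizer(text, return_tensors="pt",
                    truncation=True, max_length=4096).input_ids  # small cap for the demo
    with torch.no_grad():
        out = model(ids)
    # out.attentions: one (batch, heads, seq, seq) tensor per layer
    attn = torch.stack(out.attentions).mean(dim=(0, 2))[0]      # (seq, seq), layer/head average
    seq_len = attn.size(0)
    query_pos = torch.arange(seq_len).unsqueeze(1)
    key_pos = torch.arange(seq_len).unsqueeze(0)
    distant = (query_pos - key_pos) > LOCAL_WINDOW              # keys far behind each query
    return (attn * distant).sum().item() / seq_len              # per-token long-range mass
```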

<a name="preparation"></a>

## Preparation

### Data Preparation

Please prepare a long-context pre-training dataset truncated to 32k tokens in the following format; see [here](https://huggingface.co/datasets/UltraRonin/pile-LlamaTokenizerFast-32k-truncated-toy) for an example.

```
DatasetDict({
    train: Dataset({
        features: ['text', 'meta', 'input_ids', 'index'],
        num_rows: 32
    })
})
```
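
The dataset name suggests texts tokenized with a `LlamaTokenizerFast` and truncated to 32k tokens. As a hypothetical sketch of how such a dataset could be assembled (the corpus path, tokenizer checkpoint, and the choice to keep only full 32k-token windows are assumptions, not the authors' exact pipeline):

```python
# Hypothetical preprocessing sketch producing the fields the toy dataset exposes
# ('text', 'meta', 'input_ids', 'index'). Adapt the corpus path and tokenizer
# to your own setup.
from datasets import load_dataset
from transformers import AutoTokenizer

MAX_LEN = 32768  # 32k-token context window
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")  # a LlamaTokenizerFast

def tokenize_and_truncate(example):
    ids = tokenizer(example["text"], truncation=True, max_length=MAX_LEN)["input_ids"]
    return {"input_ids": ids}

raw = load_dataset("json", data_files="my_corpus.jsonl", split="train")  # expects 'text' and 'meta'
ds = raw.map(tokenize_and_truncate)
ds = ds.filter(lambda ex: len(ex["input_ids"]) == MAX_LEN)     # assumption: keep only full 32k windows
ds = ds.map(lambda ex, i: {"index": i}, with_indices=True)     # running index referenced by the scorer
ds.save_to_disk("pile-32k-truncated")
```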

### Model Preparation

You can use our [Long Attention Calculator](https://huggingface.co/UltraRonin/Long-Attn-Calculator) or other LLMs with long-context modeling capability.
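
For reference, a minimal loading sketch (the dtype and device settings are illustrative, not requirements of the released checkpoint):

```python
# Minimal loading sketch for a long-context scoring model; dtype/device choices
# are illustrative assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "UltraRonin/Long-Attn-Calculator"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(
    name,
    torch_dtype=torch.bfloat16,  # half precision keeps 32k-token contexts tractable
    device_map="auto",           # shard across available GPUs (requires accelerate)
)
model.eval()
```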

<a name="data_selection"></a>

## Data Selection

Running the following script on our [toy dataset](https://huggingface.co/datasets/UltraRonin/pile-LlamaTokenizerFast-32k-truncated-toy) should produce CDS scores similar to those in [./toy_scores.json](https://github.com/ZNLP/LADM/blob/main/toy_scores.json).

```bash
bash launch_toy.sh
```

For full usage:

```bash
bash launch.sh
```
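
After scoring, selection reduces to ranking and filtering. A hypothetical post-processing sketch, assuming `toy_scores.json` maps sample indices to CDS scores (the actual schema may differ) and using an illustrative keep ratio:

```python
# Hypothetical selection step: keep the highest-CDS samples for continual training.
import json
from datasets import load_dataset

with open("toy_scores.json") as f:
    scores = json.load(f)  # assumed schema: {"0": 0.42, "1": 0.87, ...}

keep_ratio = 0.5  # illustrative: keep the top half by CDS score
ranked = sorted(scores, key=scores.get, reverse=True)
keep = {int(i) for i in ranked[: int(len(ranked) * keep_ratio)]}

ds = load_dataset("UltraRonin/pile-LlamaTokenizerFast-32k-truncated-toy", split="train")
selected = ds.filter(lambda ex: ex["index"] in keep)
selected.save_to_disk("selected-long-context-data")
```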

<a name="training"></a>

## Training

Our training mainly follows the [Hugging Face Trainer](https://github.com/huggingface/transformers/tree/main/examples/pytorch/language-modeling) language-modeling examples. Please refer to that repo for more details.
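
As a minimal continual-training sketch with the Trainer API (base checkpoint, hyperparameters, and the `selected-long-context-data` path are illustrative assumptions, not the paper's training recipe):

```python
# Minimal continual-training sketch on the selected long-context data.
from datasets import load_from_disk
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

base = "meta-llama/Llama-2-7b-hf"  # illustrative base LLM
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token  # Llama-style tokenizers ship without a pad token
model = AutoModelForCausalLM.from_pretrained(base)

train_ds = load_from_disk("selected-long-context-data").remove_columns(["text", "meta", "index"])
collator = DataCollatorForLanguageModeling(tokenizer, mlm=False)  # causal-LM objective

args = TrainingArguments(
    output_dir="ladm-continual",
    per_device_train_batch_size=1,
    gradient_accumulation_steps=16,
    learning_rate=2e-5,
    num_train_epochs=1,
    bf16=True,
)
Trainer(model=model, args=args, train_dataset=train_ds,
        data_collator=collator, tokenizer=tokenizer).train()
```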

<a name="citation"></a>

## Citation

If you find this repo useful for your research, please consider citing the paper:

```
@article{chen2025ladm,
  title={LADM: Long-context Training Data Selection with Attention-based Dependency Measurement for LLMs},
  author={Chen, Jianghao and Wu, Junhong and Xu, Yangyifan and Zhang, Jiajun},
  journal={arXiv preprint arXiv:2503.02502},
  year={2025}
}
```