---
dataset_info:
  features:
    - name: text
      dtype: string
    - name: meta
      struct:
        - name: pile_set_name
          dtype: string
    - name: input_ids
      sequence: int32
    - name: index
      dtype: int64
  splits:
    - name: train
      num_bytes: 9180928
      num_examples: 32
  download_size: 3922400
  dataset_size: 9180928
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
---

# LADM: Long-context Training Data Selection with Attention-based Dependency Measurement for LLMs

📖 Paper • 🤗 HF Repo

## 🔍 Table of Contents

- 🌐 Overview
- 📚 Preparation
- ⏳ Data Selection
- 📈 Training
- 📝 Citation

## 🌐 Overview

Long-context modeling has drawn increasing attention in the area of Large Language Models (LLMs). Continual training on long-context data has become the de facto method for equipping LLMs with the ability to process long inputs. However, measuring the quality of long-context training data remains an open challenge. To address this issue, we propose a Long-context data selection framework with Attention-based Dependency Measurement (LADM), which can efficiently identify high-quality long-context data from a large-scale, multi-domain pre-training corpus. LADM leverages the retrieval capabilities of the attention mechanism to capture contextual dependencies, ensuring a comprehensive quality measurement of long-context data. Experimental results show that our LADM framework significantly boosts the performance of LLMs on multiple long-context tasks with only 1B tokens for continual training.
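
To make the core idea concrete, here is a minimal, heavily simplified sketch of measuring long-range dependency from attention weights. This is not the paper's exact CDS computation; the `min_distance` threshold, the toy tensor shapes, and the aggregation (a plain mean over layers, heads, and query positions) are illustrative assumptions only.

```python
# Illustrative sketch only, NOT the paper's CDS formulation: given causal attention
# maps, measure how much attention mass queries place on distant keys.
import torch

def long_range_dependency_score(attentions: torch.Tensor, min_distance: int = 1024) -> float:
    """attentions: (layers, heads, seq_len, seq_len) causal attention weights.
    Returns the mean attention mass assigned to keys at least `min_distance`
    positions before the query; higher values suggest stronger long-range
    contextual dependency in the document."""
    seq_len = attentions.shape[-1]
    pos = torch.arange(seq_len)
    distance = pos.unsqueeze(1) - pos.unsqueeze(0)        # query index - key index
    far_mask = (distance >= min_distance).to(attentions.dtype)
    far_mass = (attentions * far_mask).sum(dim=-1)        # per layer / head / query
    return far_mass.mean().item()

# Toy usage with random causal "attention" (2 layers, 4 heads, 1024 tokens):
raw = torch.rand(2, 4, 1024, 1024).tril()
toy_attn = raw / raw.sum(dim=-1, keepdim=True)            # normalize each query row
print(long_range_dependency_score(toy_attn, min_distance=256))
```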

## 📚 Preparation

### Data Preparation

Please prepare a long-context pre-training dataset truncated to 32K tokens in the following format; see here for examples.

```
DatasetDict({
    train: Dataset({
        features: ['text', 'meta', 'input_ids', 'index'],
        num_rows: 32
    })
})
```
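
A minimal sketch of producing data in this format is given below; the tokenizer checkpoint, source documents, and `pile_set_name` values are placeholders rather than the setup used in the paper.

```python
# Hypothetical preparation sketch: tokenize raw documents, truncate to 32K tokens,
# and store them with the schema shown above. All names here are placeholders.
from datasets import Dataset, DatasetDict
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("togethercomputer/LLaMA-2-7B-32K")  # placeholder
MAX_LEN = 32 * 1024  # truncate every document to 32K tokens

raw_docs = [
    {"text": "A very long document ...", "meta": {"pile_set_name": "Pile-CC"}},
    # ... more documents ...
]

def build_example(doc, index):
    ids = tokenizer(doc["text"], truncation=True, max_length=MAX_LEN)["input_ids"]
    return {"text": doc["text"], "meta": doc["meta"], "input_ids": ids, "index": index}

dataset = DatasetDict({
    "train": Dataset.from_list([build_example(d, i) for i, d in enumerate(raw_docs)])
})
dataset.save_to_disk("long_context_32k")  # or dataset.push_to_hub(...)
```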

### Model Preparation

You can use our Long Attention Calculator or other LLMs with long-context modeling capability.
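
Whichever model you pick, it needs to expose per-token attention weights over a long context. A hedged loading sketch with a placeholder long-context checkpoint (not the official Long Attention Calculator weights):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "togethercomputer/LLaMA-2-7B-32K"  # placeholder long-context checkpoint

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.bfloat16,
    attn_implementation="eager",  # eager attention so output_attentions=True returns weights
)
model.eval()

# attn = model(**tokenizer(long_text, return_tensors="pt"), output_attentions=True).attentions
```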

## ⏳ Data Selection

If you run the following script on our toy dataset, you will get CDS scores similar to those in ./toy_scores.json.

```bash
bash launch_toy.sh
```

For full usage:

```bash
bash launch.sh
```
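
Once scores are available, selection reduces to ranking. Below is a hedged sketch that assumes toy_scores.json maps example indices to CDS scores; the file layout and the dataset path are assumptions and may differ from what the scripts actually emit.

```python
import json
from datasets import load_from_disk

# Assumed layout: {"0": 0.31, "1": 0.27, ...}; adjust parsing if the real file differs.
with open("toy_scores.json") as f:
    scores = {int(k): v for k, v in json.load(f).items()}

dataset = load_from_disk("long_context_32k")["train"]  # path from the preparation sketch
budget = 16                                            # keep the top-N scoring examples
keep = set(sorted(scores, key=scores.get, reverse=True)[:budget])
selected = dataset.filter(lambda ex: ex["index"] in keep)
selected.save_to_disk("long_context_32k_selected")
```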

## 📈 Training

Our training mainly follows the Hugging Face Trainer code base. Please refer to that repository for more details.
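
For reference, a minimal continual pre-training sketch with the Trainer is shown below; the checkpoint, hyperparameters, and dataset path are illustrative assumptions, not the recipe from the paper.

```python
# Minimal continual pre-training sketch with the Hugging Face Trainer.
# Checkpoint, hyperparameters, and paths are placeholders, not the paper's setup.
from datasets import load_from_disk
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

model_name = "togethercomputer/LLaMA-2-7B-32K"          # placeholder
tokenizer = AutoTokenizer.from_pretrained(model_name)
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token           # Llama-style tokenizers lack a pad token
model = AutoModelForCausalLM.from_pretrained(model_name)

dataset = load_from_disk("long_context_32k_selected")   # LADM-selected data
collator = DataCollatorForLanguageModeling(tokenizer, mlm=False)  # causal LM objective

args = TrainingArguments(
    output_dir="ladm-continual",
    per_device_train_batch_size=1,
    gradient_accumulation_steps=16,
    learning_rate=2e-5,
    num_train_epochs=1,
    bf16=True,
    logging_steps=10,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=dataset.remove_columns(["text", "meta", "index"]),
    data_collator=collator,
)
trainer.train()
```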

## 📝 Citation

If you find this repo useful for your research, please consider citing the paper:

```bibtex
@article{chen2025ladm,
  title={LADM: Long-context Training Data Selection with Attention-based Dependency Measurement for LLMs},
  author={Chen, Jianghao and Wu, Junhong and Xu, Yangyifan and Zhang, Jiajun},
  journal={arXiv preprint arXiv:2503.02502},
  year={2025}
}
```