---
dataset_info:
  features:
  - name: text
    dtype: string
  - name: meta
    struct:
    - name: pile_set_name
      dtype: string
  - name: input_ids
    sequence: int32
  - name: index
    dtype: int64
  splits:
  - name: train
    num_bytes: 9180928
    num_examples: 32
  download_size: 3922400
  dataset_size: 9180928
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---

# LADM: Long-context Training Data Selection with Attention-based Dependency Measurement for LLMs
## 🔍 Table of Contents

- [🌐 Overview](#overview)
- [📚 Preparation](#preparation)
- [⏳ Data Selection](#data_selection)
- [📈 Training](#training)
- [📝 Citation](#citation)

## 🌐 Overview

Long-context modeling has drawn increasing attention in the area of Large Language Models (LLMs). Continual training with long-context data has become the de facto method to equip LLMs with the ability to process long inputs. However, measuring the quality of long-context training data remains an open challenge. To address this issue, we propose a **L**ong-context data selection framework with **A**ttention-based **D**ependency **M**easurement (**LADM**), which can efficiently identify high-quality long-context data from a large-scale, multi-domain pre-training corpus. LADM leverages the retrieval capabilities of the attention mechanism to capture contextual dependencies, ensuring a comprehensive quality measurement of long-context data. Experimental results show that our LADM framework significantly boosts the performance of LLMs on multiple long-context tasks with only 1B tokens for continual training.

## 📚 Preparation

### Data Preparation

Please prepare a long-context pre-training dataset truncated to 32k tokens in the following format; see [here](https://huggingface.co/datasets/UltraRonin/pile-LlamaTokenizerFast-32k-truncated-toy) for examples, and the appendix at the end of this README for an illustrative construction sketch.

```
DatasetDict({
    train: Dataset({
        features: ['text', 'meta', 'input_ids', 'index'],
        num_rows: 32
    })
})
```

### Model Preparation

You can use our [Long Attention Calculator](https://huggingface.co/UltraRonin/Long-Attn-Calculator) or other LLMs with long-context modeling capability.

## ⏳ Data Selection

If you run the following script on our [toy dataset](https://huggingface.co/datasets/UltraRonin/pile-LlamaTokenizerFast-32k-truncated-toy), you will get CDS scores similar to those in [./toy_scores.json](https://github.com/ZNLP/LADM/blob/main/toy_scores.json).

```bash
bash launch_toy.sh
```

For full usage:

```bash
bash launch.sh
```

## 📈 Training

Our training mainly follows the [Huggingface Trainer](https://github.com/huggingface/transformers/tree/main/examples/pytorch/language-modeling) code base. Please refer to that repo for more details; a minimal illustrative setup is sketched in the appendix at the end of this README.

## 📝 Citation

If you find this repo useful for your research, please consider citing the paper:

```
@article{chen2025ladm,
  title={LADM: Long-context Training Data Selection with Attention-based Dependency Measurement for LLMs},
  author={Chen, Jianghao and Wu, Junhong and Xu, Yangyifan and Zhang, Jiajun},
  journal={arXiv preprint arXiv:2503.02502},
  year={2025}
}
```
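## 🧩 Appendix: Dataset Construction Sketch

The following is a minimal, non-authoritative sketch of one way to assemble a dataset with the schema shown in the Data Preparation section (`text`, `meta`, `input_ids`, `index`), truncated to 32k tokens. It assumes the Hugging Face `datasets` and `transformers` libraries; the tokenizer checkpoint and output path are placeholders and are not prescribed by this repo.

```python
# Illustrative only: build a toy dataset with the expected schema
# ['text', 'meta', 'input_ids', 'index'], truncated to 32k tokens.
# The tokenizer checkpoint and output path are assumptions, not repo settings.
from datasets import Dataset, DatasetDict
from transformers import AutoTokenizer

MAX_LEN = 32 * 1024  # 32k-token truncation, matching the toy dataset

# Assumed Llama-family tokenizer (the toy dataset name suggests LlamaTokenizerFast).
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")

# Replace with your own long documents; `pile_set_name` mirrors the toy dataset's meta field.
raw_docs = [
    {"text": "A very long document ...", "meta": {"pile_set_name": "Pile-CC"}},
]

records = []
for idx, doc in enumerate(raw_docs):
    input_ids = tokenizer(doc["text"], truncation=True, max_length=MAX_LEN)["input_ids"]
    records.append(
        {"text": doc["text"], "meta": doc["meta"], "input_ids": input_ids, "index": idx}
    )

dataset = DatasetDict({"train": Dataset.from_list(records)})
dataset.save_to_disk("pile-32k-truncated-toy")  # illustrative output path
print(dataset)
```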
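## 🧪 Appendix: Continual Training Sketch

The Training section defers to the Hugging Face language-modeling examples. Below is a rough sketch of what continual training on pre-tokenized 32k data could look like with the plain `Trainer` API. The base checkpoint, hyperparameters, and output path are placeholders; the paper's actual recipe (distributed setup, memory-efficient attention, learning-rate schedule) is not reproduced here.

```python
# Illustrative only: continual causal-LM training on pre-tokenized 32k sequences.
# Model checkpoint, hyperparameters, and output path are placeholders.
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

model_name = "meta-llama/Llama-2-7b-hf"  # assumed base model
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # Llama tokenizers ship without a pad token
model = AutoModelForCausalLM.from_pretrained(model_name)

# The toy dataset already stores 32k-token `input_ids`, so no re-tokenization is needed.
dataset = load_dataset("UltraRonin/pile-LlamaTokenizerFast-32k-truncated-toy", split="train")
dataset = dataset.remove_columns([c for c in dataset.column_names if c != "input_ids"])

collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="./ladm-continual-train",  # illustrative path
        per_device_train_batch_size=1,
        gradient_accumulation_steps=8,
        learning_rate=2e-5,
        num_train_epochs=1,
        bf16=True,
    ),
    train_dataset=dataset,
    data_collator=collator,
)
trainer.train()
```

Note that training a 7B model on 32k-token sequences generally requires memory-efficient attention and model or sequence parallelism beyond what this sketch shows.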