LADM: Long-context Training Data Selection with Attention-based Dependency Measurement for LLMs
Overview
Long-context modeling has drawn increasing attention in the area of Large Language Models (LLMs). Continual training on long-context data has become the de facto method for equipping LLMs with the ability to process long inputs. However, measuring the quality of long-context training data remains an open challenge. To address this issue, we propose a Long-context data selection framework with Attention-based Dependency Measurement (LADM), which can efficiently identify high-quality long-context data from a large-scale, multi-domain pre-training corpus. LADM leverages the retrieval capabilities of the attention mechanism to capture contextual dependencies, ensuring a comprehensive quality measurement of long-context data. Experimental results show that our LADM framework significantly boosts the performance of LLMs on multiple long-context tasks with only 1B tokens for continual training.
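As a rough illustration of the idea (not the paper's exact formulation), an attention-based dependency measurement can be sketched as the fraction of attention mass each token places on distant context; the function name, window size, and scoring rule below are all assumptions for illustration:

```python
import numpy as np

def long_range_dependency_score(attn: np.ndarray, local_window: int = 2) -> float:
    """Hypothetical simplification of an attention-based dependency score.

    attn: (seq_len, seq_len) causal attention matrix whose rows sum to 1.
    Returns the average attention mass falling outside a small local
    window; higher values suggest stronger long-range dependencies.
    """
    seq_len = attn.shape[0]
    rows, cols = np.indices((seq_len, seq_len))
    distant = (rows - cols) > local_window  # attention to far-away tokens
    return float(attn[distant].sum() / seq_len)

# Toy example: uniform causal attention over all previous tokens.
n = 8
attn = np.tril(np.ones((n, n)))
attn /= attn.sum(axis=1, keepdims=True)
score = long_range_dependency_score(attn, local_window=2)
```

A document whose tokens mostly attend to their immediate neighbors would score near zero under this sketch, while one with genuine long-range references scores higher.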
Preparation
Data Preparation
Please prepare a long-context pre-training dataset truncated to 32k tokens in the following format; see here for examples.
DatasetDict({
train: Dataset({
features: ['text', 'meta', 'input_ids', 'index'],
num_rows: 32
})
})
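A minimal sketch of rows matching that schema, using only the standard library; the example text, token ids, and file name are illustrative assumptions, and in the actual pipeline `input_ids` comes from tokenizing `text` with the model tokenizer (truncated to 32k tokens) before the rows are stored as a Hugging Face DatasetDict:

```python
import json

# Hypothetical rows in the expected schema (values are placeholders).
rows = [
    {
        "text": "example long document ...",
        "meta": {"pile_set_name": "ArXiv"},
        "input_ids": [1, 11474, 13],  # illustrative token ids, not real output
        "index": 0,
    },
]

# Write as JSON lines for inspection; file name is an assumption.
with open("toy.jsonl", "w") as f:
    for row in rows:
        f.write(json.dumps(row) + "\n")
```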
Model Preparation
You can use our Long Attention Calculator or any other LLM with long-context modeling capability.
Data Selection
Running the following script on our toy dataset should produce CDS scores similar to those in ./toy_scores.json:
bash launch_toy.sh
For full usage:
bash launch.sh
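Once scores are produced, selection reduces to keeping the highest-scoring samples. A hedged sketch of that post-processing step follows; the schema of the scores file (sample index mapped to CDS score) is an assumption, so check the actual output of launch_toy.sh:

```python
import json  # the real scores would come from json.load(open("toy_scores.json"))

# Stand-in scores keyed by sample index (hypothetical values).
scores = {"0": 0.91, "1": 0.42, "2": 0.77}

# Keep the top-k samples by CDS score.
top_k = 2
selected = sorted(scores, key=scores.get, reverse=True)[:top_k]
```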
Training
Our training mainly follows the Huggingface Trainer code base. Please refer to that repo for more details.
Citation
If you find this repo useful for your research, please consider citing the paper:
@article{chen2025ladm,
title={LADM: Long-context Training Data Selection with Attention-based Dependency Measurement for LLMs},
author={Chen, Jianghao and Wu, Junhong and Xu, Yangyifan and Zhang, Jiajun},
journal={arXiv preprint arXiv:2503.02502},
year={2025}
}