---
dataset_info:
  features:
  - name: text
    dtype: string
  - name: meta
    struct:
    - name: pile_set_name
      dtype: string
  - name: input_ids
    sequence: int32
  - name: index
    dtype: int64
  splits:
  - name: train
    num_bytes: 9180928
    num_examples: 32
  download_size: 3922400
  dataset_size: 9180928
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---
# LADM: Long-context Training Data Selection with Attention-based Dependency Measurement for LLMs
<p align="center">
    πŸ“– <a href="https://arxiv.org/abs/2503.02502" target="_blank">Paper</a> β€’ πŸ€— <a href="https://huggingface.co/collections/UltraRonin/ladm-68466cbccb652c8d828ca17e" target="_blank">HF Repo</a>
</p>

## πŸ” Table of Contents
- [🌐 Overview](#overview)
- [πŸ“š Preparation](#preparation)
- [⏳ Data Selection](#data_selection)
- [πŸ“ˆ Training](#training)
- [πŸ“ Citation](#citation)


<a name="overview"></a>

## 🌐 Overview

Long-context modeling has drawn increasing attention in the area of Large Language Models (LLMs). Continual training with long-context data has become the de facto method for equipping LLMs with the ability to process long inputs. However, measuring the quality of long-context training data remains an open challenge. To address this issue, we propose a **L**ong-context data selection framework with **A**ttention-based **D**ependency **M**easurement (**LADM**), which can efficiently identify high-quality long-context data from a large-scale, multi-domain pre-training corpus. LADM leverages the retrieval capabilities of the attention mechanism to capture contextual dependencies, ensuring a comprehensive quality measurement of long-context data. Experimental results show that our LADM framework significantly boosts the performance of LLMs on multiple long-context tasks with only 1B tokens for continual training.


<a name="preparation"></a>

## πŸ“š Preparation

### Data Preparation
Please prepare a long-context pre-training dataset truncated to 32K tokens in the following format; see [here](https://huggingface.co/datasets/UltraRonin/pile-LlamaTokenizerFast-32k-truncated-toy) for an example.
```
DatasetDict({
    train: Dataset({
        features: ['text', 'meta', 'input_ids', 'index'],
        num_rows: 32
    })
})
```
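
To sanity-check that your own data matches this layout, a minimal sketch using the `datasets` library (not part of the official LADM scripts; the upstream tokenization and truncation pipeline is up to you) could look like the following:

```python
# Quick format check: load the toy dataset and confirm the expected columns.
from datasets import load_dataset

ds = load_dataset("UltraRonin/pile-LlamaTokenizerFast-32k-truncated-toy", split="train")

print(ds)                        # features: ['text', 'meta', 'input_ids', 'index'], 32 rows
print(ds.features)               # 'input_ids' is a sequence of int32 token ids
print(len(ds[0]["input_ids"]))   # sequence length after truncation to 32K tokens
```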

### Model Preparation
You can use our [Long Attention Calculator](https://huggingface.co/UltraRonin/Long-Attn-Calculator) or any other LLM with long-context modeling capability.
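
As a hedged sketch, assuming the calculator loads as a standard causal-LM checkpoint with `transformers` (consult its model card for the recommended settings), loading it might look like this:

```python
# Assumption: the Long Attention Calculator is a standard causal-LM checkpoint.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "UltraRonin/Long-Attn-Calculator"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.bfloat16,     # 32K-token inputs are memory-heavy
    attn_implementation="eager",    # attention maps require eager attention
    output_attentions=True,         # LADM measures dependencies from attention weights
)
model.eval()
```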


<a name="data_selection"></a>

## ⏳ Data Selection

If you run the following script on our [toy dataset](https://huggingface.co/datasets/UltraRonin/pile-LlamaTokenizerFast-32k-truncated-toy), you should obtain CDS scores similar to those in [./toy_scores.json](https://github.com/ZNLP/LADM/blob/main/toy_scores.json).

```bash
bash launch_toy.sh
```

For full usage:
```bash
bash launch.sh
```
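
The exact schema of the score file is defined by the launch scripts; purely as an illustration, assuming `toy_scores.json` maps each sample's `index` to its Contextual Dependency Score (CDS), a top-k selection could be sketched as:

```python
# Illustration only: the real schema of toy_scores.json is set by the LADM
# scripts. Here we ASSUME it maps a sample index (as a string) to a CDS score.
import json

with open("toy_scores.json") as f:
    cds_scores = json.load(f)       # assumed: {"0": 0.12, "1": 0.47, ...}

k = 8                               # number of samples to keep (arbitrary here)
top_k = sorted(cds_scores.items(), key=lambda kv: kv[1], reverse=True)[:k]
selected_indices = [int(idx) for idx, _ in top_k]
print("Selected sample indices:", selected_indices)
```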

<a name="training"></a>

## πŸ“ˆ Training

Our training mainly follows the [Hugging Face Trainer](https://github.com/huggingface/transformers/tree/main/examples/pytorch/language-modeling) language-modeling example. Please refer to that repository for more details.
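
For orientation only, a minimal continual-training sketch with the Hugging Face `Trainer` might look like the following; the base model and hyperparameters are placeholders, not the settings used in the paper:

```python
# Minimal continual-training sketch, assuming the selected data already
# contains 32K-token `input_ids` (as in the toy dataset). Placeholders only.
import torch
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

base_model = "meta-llama/Llama-2-7b-hf"   # placeholder: use your own base LLM
tokenizer = AutoTokenizer.from_pretrained(base_model)
tokenizer.pad_token = tokenizer.pad_token or tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(base_model, torch_dtype=torch.bfloat16)

train_ds = load_dataset(
    "UltraRonin/pile-LlamaTokenizerFast-32k-truncated-toy", split="train"
).remove_columns(["text", "meta", "index"])   # the Trainer only needs input_ids

args = TrainingArguments(
    output_dir="./ladm-continual",
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,
    learning_rate=2e-5,
    num_train_epochs=1,
    bf16=True,
    logging_steps=10,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=train_ds,
    # Causal-LM collator: copies input_ids to labels (mlm=False).
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```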


<a name="citation"></a>

## πŸ“ Citation

If you find this repo useful for your research, please consider citing the paper:
```bibtex
@article{chen2025ladm,
  title={LADM: Long-context Training Data Selection with Attention-based Dependency Measurement for LLMs},
  author={Chen, Jianghao and Wu, Junhong and Xu, Yangyifan and Zhang, Jiajun},
  journal={arXiv preprint arXiv:2503.02502},
  year={2025}
}
```