---
datasets:
- togethercomputer/RedPajama-Data-1T
language:
- en
library_name: transformers
license: apache-2.0
pipeline_tag: text-generation
---

## PDS-470M

[paper](https://arxiv.org/abs/2410.07064) | [code](https://github.com/microsoft/LMOps/tree/main/data_selection) | [project page](https://github.com/microsoft/LMOps/tree/main/data_selection)

**PDS-470M** is a 470M-parameter Mistral-architecture model **pretrained from scratch** using the PDS framework on data selected from the CC split of [RedPajama](https://github.com/togethercomputer/RedPajama-Data).

The PDS framework is based on [Pontryagin's maximum principle](https://en.wikipedia.org/wiki/Pontryagin%27s_maximum_principle) for optimal pre-training data selection, offering strong theoretical support and scalability for training large language models.

Please refer to our [paper](https://arxiv.org/abs/2410.07064) for more details.
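### Usage

The model can be loaded with the standard `transformers` text-generation API. The snippet below is a minimal sketch; the repository id `Data-Selection/PDS-470M` is assumed (mirroring the baseline link below) and may need to be adjusted to this model's actual path.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed repository id; replace with the actual model path if it differs.
model_name = "Data-Selection/PDS-470M"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.float16)

# Generate a short continuation from a prompt.
inputs = tokenizer("Data selection for language model pre-training", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50, do_sample=True, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```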

### Overview of the theory

<p align='left'>
    <img src="https://cdn-uploads.huggingface.co/production/uploads/624ac662102fcdff87be51b9/Hdw83Vsb305GRlsqB7c34.png" width="700">
</p>

### Overview of the PDS framework

<p align='left'>
    <img src="https://cdn-uploads.huggingface.co/production/uploads/624ac662102fcdff87be51b9/YPwluLyZGK7DACH1WqDUN.png" width="700">
</p>

### Evaluation

PDS-selected data improves the performance of language models pre-trained from scratch and reduces pre-training computation. The improvements persist as model size scales up.

<p align='left'>
    <img src="https://cdn-uploads.huggingface.co/production/uploads/624ac662102fcdff87be51b9/6undIr37d10qD73TDiPDK.png" width="600">
</p>

### Baseline

[Conventional Pre-training](https://huggingface.co/Data-Selection/BSL-470M)

### Citation

```bibtex
@article{gu2024data,
  title={Data Selection via Optimal Control for Language Models},
  author={Gu, Yuxian and Dong, Li and Wang, Hongning and Hao, Yaru and Dong, Qingxiu and Wei, Furu and Huang, Minlie},
  journal={arXiv preprint arXiv:2410.07064},
  year={2024}
}
```