RichardErkhov committed on
Commit ae702ec · verified · 1 Parent(s): 3e4f0f9

uploaded readme

Files changed (1): README.md (+122, -0)

README.md ADDED:
Quantization made by Richard Erkhov.

[Github](https://github.com/RichardErkhov)

[Discord](https://discord.gg/pvy7H8DZMG)

[Request more models](https://github.com/RichardErkhov/quant_request)

InstructLM-1.3B - AWQ
- Model creator: https://huggingface.co/instruction-pretrain/
- Original model: https://huggingface.co/instruction-pretrain/InstructLM-1.3B/

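As a quick, hedged usage sketch (not part of the original model card), an AWQ checkpoint like this one can typically be loaded with 🤗 Transformers once `autoawq` and `accelerate` are installed; the repository id below is a placeholder, so substitute this repo's actual Hub path:

```python
# Minimal sketch, not from the original card: load an AWQ-quantized causal LM with transformers.
# Assumes `pip install transformers accelerate autoawq`. REPO_ID is a placeholder -- replace it
# with this repository's actual id on the Hugging Face Hub.
from transformers import AutoModelForCausalLM, AutoTokenizer

REPO_ID = "<this-repo-id>"  # placeholder for the AWQ repo you are viewing

tokenizer = AutoTokenizer.from_pretrained(REPO_ID)
model = AutoModelForCausalLM.from_pretrained(REPO_ID, device_map="auto")  # AWQ config is read from config.json

prompt = "Instruction pre-training is"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
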
Original model description:
---
license: apache-2.0
datasets:
- tiiuae/falcon-refinedweb
- instruction-pretrain/ft-instruction-synthesizer-collection
- instruction-pretrain/general-instruction-augmented-corpora
language:
- en
---
# Instruction Pre-Training: Language Models are Supervised Multitask Learners (EMNLP 2024)
This repo contains the **general models pre-trained from scratch** (on 100B tokens) in our paper [Instruction Pre-Training: Language Models are Supervised Multitask Learners](https://huggingface.co/papers/2406.14491).

We explore supervised multitask pre-training by proposing ***Instruction Pre-Training***, a framework that scalably augments massive raw corpora with instruction-response pairs to pre-train language models. The instruction-response pairs are generated by an efficient instruction synthesizer built on open-source models. In our experiments, we synthesize 200M instruction-response pairs covering 40+ task categories to verify the effectiveness of *Instruction Pre-Training*. *Instruction Pre-Training* outperforms *Vanilla Pre-training* in both general pre-training from scratch and domain-adaptive continual pre-training. **In pre-training from scratch, *Instruction Pre-Training* not only improves pre-trained base models but also benefits more from further instruction tuning.** In continual pre-training, *Instruction Pre-Training* enables Llama3-8B to be comparable to or even outperform Llama3-70B.

<p align='center'>
    <img src="https://cdn-uploads.huggingface.co/production/uploads/66711d2ee12fa6cc5f5dfc89/vRdsFIVQptbNaGiZ18Lih.png" width="400">
</p>

### [2024/11/29] 🤗 Introducing the multimodal version of the instruction synthesizer at [AdaMLLM](https://huggingface.co/papers/2411.19930), for synthesizing visual instruction tasks 🤗

**************************** **Updates** ****************************
* 2024/11/30: Released the multimodal version of the instruction synthesizer: [Visual Instruction Synthesizer](https://huggingface.co/AdaptLLM/Adapt-MLLM-to-Domains)
* 2024/9/20: Our paper has been accepted by the EMNLP 2024 main conference 🎉
* 2024/9/11: Updated [FAQ on continual pre-training from Llama3](https://huggingface.co/instruction-pretrain/instruction-synthesizer)
* 2024/8/29: Updated [guidelines](https://huggingface.co/instruction-pretrain/medicine-Llama3-8B) on evaluating any 🤗 Hugging Face model on the domain-specific tasks
* 2024/7/31: Updated pre-training suggestions in the `Advanced Usage` section of the [instruction-synthesizer](https://huggingface.co/instruction-pretrain/instruction-synthesizer)
* 2024/7/15: We scaled up the pre-trained tokens from 100B to 250B, with the number of synthesized instruction-response pairs reaching 500M. The performance trend on downstream tasks throughout the pre-training process:
  <p align='left'>
    <img src="https://cdn-uploads.huggingface.co/production/uploads/66711d2ee12fa6cc5f5dfc89/0okCfRkC6uALTfuNxt0Fa.png" width="500">
  </p>
* 2024/6/21: Released the [paper](https://huggingface.co/papers/2406.14491), [code](https://github.com/microsoft/LMOps), and [resources](https://huggingface.co/instruction-pretrain)

## Resources
**🤗 We share our data and models with example usages, feel free to open any discussions at [this page](https://huggingface.co/papers/2406.14491)! 🤗**

- Thanks to the demo [davanstrien/instruction-synthesizer](https://huggingface.co/spaces/davanstrien/instruction-synthesizer) for implementing our approach
- Context-Based Instruction Synthesizer: [instruction-synthesizer](https://huggingface.co/instruction-pretrain/instruction-synthesizer)
- Fine-Tuning Data for the Synthesizer: [ft-instruction-synthesizer-collection](https://huggingface.co/datasets/instruction-pretrain/ft-instruction-synthesizer-collection)
- General Models Pre-Trained from Scratch (on 100B tokens):
  - [InstructLM-500M](https://huggingface.co/instruction-pretrain/InstructLM-500M)
  - [InstructLM-1.3B](https://huggingface.co/instruction-pretrain/InstructLM-1.3B)
- Domain-Specific Models Pre-Trained from Llama3-8B:
  - [Finance-Llama3-8B](https://huggingface.co/instruction-pretrain/finance-Llama3-8B)
  - [Biomedicine-Llama3-8B](https://huggingface.co/instruction-pretrain/medicine-Llama3-8B)
- General Instruction-Augmented Corpora: [general-instruction-augmented-corpora](https://huggingface.co/datasets/instruction-pretrain/general-instruction-augmented-corpora)
- Domain-Specific Instruction-Augmented Corpora (no finance data to avoid ethical issues): [medicine-instruction-augmented-corpora](https://huggingface.co/datasets/instruction-pretrain/medicine-instruction-augmented-corpora)

## General Pre-Training From Scratch
We augment the [RefinedWeb corpora](https://huggingface.co/datasets/tiiuae/falcon-refinedweb) with instruction-response pairs generated by our [context-based instruction synthesizer](https://huggingface.co/instruction-pretrain/instruction-synthesizer) to pre-train general language models from scratch.

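To make the idea concrete, here is an illustrative toy sketch only (not the authors' synthesizer, prompt template, or data pipeline): each raw document is paired with instruction-response pairs grounded in that document, and the two are packed into a single pre-training example.

```python
# Toy illustration only -- the real pairs come from the context-based instruction synthesizer,
# and the actual packing template used by the authors may differ.
raw_text = (
    "The Falcon RefinedWeb corpus is a large filtered and deduplicated web dataset "
    "used for pre-training language models."
)

# Hypothetical synthesized instruction-response pairs grounded in the raw text above.
synthesized_pairs = [
    ("What is RefinedWeb?", "A large filtered and deduplicated web dataset for pre-training."),
    ("What is it used for?", "Pre-training language models."),
]

# Pack the raw context and its instruction-response pairs into one training example.
augmented_example = raw_text + "\n\n" + "\n\n".join(
    f"Q: {question}\nA: {answer}" for question, answer in synthesized_pairs
)
print(augmented_example)
```
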
To evaluate our general base model using the [lm-evaluation-harness framework](https://github.com/EleutherAI/lm-evaluation-harness):

1. Set up dependencies:
```bash
git clone https://github.com/EleutherAI/lm-evaluation-harness
cd lm-evaluation-harness
pip install -e .
```

2. Evaluate:
```bash
MODEL=instruction-pretrain/InstructLM-1.3B
add_bos_token=True # this flag is needed because lm-eval-harness sets add_bos_token to False by default, but our model requires add_bos_token to be True

accelerate launch -m lm_eval --model hf \
    --model_args pretrained=${MODEL},add_bos_token=${add_bos_token},dtype=float16 \
    --gen_kwargs do_sample=False \
    --tasks piqa,hellaswag,winogrande \
    --batch_size auto \
    --num_fewshot 0

accelerate launch -m lm_eval --model hf \
    --model_args pretrained=${MODEL},add_bos_token=${add_bos_token},dtype=float16 \
    --gen_kwargs do_sample=False \
    --tasks social_iqa,ai2_arc,openbookqa,boolq,mmlu \
    --batch_size auto \
    --num_fewshot 5
```

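If you prefer to drive the harness from Python (for example in a notebook), the following is a hedged sketch of the equivalent zero-shot run using lm-eval's `simple_evaluate` entry point; it is not from the original card, assumes lm-eval v0.4+, and argument names may shift between harness versions.

```python
# Hedged sketch, not from the original card: run the same zero-shot tasks through
# lm-eval's Python API (assumes lm-eval >= 0.4).
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=instruction-pretrain/InstructLM-1.3B,add_bos_token=True,dtype=float16",
    tasks=["piqa", "hellaswag", "winogrande"],
    num_fewshot=0,
    batch_size="auto",
)
print(results["results"])  # per-task metrics
```
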
## Citation
If you find our work helpful, please cite us:

[Instruction Pre-Training](https://huggingface.co/papers/2406.14491) (EMNLP 2024)
```bibtex
@article{cheng2024instruction,
  title={Instruction Pre-Training: Language Models are Supervised Multitask Learners},
  author={Cheng, Daixuan and Gu, Yuxian and Huang, Shaohan and Bi, Junyu and Huang, Minlie and Wei, Furu},
  journal={arXiv preprint arXiv:2406.14491},
  year={2024}
}
```

[Adapt LLM to Domains](https://huggingface.co/papers/2309.09530) (ICLR 2024)
```bibtex
@inproceedings{cheng2024adapting,
  title={Adapting Large Language Models via Reading Comprehension},
  author={Daixuan Cheng and Shaohan Huang and Furu Wei},
  booktitle={The Twelfth International Conference on Learning Representations},
  year={2024},
  url={https://openreview.net/forum?id=y886UXPEZ0}
}
```