---
license: llama3.3
datasets:
- tokyotech-llm/swallow-code
language:
- en
- ja
base_model:
- meta-llama/Llama-3.1-8B
---

# Model Card

<img src="https://huggingface.co/datasets/tokyotech-llm/swallow-math/resolve/main/figures/swallow-code-math-log.png" alt="SwallowCodeMath Icon" width="600">

<img src="https://huggingface.co/datasets/tokyotech-llm/swallow-code/resolve/main/assets/experiments.png" width="800">

## Model Summary

This model is a continual pre-training of Llama-3.1-8B on a SwallowCode ablation dataset mixed with multilingual text.
It was trained to evaluate the performance of syntax-filtered Python code from The-Stack-v2 in the SwallowCode ablation experiments.

The model was trained on 50 billion tokens using a mix of 16% SwallowCode (Experiment 2) and 84% multilingual text, following the setup described in the SwallowCode paper.

Training was performed with Megatron-LM.

## Use

### Intended Use

This model is intended for text completion in English and Japanese, with a focus on code generation tasks due to its training on syntax-error-free Python code from The-Stack-v2. It is part of the [SwallowCode ablation models](https://huggingface.co/collections/tokyotech-llm/swallowcode-6811c84ff647568547d4e443) (Experiment 2, exp2-syntax-error-filtered) and evaluates the effect of syntax-error filtering in the SwallowCode pipeline. It is not instruction-tuned and is best suited for research purposes.

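For reference, syntax-error filtering of the kind this ablation isolates can be sketched with Python's standard `ast` module. This is only an illustrative sketch, not the actual SwallowCode pipeline implementation:

```python
# Illustrative only: NOT the SwallowCode pipeline code, just a minimal way to
# express "keep only samples that parse as valid Python 3".
import ast

def passes_syntax_check(source: str) -> bool:
    """Return True if the source parses as valid Python 3."""
    try:
        ast.parse(source)
        return True
    except SyntaxError:
        return False

samples = [
    "def add(a, b):\n    return a + b\n",  # valid -> kept
    "def broken(:\n    pass\n",            # SyntaxError -> dropped
]
kept = [s for s in samples if passes_syntax_check(s)]
print(f"kept {len(kept)} of {len(samples)} samples")
```
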
### Generation

```python
# pip install -q transformers
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "tokyotech-llm/<model-name>"
device = "cuda"  # use "cpu" for CPU-only inference

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name).to(device)

inputs = tokenizer.encode("def fibonacci(n):", return_tensors="pt").to(device)
outputs = model.generate(inputs, max_length=100)
print(tokenizer.decode(outputs[0]))
```

## Training

### Model
- **Architecture**: Llama-3.1
- **Pretraining tokens**: 50B
- **Precision**: bfloat16
- **Sequence length**: 8,192
- **Tokenizer**: Llama-3 tokenizer

### Data
The training mix consists of:

- 16% Code: syntax-error-free Python subset of The-Stack-v2-train-smol-ids (8B tokens), from SwallowCode, Experiment 2.
- 84% Multilingual Text:
  - Japanese Wikipedia (0.84B tokens)
  - Japanese Swallow Corpus v2 (26.1B tokens)
  - Laboro-ParaCorpus (0.22B tokens)
  - English Wikipedia (1.1B tokens)
  - English Cosmopedia (3.7B tokens)
  - English DCLM (10.0B tokens)

Details are in the paper's Appendix.
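
As a rough consistency check (a back-of-the-envelope calculation, not an official accounting from the paper), the token counts listed above add up to approximately the stated 50B total and 16%/84% split:

```python
# Rough sanity check of the stated mix; the numbers are the per-source token
# counts listed above (in billions), not figures taken from the paper itself.
code_tokens = 8.0  # SwallowCode Experiment 2 subset
multilingual_tokens = {
    "ja_wikipedia": 0.84,
    "ja_swallow_corpus_v2": 26.1,
    "laboro_paracorpus": 0.22,
    "en_wikipedia": 1.1,
    "en_cosmopedia": 3.7,
    "en_dclm": 10.0,
}
text_total = sum(multilingual_tokens.values())    # ~41.96B
total = code_tokens + text_total                  # ~49.96B, i.e. ~50B
print(f"code share: {code_tokens / total:.1%}")   # ~16%
print(f"text share: {text_total / total:.1%}")    # ~84%
```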

### Hardware
- GPUs: 64 NVIDIA H100 (94GB)
- Interconnect: InfiniBand NDR200
- Supercomputer: TSUBAME, Institute of Science Tokyo

### Software
- Megatron-LM (version core_r0.9.0) for training
- lm-evaluation-harness for evaluation
- BigCodeBench for code evaluation

## Evaluation

The model was evaluated using the setup described in the SwallowCode paper, with lm-evaluation-harness and BigCodeBench. Benchmarks include code generation (HumanEval, HumanEval+) and general tasks (OpenBookQA, TriviaQA, HellaSwag, SQuAD 2.0, XWINO, MMLU, GSM8K, BBH). Results are reported for checkpoints at 10B, 20B, 30B, 40B, and 50B tokens.

**Evaluation Results (Experiment 2)**

| Tokens (B) | OpenBookQA | TriviaQA | HellaSwag | SQuAD2.0 | XWINO | MMLU | GSM8K | BBH | HumanEval | HumanEval+ |
|------------|------------|----------|-----------|----------|--------|--------|--------|--------|-----------|------------|
| 10 | 0.3560 | 0.6675 | 0.6015 | 0.3385 | 0.9062 | 0.6321 | 0.4784 | 0.5881 | 0.3604 | 0.3713 |
| 20 | 0.3520 | 0.6635 | 0.6026 | 0.3364 | 0.9049 | 0.6252 | 0.4784 | 0.5781 | 0.3591 | 0.3585 |
| 30 | 0.3560 | 0.6637 | 0.6012 | 0.3375 | 0.9080 | 0.6313 | 0.5019 | 0.5950 | 0.3701 | 0.3762 |
| 40 | 0.3580 | 0.6679 | 0.6046 | 0.3346 | 0.9062 | 0.6330 | 0.5019 | 0.5998 | 0.3720 | 0.3689 |
| 50 | 0.3660 | 0.6694 | 0.6055 | 0.3340 | 0.9084 | 0.6325 | 0.5155 | 0.6044 | 0.3787 | 0.3787 |

*Source: Table 3 from the SwallowCode paper, showing performance of the syntax-error-free Python subset.*
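
HumanEval-style results are conventionally reported with the unbiased pass@k estimator of Chen et al. (2021). The following is a generic sketch of that estimator for readers unfamiliar with the metric, not the evaluation code used to produce the table above:

```python
# Generic pass@k estimator (Chen et al., 2021); illustrative only, not the
# harness code used for this model's HumanEval / HumanEval+ scores.
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased estimate of pass@k given n samples, c of which are correct."""
    if n - c < k:
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

# Hypothetical example: 20 generations per problem, 7 pass the unit tests.
print(f"pass@1  = {pass_at_k(20, 7, 1):.3f}")   # 0.350
print(f"pass@10 = {pass_at_k(20, 7, 10):.3f}")
```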

## Citation

```bibtex
@misc{fujii2025rewritingpretrainingdata,
      title={Rewriting Pre-Training Data: Boosting LLM Performance in Math and Code},
      author={Kazuki Fujii and Yukito Tajima and Sakae Mizuki and Hinari Shimada and Taihei Shiotani and Koshiro Saito and Masanari Ohi and Masaki Kawamura and Taishi Nakamura and Takumi Okamoto and Shigeki Ishida and Kakeru Hattori and Youmi Ma and Hiroya Takamura and Rio Yokota and Naoaki Okazaki},
      year={2025},
      eprint={XXXX.XXXXX},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/XXXX.XXXXX},
}
```