kazukifujii committed (verified)
Commit 94e273e · Parent: 758434c

Update README.md

Files changed (1): README.md (+1 -1)
README.md CHANGED
@@ -20,7 +20,7 @@ base_model:
  This model is a continual pre-training of [Llama-3.1-8B](https://huggingface.co/meta-llama/Llama-3.1-8B) on a mix of mathematical datasets from [SwallowMath](https://huggingface.co/datasets/tokyotech-llm/swallow-math) and multilingual text datasets.
  The model was trained to evaluate the performance of mathematical reasoning and problem-solving as part of the SwallowMath ablation experiments (experiment 2).
  
- It was trained on **50 billion tokens** using a mix of 4.8% SwallowMath (finemath-4+ rewritten), 13.1% Code, and 82% multilingual text, following the setup described in the [SwallowMath paper](https://arxiv.org/abs/XXXX.XXXXX).
+ It was trained on **50 billion tokens** using a mix of 4.8% SwallowMath (finemath-4+ rewritten), 13.1% Code, and 82% multilingual text, following the setup described in the [SwallowMath paper](https://arxiv.org/abs/2505.02881).
  Training was performed using [Megatron-LM](https://github.com/NVIDIA/Megatron-LM/tree/core_r0.9.0).
  
  ## Use
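
For context, at the stated mixture the 50-billion-token budget works out to roughly 2.4B tokens of SwallowMath (4.8%), 6.6B tokens of code (13.1%), and 41B tokens of multilingual text (82%). Below is a minimal usage sketch for loading the resulting checkpoint with the Hugging Face `transformers` library; the repository id is a placeholder assumption (this page does not show the model's actual Hub id), so substitute the real one.

```python
# Minimal usage sketch, not part of the commit. The repo id below is a
# placeholder assumption; replace it with the model's actual Hub id.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "tokyotech-llm/<this-model>"  # hypothetical placeholder

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # use the dtype stored in the checkpoint
    device_map="auto",    # requires `accelerate`; spreads weights over available devices
)

# This is a base (continually pre-trained) LM, not an instruct model,
# so prompt it with plain text completion.
prompt = "Problem: If 3x + 5 = 20, what is x?\nSolution:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```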