Question About Expanding Ultra-FineWeb to 8T Tokens for MiniCPM 4
I understand that Ultra-FineWeb serves as the core pre-training dataset for the MiniCPM 4 series, with a total of approximately 1.1T tokens. I'm curious about how this dataset is expanded to meet the model's requirement of around 8T tokens during training.
Could anyone share the strategies or methods typically used for such data scaling in the context of MiniCPM 4?
Thank you!
Thank you for your interest in the Ultra-FineWeb dataset!
To meet the model's training requirement of around 8T tokens, we applied the same data processing pipeline used to build Ultra-FineWeb to additional internal datasets.
These internal datasets underwent the same quality control and processing standards as Ultra-FineWeb to ensure high quality across the entire training corpus. However, we currently do not plan to open-source this additional internal data.
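For concreteness, a minimal sketch of what such quality-based filtering can look like is shown below, assuming a fastText-style quality classifier similar in spirit to the one described for Ultra-FineWeb. The model path, label name, score threshold, and JSONL field names are illustrative assumptions, not the actual internal configuration.

```python
# Illustrative sketch: score documents with a fastText-style quality classifier
# and keep only those above a threshold. All paths, labels, and thresholds
# below are hypothetical placeholders.
import json
import fasttext  # pip install fasttext

QUALITY_MODEL_PATH = "quality_classifier.bin"  # hypothetical trained classifier
SCORE_THRESHOLD = 0.5                          # hypothetical cutoff for "high quality"

def filter_corpus(input_path: str, output_path: str) -> None:
    model = fasttext.load_model(QUALITY_MODEL_PATH)
    kept, total = 0, 0
    with open(input_path, "r", encoding="utf-8") as fin, \
         open(output_path, "w", encoding="utf-8") as fout:
        for line in fin:
            total += 1
            doc = json.loads(line)
            # fastText expects a single line of text, so collapse newlines first.
            text = doc["text"].replace("\n", " ")
            labels, probs = model.predict(text)
            # Assume the classifier emits "__label__hq" for high-quality documents.
            score = probs[0] if labels[0] == "__label__hq" else 1.0 - probs[0]
            if score >= SCORE_THRESHOLD:
                fout.write(json.dumps(doc, ensure_ascii=False) + "\n")
                kept += 1
    print(f"kept {kept}/{total} documents")

if __name__ == "__main__":
    filter_corpus("internal_corpus.jsonl", "internal_corpus.filtered.jsonl")
```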
Hope this answers your question! Feel free to reach out if you have any other questions about MiniCPM 4.