Inquiry on the Composition of the Pre-training Dataset for CodeQwen1.5-7B-Chat and How to Replicate It

#29
by AshleyLL - opened

I am interested in understanding the composition of the pre-training dataset used for CodeQwen1.5-7B-Chat. Specifically, I would like to know:

  1. What are the primary sources or types of datasets that constitute the pre-training corpus for CodeQwen1.5-7B-Chat, and what are their approximate proportions within the overall corpus?
  2. Are there any specific filtering or processing steps applied to these datasets before they are used for training?
  3. Is there a guide or documentation available on how to replicate the creation of this pre-training dataset?
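To make question 2 concrete, here is the kind of pre-processing step I have in mind. This is purely a hypothetical sketch I wrote to illustrate the question (exact deduplication plus a couple of common quality heuristics), not anything I know about Qwen's actual pipeline; the function name, thresholds, and heuristics are all my own assumptions:

```python
import hashlib

def filter_corpus(files, max_line_len=1000, min_alpha_frac=0.25):
    """Keep unique code files that pass basic quality heuristics.

    Hypothetical example filters: drop exact duplicates, files with
    very long lines (often minified or generated), and files that are
    mostly non-alphabetic characters (often embedded data, not code).
    """
    seen = set()
    kept = []
    for text in files:
        # Exact deduplication via content hash.
        digest = hashlib.sha256(text.encode("utf-8")).hexdigest()
        if digest in seen:
            continue
        seen.add(digest)

        lines = text.splitlines()
        if not lines:
            continue
        # Very long lines suggest minified or machine-generated code.
        if max(len(line) for line in lines) > max_line_len:
            continue
        # Low alphabetic fraction suggests data blobs rather than code.
        alpha = sum(c.isalpha() for c in text)
        if alpha / max(len(text), 1) < min_alpha_frac:
            continue

        kept.append(text)
    return kept

sample = [
    "def add(a, b):\n    return a + b\n",
    "def add(a, b):\n    return a + b\n",  # exact duplicate -> dropped
    "0000000000" * 500,                    # one 5000-char line -> dropped
]
print(len(filter_corpus(sample)))  # -> 1
```

I am curious whether the actual pipeline uses steps along these lines (deduplication, heuristic quality filters, license filtering), and at what granularity (file, repository, or document level).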

Any insights, references, or guidance on where to find more information about this topic would be greatly appreciated.
