Thai, Japanese, Korean, and Vietnamese in SmolLM3?

#8
by wannaphong - opened

Hello! I was reading the pretraining config at https://huggingface.co/datasets/HuggingFaceTB/smollm3-configs. I found that Thai, Japanese, Korean, and Vietnamese were included in the training config, but the instruct model doesn't seem to support those languages.

Is it that the model doesn't work well with those languages (too little data), or that you don't have instruction datasets to try?

Hugging Face Smol Models Research org
•
edited Aug 6

We did not generate instruct data for those languages, so the instruct model doesn't support them (they also weren't upsampled in pretraining when compared to the official languages). But we did include them to open the possibility for continual pretraining.

Thank you! I tried fine-tuning the base model with a Thai instruction dataset. The output has repetitions during generation (though it can still give good output), so I think I may need to do continual pretraining on the model if I have the resources :(.

@loubnabnl Could you check whether the Thai FineWeb2 subset in the config was trained on in full or not? If the full Thai subset of FineWeb2 was used, I think I can add a few extra resources like the full Thai Wikipedia to build a CPT model.
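
For context, here is a minimal sketch of how I would stream the Thai subset of FineWeb-2 to prepare CPT data. The dataset id `HuggingFaceFW/fineweb-2`, the config name `tha_Thai`, and the `text` field are assumptions based on FineWeb-2's naming conventions, so they should be checked against the dataset card before running:

```python
from datasets import load_dataset

# Assumed dataset id and config name (ISO 639-3 + script); verify on the dataset card.
ds = load_dataset(
    "HuggingFaceFW/fineweb-2",
    name="tha_Thai",
    split="train",
    streaming=True,  # stream instead of downloading the full subset
)

# Peek at a few documents; the "text" field name is an assumption.
for example in ds.take(3):
    print(example["text"][:200])
```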
