Other models?
First, thank you for this distillation, it's been a lot of fun working with it.
Curious to know if you plan on opening up the entire process and dataset so models with other architectures could be distilled in a similar way? LFM2 2.6B is reporting performance close to 4B models, and it would be interesting to see the same process applied to it to further extend the reach to CPU-bound enthusiasts.
Thanks for reading, appreciate your work.
Thank you for your support!
While it's not currently planned, opening up the dataset could indeed be very helpful to some, though probably without the mixed-in gpt-oss-120b outputs. The README will be updated soon, but a large portion of the dataset is based on andyrdt/gpt-oss-20b-rollouts, heavily filtered and reformatted.
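For anyone curious what that filtering and reformatting step might look like in practice, here is a rough, hypothetical sketch using the datasets library. The field names, filter criterion, and chat template below are assumptions for illustration, not the actual schema or pipeline used for this model's data.

```python
# Hypothetical sketch of filtering/reformatting a rollouts dataset.
# Field names ("prompt", "completion"), the length cutoff, and the chat
# template are ASSUMED, not the actual schema or criteria used here.
from datasets import load_dataset

rollouts = load_dataset("andyrdt/gpt-oss-20b-rollouts", split="train")

# Example filter: drop short or empty completions (assumed criterion).
def keep(example):
    return len(example.get("completion") or "") > 200

filtered = rollouts.filter(keep)

# Example reformat into a single text field for SFT (assumed template).
def to_text(example):
    return {
        "text": f"<|user|>\n{example['prompt']}\n<|assistant|>\n{example['completion']}"
    }

formatted = filtered.map(to_text, remove_columns=filtered.column_names)
formatted.to_json("distill_data.jsonl")
```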
Quite a bit of information was already shared in the README. Training was done using Unsloth for more optimized fine-tuning; the other settings don't matter as much, since they change from model to model and depend on what kind of system you are using for fine-tuning.
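To give a concrete idea of that kind of setup, here is a minimal Unsloth SFT sketch. The base model name, LoRA rank, sequence length, and training hyperparameters are placeholders only; as noted above, the actual values vary by model and hardware and are not the settings used for this release.

```python
# Minimal Unsloth SFT sketch -- model name and hyperparameters are
# placeholders, NOT the actual settings used for this distillation.
from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments
from datasets import load_dataset

# Load a base model in 4-bit for memory-efficient fine-tuning.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Qwen2.5-3B-Instruct",  # placeholder base model
    max_seq_length=4096,
    load_in_4bit=True,
)

# Attach LoRA adapters so only a small set of weights is trained.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

# Distillation data prepared as a single "text" field per example.
dataset = load_dataset("json", data_files="distill_data.jsonl", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=4096,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=8,
        num_train_epochs=1,
        learning_rate=2e-4,
        output_dir="outputs",
    ),
)
trainer.train()
```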
As for other smaller models, it is theoretically possible and will be looked into. I'm not sure how LFM would perform, though I did test it a little and it was okay. Other unrelated models are currently planned, but after they are done I will try a smaller run on LFM2 2.6B.
Edit:
Just to add, tests on other architectures were done; we continually pretrained and then did SFT on a custom 6B MoE as well, but it performed very poorly compared to this model.
Dataset uploaded.
Still amazes me that behavior patterns can be distilled and transferred.