A fine-tune of LFM 2.5 350M on the following datasets:
The model was trained for 60 steps with Unsloth in Google Colab, following the LFM documentation.
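A minimal sketch of what the training setup above could look like, assuming the standard Unsloth + TRL Colab recipe. Only the 60-step budget comes from this card; the base checkpoint name, LoRA rank, batch size, and learning rate are illustrative assumptions, and the dataset list is the one referenced above.

```python
# Assumed Unsloth fine-tuning recipe; only max_steps=60 is stated in the card.
from unsloth import FastLanguageModel
from trl import SFTTrainer, SFTConfig

model, tokenizer = FastLanguageModel.from_pretrained(
    "LiquidAI/LFM2-350M",      # assumed base checkpoint name
    max_seq_length=2048,
    load_in_4bit=True,
)
model = FastLanguageModel.get_peft_model(
    model,
    r=16, lora_alpha=16,        # assumed LoRA hyperparameters
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)

dataset = ...                   # the fine-tuning datasets listed above

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    args=SFTConfig(
        max_steps=60,                     # from this card
        per_device_train_batch_size=2,    # assumed
        gradient_accumulation_steps=4,    # assumed
        learning_rate=2e-4,               # assumed
        output_dir="outputs",
    ),
)
trainer.train()
```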
Training loss curve:
Chat template
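As a sketch of how the chat template is applied at inference time: the ChatML-style special tokens below are an assumption about the LFM2 template, and `format_chat` is a hypothetical helper shown only for illustration. In practice `tokenizer.apply_chat_template` should be used, so the exact template ships with the checkpoint.

```python
# Hypothetical helper mimicking a ChatML-style chat template
# (assumed format; prefer tokenizer.apply_chat_template in real use).
def format_chat(messages):
    """Render a list of {role, content} dicts into a ChatML-style prompt."""
    parts = [f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n" for m in messages]
    parts.append("<|im_start|>assistant\n")  # generation prompt
    return "".join(parts)

prompt = format_chat([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello!"},
])
print(prompt)
```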