---
library_name: transformers
tags:
- trl
- sft
license: apache-2.0
datasets:
- HuggingFaceTB/smoltalk
base_model:
- nbeerbower/Mahou-1.5-mistral-nemo-12B-lorablated
---
![image/png](https://huggingface.co/nbeerbower/SmolNemo-12B/resolve/main/smolnemo_cover.png?download=true)
> 🧪 **Just Another Model Experiment**
>
> This is one of many experimental iterations I'm sharing publicly while I mess around with training parameters and ideas. It's not a "real" release - just me being transparent about my learning process. Feel free to look under the hood, but don't expect anything production-ready!
# SmolNemo-12B-FFT-experimental
[Mahou-1.5-mistral-nemo-12B-lorablated](https://huggingface.co/nbeerbower/Mahou-1.5-mistral-nemo-12B-lorablated) finetuned on [HuggingFaceTB/smoltalk](https://huggingface.co/datasets/HuggingFaceTB/smoltalk).
**This model exhibits erratic behavior and poor performance.**
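If you still want to poke at it, here's a minimal loading sketch with `transformers`. The sampling settings are arbitrary placeholders, and given the template conflict noted under Method, the chat template may not behave:

```python
# Minimal inference sketch (illustrative; sampling settings are arbitrary).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "nbeerbower/SmolNemo-12B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

messages = [{"role": "user", "content": "Hello! What can you do?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```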
### Method
SFT on 8x A100s for 0.1 epochs. This was a full finetune.
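For reference, here's a minimal sketch of what a comparable TRL SFT run looks like. The hyperparameters are assumptions, not the ones actually used; only the base model, dataset, and 0.1-epoch budget come from this card:

```python
# Illustrative TRL SFT setup -- learning rate, batch size, and
# accumulation steps are assumptions, not the values used here.
# Launch with `accelerate launch` (or torchrun) for multi-GPU.
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

dataset = load_dataset("HuggingFaceTB/smoltalk", "all", split="train")

config = SFTConfig(
    output_dir="SmolNemo-12B-FFT-experimental",
    num_train_epochs=0.1,            # partial epoch, as in this card
    per_device_train_batch_size=1,   # assumption
    gradient_accumulation_steps=8,   # assumption
    learning_rate=2e-5,              # assumption
    bf16=True,
)

trainer = SFTTrainer(
    model="nbeerbower/Mahou-1.5-mistral-nemo-12B-lorablated",
    train_dataset=dataset,
    args=config,
)
trainer.train()
```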
I think the model's issues can be chalked up to a conflict between the base model's Mistral Instruct prompt format and the ChatML format used for this finetune.
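For context, the two chat formats look roughly like this (illustrative strings, not pulled from this repo's tokenizer; exact special-token spacing varies by tokenizer version):

```python
# Mistral Instruct, which the Mistral-Nemo base expects:
mistral_prompt = "<s>[INST]Hello![/INST]Hi there!</s>"

# ChatML, commonly used for smoltalk finetunes:
chatml_prompt = (
    "<|im_start|>user\nHello!<|im_end|>\n"
    "<|im_start|>assistant\nHi there!<|im_end|>\n"
)
```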