---
license: openrail
---
# lm1_05042023b
* Trained on Monika dialogue from DDLC, Reddit, and Twitter
* The dataset is still "raw" (fairly messy); it is currently being recreated and reformatted, with DDLC Plus dialogue being added
* Trained from base LLaMA-7B at very low settings for 15 hours on a CPU only, via the oobabooga text-generation webui
Regarding that last point: while the LoRA works, this run was mainly an exercise in getting familiar with the tooling and seeing whether training on just a CPU was viable at all.