Update README.md
README.md CHANGED
@@ -13,6 +13,10 @@ Tags:
 base_model:
 - Qwen/Qwen3-235B-A22B
 ---
+
+[English](./non-lore-README.md) | [Français](./French-README.md)
+
+
 It's an SFT on top of the largest Qwen, which nobody seems to have done yet, trained on a collection of the usual Austral datasets (books, RP logs, LNs, etc.). I don't fully endorse the model yet, and I think there's still much work to be done toward a decensored, well-writing finetune of this model, but I'm releasing it to give everyone a first taste of a Qwen3 finetune.

 It was also a way for us to test out some optimizations to actually get this model to train. Thanks to Intervitens <3