This is a test... trying to get Qwen3 30B to behave.

I started by fine-tuning SuperbEmphasis/Black-Eclipse-Test-ERP-RP-v2 with my improved dataset.

However, this time I cranked the number of activated experts up to 64/128 to force more of the parameters to get trained.
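
That "crank" is just a routing config knob. Here's a minimal sketch of the idea with transformers, assuming the Qwen3-MoE `num_experts_per_tok` field; the actual fine-tuning stack and hyperparameters aren't shown here:

```python
import torch
from transformers import AutoConfig, AutoModelForCausalLM

BASE = "SuperbEmphasis/Black-Eclipse-Test-ERP-RP-v2"

# Qwen3 MoE routes only a handful of its 128 experts per token by default.
# Bumping num_experts_per_tok to 64 sends each token through 64 experts,
# so far more of the expert weights see gradients during training.
config = AutoConfig.from_pretrained(BASE)
config.num_experts_per_tok = 64

model = AutoModelForCausalLM.from_pretrained(BASE, config=config, torch_dtype=torch.bfloat16)
# ...fine-tune `model` on the dataset as usual from here...
```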

Then I cranked the activated experts back down to 24/128.
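
Same knob in the other direction once training is done. A hedged sketch of baking the 24/128 routing back into the saved checkpoint (the local paths are placeholders, not the real output dirs):

```python
import torch
from transformers import AutoConfig, AutoModelForCausalLM

TRAINED = "./black-eclipse-64exp-checkpoint"  # hypothetical output dir from the run above

config = AutoConfig.from_pretrained(TRAINED)
config.num_experts_per_tok = 24  # back down from 64; still 128 experts total

model = AutoModelForCausalLM.from_pretrained(TRAINED, config=config, torch_dtype=torch.bfloat16)
model.save_pretrained("./black-eclipse-24exp")  # ships with 24/128 routing in config.json
```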

It is surprisingly coherent. The biggest issue I have run into so far is that it REALLY doesn't like roleplay scenarios where '{{char}}' can be multiple characters. It wants to stay as the single initial character.

This tells me that I need more scenario-based examples...
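
For what it's worth, the kind of scenario example I mean looks roughly like this; the fields and chat format are purely hypothetical, just to show '{{char}}' standing in for a whole cast rather than a single persona:

```python
import json

# Hypothetical multi-character scenario sample; field names are placeholders,
# not the actual dataset schema.
sample = {
    "system": (
        "You are {{char}}, playing every NPC in the scene: Mira the innkeeper, "
        "Captain Voss, and the stranger by the fire. Switch speakers freely."
    ),
    "conversations": [
        {"role": "user", "content": "{{user}} pushes open the inn door and shakes off the rain."},
        {"role": "assistant", "content": "Mira glances up from the bar... Voss sets down his tankard..."},
    ],
}

with open("scenario_examples.jsonl", "a", encoding="utf-8") as f:
    f.write(json.dumps(sample, ensure_ascii=False) + "\n")
```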