Could you do this merge for me lol?
I know, it sounds pathetic, but after struggling with dependencies, then with something called n_rot and head_dim, and then, after fiddling with those to make the quant work, still getting an output model with the wrong shape that won't run even with a straight passthrough, I'd love for you to merge this model with this LoRA, pretty please:
ChaoticNeutrals/Captain-Eris_Violet_Toxic-Magnum-12B
I've been dying for a 12B with reasoning, since reasoning is great but 8B is a little dumb for it, I think.
You'd be a saviour. Thanks for everything else you do, you do great stuff!
Just merge ChaoticNeutrals/Captain-Eris_Violet_Toxic-Magnum-12B with Nitral-AI/Captain-Eris_Violet-GRPO-v0.420, since it's a reasoning model. Both come from the same base/reasoning model anyway.
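For anyone wanting to try this themselves, here's a minimal mergekit config sketch for a SLERP merge of the two. This is a starting point, not a tuned recipe: the layer_range assumes the usual 40-layer Mistral-Nemo 12B layout (check each model's config.json), and t: 0.5 is just an even 50/50 blend.

```yaml
# Sketch of a SLERP merge of the two models with mergekit.
# Run with: mergekit-yaml this_config.yml ./output-model
slices:
  - sources:
      - model: ChaoticNeutrals/Captain-Eris_Violet_Toxic-Magnum-12B
        layer_range: [0, 40]   # assumed layer count; verify in config.json
      - model: Nitral-AI/Captain-Eris_Violet-GRPO-v0.420
        layer_range: [0, 40]
merge_method: slerp
base_model: ChaoticNeutrals/Captain-Eris_Violet_Toxic-Magnum-12B
parameters:
  t: 0.5                       # 0.0 = all base, 1.0 = all GRPO; untuned guess
dtype: bfloat16
```

Since both models share a base, SLERP should keep the shapes consistent, which sidesteps the wrong-shape output problem described above. If you instead want to apply a separate LoRA adapter rather than merge two full models, that's a different operation (e.g. PEFT's merge_and_unload), not a mergekit merge.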
Cool, thanks, that's super helpful. Edit: actually, the GRPO model is not very good at reasoning; its reasoning sections are very short and shallow.