Join our Discord! https://discord.gg/Nbv9pQ88Xb

More than 5,500 helpful LLM enthusiasts! A hub for players and makers alike!

We need testers!


Live on OpenRouter! (Powered by Parasail.io)


Drummer proudly presents...

Valkyrie 49B v1


Description

it swears unprompted 10/10 model

... characters work well, groups work well, scenarios also work really well, so it's a great model overall

This is pretty exciting, though. GLM-4 already had me on the verge of deleting all of my other 32B and lower models. I've got to test this more, but I think this model at Q3m is the death blow lol

Smart Nemotron 49B learned how to roleplay

Even without thinking, it's rock solid at Q4m.

Without thinking it's at a 40-70B level. With thinking it's at a 100B+ level.

This model would have been AGI if it were named properly with a name like "Bob". Alas, it was not.

I think this model is nice. It follows prompts very well. I didn't really note any major issues or repetition.

Yeah, this is good. I think it's clearly smart enough, close to the other L3.3 70B models. It follows directions and formatting very well. I asked it to create the intro message, my first response was formatted differently, and it immediately followed my format on the second message. I also have max tokens at 2k cause I like the model to finish its thought. But I started trimming the model's responses when I felt the last bit was unnecessary, and it started replying closer to that length. It's pretty much uncensored.

Nemotron is my favorite model, and I think you fixed it!!

Usage

  • Llama 3 Chat Template
  • <think> capable upon prefill, or by adding "detailed thinking on" on top of the system prompt (see the sketch below)
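
A minimal sketch of both toggles, assuming llama-cpp-python and a locally downloaded GGUF quant (the filename and sampling settings below are placeholders, not from this card): the "detailed thinking on" line sits on top of the system prompt, and prefilling the assistant turn with <think> also triggers reasoning.

```python
# Sketch only: Llama 3 chat template with Valkyrie's thinking toggles.
from llama_cpp import Llama

# Hypothetical filename; point this at whichever quant you downloaded.
llm = Llama(model_path="Valkyrie-49B-v1-Q4_K_M.gguf", n_ctx=8192)

# "detailed thinking on" goes on top of the system prompt to enable reasoning.
system = "detailed thinking on\nYou are a helpful roleplay partner."
user = "Introduce yourself in character."

prompt = (
    "<|begin_of_text|>"
    f"<|start_header_id|>system<|end_header_id|>\n\n{system}<|eot_id|>"
    f"<|start_header_id|>user<|end_header_id|>\n\n{user}<|eot_id|>"
    "<|start_header_id|>assistant<|end_header_id|>\n\n<think>"  # prefill to trigger thinking
)

out = llm(prompt, max_tokens=512, temperature=0.8, stop=["<|eot_id|>"])
print(out["choices"][0]["text"])
```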

Links

Special Thanks

  • Thank you to the testers at BeaverAI! You da MVP!
  • Thank you to each and every one of you who donated and subscribed on Patreon and Ko-fi to make our venture a little bit easier.
  • Subscribe to my Patreon!

config-v1a

Downloads last month: 6,351
Format: GGUF
Model size: 49.9B params
Architecture: deci

Available quantizations: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit
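
To pull one of these quants locally, here is a minimal sketch using huggingface_hub; the exact GGUF filename is an assumption, so list the repo files first to confirm the naming.

```python
# Sketch: download a single GGUF quant from the repo with huggingface_hub.
from huggingface_hub import hf_hub_download, list_repo_files

repo_id = "TheDrummer/Valkyrie-49B-v1-GGUF"

# Inspect which quant files actually exist before downloading.
for name in list_repo_files(repo_id):
    print(name)

# Hypothetical filename; replace with one printed above.
path = hf_hub_download(repo_id=repo_id, filename="Valkyrie-49B-v1-Q4_K_M.gguf")
print("Saved to:", path)
```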


Model tree for TheDrummer/Valkyrie-49B-v1-GGUF: Quantized (14)