NEW MODEL:
vanta-research/mox-small-1
Mox-Small-1 has landed on the Hub!
Finetuned from AllenAI's fantastic Olmo 3.1 32B, Mox-Small-1 was trained with the same datasets and methodology as Mox-Tiny-1, making it the second addition to the Mox-1 family of models.
Mox-1 is designed to prioritize clarity, honesty, and genuine utility over blind agreement. These models are perfect for when you want to be challenged in a constructive, helpful way.
By building on Olmo 3.1 32B's architecture, Mox-Small-1 brings greater conversational depth and reasoning quality to the Mox-1 model family. Check it out!
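If you want to try it right away, here's a minimal sketch of loading the model with the transformers library. It assumes the repo ships a standard causal LM checkpoint with a chat template; the prompt, dtype, and generation settings are illustrative, not part of the official card.

```python
# Minimal sketch: chatting with Mox-Small-1 via transformers.
# Assumes a standard causal LM checkpoint and chat template on the Hub repo;
# adjust dtype/device settings for your hardware.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "vanta-research/mox-small-1"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)

# Example prompt; Mox-1 is tuned toward candid, constructive feedback.
messages = [{"role": "user", "content": "Give me honest feedback on this plan: ..."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```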