
Wayfarer-Large-70B-Llama-3.3

We’ve heard over and over from AI Dungeon players that modern AI models are too nice, never letting them fail or die. While it may be good for a chatbot to be nice and helpful, great stories and games aren’t all rainbows and unicorns. They have conflict, tension, and even death. These create real stakes and consequences for characters and the journeys they go on.

Similarly, great games need opposition. You must be able to fail, die, and may even have to start over. This makes games more fun!

However, the vast majority of AI models, through alignment RLHF, have been trained away from darkness, violence, or conflict, preventing them from fulfilling this role. To give our players better options, we decided to train our own model to fix these issues.

The Wayfarer series is a set of adventure role-play models specifically trained to give players a challenging and dangerous experience.

We wanted to contribute back to the open source community that we’ve benefitted so much from, so we open-sourced a 12B parameter version back in January. We thought people would love it, but they were even more excited than we expected.

By popular request, we decided to train a larger 70B version based on Llama 3.3.

When we let AI Dungeon players try this model, they were obsessed, sending us an endless stream of memes with an almost cult-like fervor. So we think you’ll probably like it too.

We’ve decided to open-source this model as well so anyone can experience unforgivingly brutal AI adventures! Anyone can download the model to run locally.

Quantized GGUF weights can be downloaded here.

Or if you want to try this model without running it yourself, you can do so at https://aidungeon.com (Wayfarer Large requires a subscription while Wayfarer Small is free).

We plan to continue improving and open-sourcing similar models, so please share any and all feedback on how we can improve model behavior. Below we share more details on how Wayfarer was created.

Model details

Wayfarer Large 70B was trained directly on top of Llama 3.3 70B Instruct with a 33/33/33 mixture of 8k text adventure data, 4k roleplay data, and a SlimOrca Sonnet subset. This instruct subset proved critical to effective training, emphasizing the difference between instruct and fiction and further amplifying the negative sentiment of the Wayfarer data.
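
As an illustration only (not Latitude’s actual pipeline), an equal-weight mixture like this can be assembled with the Hugging Face datasets library; the file names below are placeholders:

from datasets import load_dataset, interleave_datasets

# Placeholder files standing in for the three data sources described above.
text_adventure = load_dataset("json", data_files="text_adventure.jsonl", split="train")
roleplay = load_dataset("json", data_files="roleplay.jsonl", split="train")
slimorca_sonnet = load_dataset("json", data_files="slimorca_sonnet_subset.jsonl", split="train")

# Sample from each source with equal probability (33/33/33).
mixture = interleave_datasets(
    [text_adventure, roleplay, slimorca_sonnet],
    probabilities=[1 / 3, 1 / 3, 1 / 3],
    seed=42,
)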

How It Was Made

Wayfarer’s text adventure data was generated by simulating playthroughs of published character creator scenarios from AI Dungeon. Five distinct user archetypes played through each scenario, each with a different character start (faction, location, etc.), producing five unique samples per scenario.

One language model played the role of narrator, while another played the user. They were blind to each other’s underlying logic, so the user could genuinely surprise the narrator with its choices. Each simulation ran for up to 8k tokens or until the main character died, roughly as sketched below.
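
Conceptually, each simulation is a loop like the one below. The narrator and player helpers are hypothetical stand-ins for two separately prompted language models; the real data-generation pipeline is not public.

MAX_TOKENS = 8192  # 8k-token budget per simulated playthrough

def simulate_playthrough(scenario_opening: str, archetype: str, tokenizer) -> list[str]:
    transcript = [scenario_opening]
    while True:
        # The "user" model chooses an action without seeing the narrator's logic.
        action = generate_player_action(transcript, archetype)
        transcript.append("> " + action)
        # The narrator model continues the story, free to let the player fail or die.
        narration = generate_narration(transcript)
        transcript.append(narration)
        token_count = len(tokenizer.encode("\n".join(transcript)))
        if character_died(narration) or token_count >= MAX_TOKENS:
            return transcript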

Wayfarer’s general emotional sentiment is one of pessimism, where failure is frequent and plot armor does not exist. This serves to counter the positivity bias inherent in most language models today.

Since our original 12B release, this data has been further improved and regenerated from the ground up, minimizing repetition and maximizing narrative flow.

Inference

The Llama architecture can handle a large variety of settings, with the following serving as a baseline:

"temperature": 0.8,
"repetition_penalty": 1.05,
"min_p": 0.05

Limitations

Wayfarer was trained exclusively on second-person present tense data (using “you”) in a narrative style, but the larger model size and the underlying instruct layer should be more than capable of compensating for this, if so desired.

Prompt Format

This model follows the original Llama 3 prompt format.

<|start_header_id|>system<|end_header_id|>

You're a masterful storyteller and gamemaster. Write in second person present tense (You are), crafting vivid, engaging narratives with authority and confidence.<|eot_id|>
<|start_header_id|>user<|end_header_id|>

> You peer into the darkness.<|eot_id|>
<|start_header_id|>assistant<|end_header_id|>

You have been eaten by a grue.

GAME OVER<|eot_id|>
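
If you load the model with transformers, the tokenizer’s chat template should produce this format for you (assuming the repo ships the standard Llama 3 template, as a Llama 3.3 Instruct finetune normally would):

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("LatitudeGames/Wayfarer-Large-70B-Llama-3.3")

messages = [
    {"role": "system", "content": "You're a masterful storyteller and gamemaster. Write in second person present tense (You are), crafting vivid, engaging narratives with authority and confidence."},
    {"role": "user", "content": "> You peer into the darkness."},
]

prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)  # renders the <|start_header_id|> ... format shown above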

Credits

Thanks to Gryphe Padar for collaborating on this finetune with us!
