Wanton-Wolf-70B

User Discretion Advised

A furry finetune model based on L3.3-Cu-Mai-R1-70b, chosen for its exceptional features. *Tail swish*


✧ Quantized Formats

  • EXL2 3.0bpw (H8): this repository

✧ Recommended Settings

  • Static Temperature: 1.0-1.05
  • Min P: 0.02
  • DRY Settings (optional):
    • Multiplier: 0.8
    • Base: 1.75
    • Length: 4
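
For reference, below is a minimal sketch of applying these settings programmatically. It assumes a local KoboldCpp-compatible completion endpoint at http://localhost:5001; the URL, prompt, and exact field names are illustrative assumptions and may differ for your backend.

```python
import requests

# Assumed local KoboldCpp-style endpoint; adjust host/port for your setup.
API_URL = "http://localhost:5001/api/v1/generate"

payload = {
    "prompt": "Describe a moonlit forest clearing.",  # placeholder prompt
    "max_length": 512,
    # Recommended sampler settings from this card:
    "temperature": 1.0,          # static temperature, 1.0-1.05
    "min_p": 0.02,               # Min P
    # Optional DRY repetition-penalty settings:
    "dry_multiplier": 0.8,
    "dry_base": 1.75,
    "dry_allowed_length": 4,
}

response = requests.post(API_URL, json=payload, timeout=300)
response.raise_for_status()
print(response.json()["results"][0]["text"])
```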

✧ Recommended Templates

The following templates are recommended on the original Cu-Mai model page; adjust if needed:

  • LLam@ception by @.konnect
  • LeCeption by @Steel - a completely revamped XML version of Llam@ception 1.5.2 with stepped thinking and reasoning

LeCeption Reasoning Configuration:

Start Reply With:

'<think> OK, as an objective, detached narrative analyst, let's think this through carefully:'

Reasoning Formatting (no spaces):

  • Prefix: '<think>'
  • Suffix: '</think>'
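
If you post-process model output yourself, the prefix/suffix pair above can be used to separate the hidden reasoning block from the visible reply. A minimal sketch follows; the sample string is illustrative, not actual model output.

```python
import re

# Matches the reasoning block delimited by the prefix/suffix above.
THINK_PATTERN = re.compile(r"<think>(.*?)</think>", re.DOTALL)

def split_reasoning(output: str) -> tuple[str, str]:
    """Return (reasoning, reply); reasoning is "" if no block is found."""
    match = THINK_PATTERN.search(output)
    if not match:
        return "", output.strip()
    reasoning = match.group(1).strip()
    reply = THINK_PATTERN.sub("", output, count=1).strip()
    return reasoning, reply

# Example with a reply that starts with the forced reasoning prefix:
raw = "<think> OK, let's think this through carefully: ...</think>The wolf pads forward."
reasoning, reply = split_reasoning(raw)
print(reply)  # -> "The wolf pads forward."
```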

✧ Credits

Model Author

  • @Mawdistical

Original Model Creator

  • @SteelSkull - Creator of the L3.3-Cu-Mai-R1-70b base model
