---
license: apache-2.0
datasets:
- anthracite-org/kalo-opus-instruct-22k-no-refusal
- Nopm/Opus_WritingStruct
- Gryphe/Sonnet3.5-SlimOrcaDedupCleaned
- Gryphe/Sonnet3.5-Charcard-Roleplay
- Gryphe/ChatGPT-4o-Writing-Prompts
- Epiculous/Synthstruct-Gens-v1.1-Filtered-n-Cleaned
- Epiculous/SynthRP-Gens-v1.1-Filtered-n-Cleaned
- nothingiisreal/Reddit-Dirty-And-WritingPrompts
- allura-org/Celeste-1.x-data-mixture
- allura-org/shortstories_synthlabels
base_model:
- Qwen/Qwen2.5-14B
---
I have no idea what I’m doing… if this causes the apocalypse someone please let me know.
# EVA-Qwen2.5-14B-v0.0 8.0bpw h8 EXL2
Includes the [measurement.json](https://huggingface.co/FuturisticVibes/EVA-Qwen2.5-14B-v0.0-8.0bpw-h8-exl2/tree/measurement) file for further quantization.
Salesforce/xLAM-8x22b-r is on hold for now, probably until early next year; I need to save some money…
Original Model: https://huggingface.co/EVA-UNIT-01/EVA-Qwen2.5-14B-v0.0
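For a quick start, here is a minimal loading sketch using the exllamav2 Python API. The model path is a placeholder, and the classes follow exllamav2's published example scripts, so check that repo for the current API. (The bundled measurement.json can also be fed back to exllamav2's conversion script to re-quantize at a different bitrate without repeating the measurement pass.)

```python
# Minimal sketch: load the 8.0bpw EXL2 quant with exllamav2.
# The model_dir path is a placeholder; classes follow exllamav2's example scripts.
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

config = ExLlamaV2Config()
config.model_dir = "/models/EVA-Qwen2.5-14B-v0.0-8.0bpw-h8-exl2"  # local download (placeholder)
config.prepare()

model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)  # lazy cache so load_autosplit can place layers
model.load_autosplit(cache)               # split weights across available GPUs automatically

tokenizer = ExLlamaV2Tokenizer(config)
generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)
```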
# Original Model Card
**EVA Qwen2.5 14B**
<p>
An RP/storywriting specialist model, a full-parameter finetune of Qwen2.5-14B on a mixture of synthetic and natural data.<br>
It uses the Celeste 70B 0.1 data mixture, greatly expanding it to improve the versatility, creativity, and "flavor" of the resulting model.<br>
</p>
<p>Prompt format is ChatML.</p>
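For reference, a standard ChatML prompt looks like this (generic ChatML, not specific to this model):

```
<|im_start|>system
{system prompt}<|im_end|>
<|im_start|>user
{user message}<|im_end|>
<|im_start|>assistant
```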
<h3>Recommended sampler values:</h3>
<ul>
<li>Temperature: 0.7</li>
<li>Top-P: 0.8</li>
<li>Repetition Penalty: 1.03</li>
</ul>
<p>The model appears to prefer lower temperatures (0.8 and below) and absolutely hates the Min-P sampler.</p>
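Applied to the exllamav2 generator from the loading sketch above, those values might look like the following (attribute names per exllamav2's sampler settings; zeroing min_p assumes 0 disables it):

```python
# Recommended sampler values from this card, applied to the generator built above.
from exllamav2.generator import ExLlamaV2Sampler

settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.7
settings.top_p = 0.8
settings.token_repetition_penalty = 1.03
settings.min_p = 0.0  # keep Min-P off; the model reportedly responds badly to it

# ChatML-formatted prompt (see above); 256 new tokens is an arbitrary example length.
prompt = (
    "<|im_start|>system\nYou are a creative writing assistant.<|im_end|>\n"
    "<|im_start|>user\nWrite a short scene set in a rainy city.<|im_end|>\n"
    "<|im_start|>assistant\n"
)
print(generator.generate_simple(prompt, settings, 256))
```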
<h3>Recommended SillyTavern presets (via CalamitousFelicitousness):</h3>
- [Context](https://huggingface.co/EVA-UNIT-01/EVA-Yi-1.5-9B-32K-V1/blob/main/%5BChatML%5D%20Roleplay-v1.9%20Context.json)
- [Instruct and System Prompt](https://huggingface.co/EVA-UNIT-01/EVA-Yi-1.5-9B-32K-V1/blob/main/%5BChatML%5D%20Roleplay-v1.9%20Instruct.json)
<h3>Training data:</h3>
<ul>
<li>Celeste 70B 0.1 data mixture minus the Opus Instruct subset. See that model's <a href="https://huggingface.co/nothingiisreal/L3.1-70B-Celeste-V0.1-BF16">card</a> for details.</li>
<li>Kalomaze's Opus_Instruct_25k dataset, filtered for refusals.</li>
<li>A subset (1k rows) of ChatGPT-4o-Writing-Prompts by Gryphe.</li>
<li>A subset (2k rows) of Sonnet3.5-Charcard-Roleplay by Gryphe.</li>
<li>A cleaned subset (~3k rows) of shortstories_synthlabels by Auri.</li>
<li>Synthstruct and SynthRP datasets by Epiculous.</li>
</ul>
<h3>Hardware used:</h3>
<ul><li>4xA6000 for 14 hours.</li></ul><br>
The model was trained by Kearm and Auri.
<h4>Special thanks:</h4><ul>
<li>to Gryphe, Lemmy, Kalomaze, Nopm and Epiculous for the data</li>
<li>to Alpindale for helping with FFT config for Qwen2.5</li>
<li>and to InfermaticAI's community for their continued support for our endeavors</li></ul>