Tifa-Deepsex-14b-CoT-Chat

A Hugging Face format conversion of the GGUF from here: https://huggingface.co/ValueFX9507/Tifa-Deepsex-14b-CoT
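
If you just want to run the converted weights, they should load through the standard transformers API. A minimal sketch, assuming the repo id from this page and that the tokenizer ships a chat template; dtype and device settings are illustrative:

```python
# Minimal sketch: load the HF-format conversion with transformers.
# Repo id is this repo; adjust dtype/device_map for your hardware.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "Downtown-Case/Tifa-Deepsex-14b-CoT-Chat-HF"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype="auto", device_map="auto")

# Assumes the tokenizer includes a chat template (typical for a "Chat" variant).
messages = [{"role": "user", "content": "Write the opening scene of a slow-burn mystery novel."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=512)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```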

For merging, requantizing, fine-tuning, and such. A partial translation of the model card:

Standard data training, a mature RL strategy, and additional anti-repetition reinforcement learning; suitable for normal use, with normal output text quality and occasionally divergent thinking.

  • Incremental training on 0.4T of novel content

  • 100K SFT samples generated by TifaMax, 10K SFT samples generated by DeepseekR1, and 2K high-quality human-written samples

  • 30K DPO reinforcement-learning pairs generated by TifaMax, to prevent repetition, strengthen contextual coherence, and improve political safety

  • 16K ultra-long-context training

  • Random-truncation training to enhance robustness (see the sketch after this list)

  • Full-parameter fine-tuning on 8×H20 GPUs
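
I haven't seen the training code, but "random truncation training" presumably means cutting each training sequence at a random length so the model stays robust to abruptly cut-off context. A hypothetical sketch of that idea (`random_truncate` is my own illustrative helper, not from the original card):

```python
import random

def random_truncate(token_ids, min_len=32):
    """Cut a token sequence at a random point, keeping at least min_len tokens."""
    if len(token_ids) <= min_len:
        return token_ids
    cut = random.randint(min_len, len(token_ids))
    return token_ids[:cut]

# Example: applied per sample, before batching/padding.
sample = list(range(16_000))         # stand-in for a tokenized 16K-context sample
print(len(random_truncate(sample)))  # anywhere between 32 and 16000
```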

Personal observations:

Don't let the DeepSex name fool you. This model is strong at SFW, English, long-form (>32K context) storywriting, especially for a 14B, with good comprehension of the overall plot, its details, and the current state of the story. This is interesting, as it was "only" trained at 16K context and (seemingly) mostly on Chinese text.

Subjectively, the "crazy" version feels a little stronger, so I am mostly testing with that.
