---
license: apache-2.0
language:
  - en
base_model:
  - mkurman/Qwen2.5-14B-DeepSeek-R1-1M
  - huihui-ai/DeepSeek-R1-Distill-Qwen-14B-abliterated-v2
library_name: mlx
tags:
  - merge
  - text-generation-inference
  - code
---

An MLX bfloat16 model with a 1M-token context length, uncensored.

# Model Merge: DeepSeek-R1-Distill-Qwen-14B-abliterated-v2

## Description

This model is a TIES merge of huihui-ai/DeepSeek-R1-Distill-Qwen-14B-abliterated-v2 into the base model mkurman/Qwen2.5-14B-DeepSeek-R1-1M.

## Merge Recipe

  - model: "huihui-ai/DeepSeek-R1-Distill-Qwen-14B-abliterated-v2"
    parameters:
      weight: 1
      density: 1

merge_method: ties
base_model: "mkurman/Qwen2.5-14B-DeepSeek-R1-1M"
parameters:
  density: 1
  normalize: true
  int8_mask: true
dtype: bfloat16

The models were merged using the TIES method, as specified by `merge_method: ties` in the recipe above.
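
The recipe follows mergekit's YAML schema. Assuming mergekit was the tool used (the original card does not name it), a merge like this could be reproduced by saving the recipe as `recipe.yaml` (an illustrative file name) and running:

```bash
# Hypothetical reproduction of the merge with mergekit's YAML driver.
# The output directory name is illustrative.
pip install mergekit
mergekit-yaml recipe.yaml ./Unhinged-Qwen2.5-R1
```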

## Conversion to MLX Format

The model was converted to the MLX format with bfloat16 precision using the following command:

```bash
mlx_lm.convert --hf-path FiditeNemini/Qwen2.5-14B-DeepSeek-R1-1M --mlx-path ./Unhinged-Qwen2.5-R1.bf16 --dtype bfloat16 -q --q-bits 16
```
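
The same conversion can also be run from Python. A minimal sketch, assuming mlx-lm's `convert` helper mirrors the CLI flags used above:

```python
from mlx_lm import convert

# Mirrors the CLI invocation above; paths and settings are taken
# from that command, not additional recommendations.
convert(
    hf_path="FiditeNemini/Qwen2.5-14B-DeepSeek-R1-1M",
    mlx_path="./Unhinged-Qwen2.5-R1.bf16",
    dtype="bfloat16",
    quantize=True,  # corresponds to -q
    q_bits=16,      # corresponds to --q-bits 16
)
```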

## Usage Example

You can use this model with the following code snippet:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("FiditeNemini/Unhinged-Qwen2.5-R1.bf16")
tokenizer = AutoTokenizer.from_pretrained("FiditeNemini/Unhinged-Qwen2.5-R1.bf16")
```
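
Since the weights are stored in MLX format, the model can also be run natively with mlx-lm. A minimal sketch; the prompt and `max_tokens` value are illustrative:

```python
from mlx_lm import load, generate

# Load the MLX-format weights and tokenizer from the Hub.
model, tokenizer = load("FiditeNemini/Unhinged-Qwen2.5-R1.bf16")

# Illustrative prompt and generation length.
response = generate(
    model,
    tokenizer,
    prompt="Explain the TIES model-merging method in one paragraph.",
    max_tokens=256,
)
print(response)
```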

## Details

- Model type: CausalLM
- Context length: 1M tokens (per the 1M-context base model)
- License: Apache 2.0

## Keywords

- DeepSeek-R1-Distill
- Qwen2.5
- Abliterated
- LLM
- 1M context