Eldadalbajob committed
Commit 134bacf · verified · 1 Parent(s): c963155

Upload README.md with huggingface_hub

Files changed (1)
  1. README.md +66 -0
README.md ADDED
@@ -0,0 +1,66 @@
---
license: apache-2.0
library_name: transformers
language:
- en
- fr
- zh
- de
tags:
- programming
- code generation
- code
- codeqwen
- moe
- coding
- coder
- qwen2
- chat
- qwen
- qwen-coder
- Qwen3-30B-A3B-Thinking-2507
- Qwen3-30B-A3B
- mixture of experts
- 128 experts
- 8 active experts
- 256k context
- qwen3
- finetune
- brainstorm 20x
- brainstorm
- thinking
- reasoning
- uncensored
- abliterated
- qwen3_moe
- mlx
- mlx-my-repo
base_model: DavidAU/Qwen3-42B-A3B-2507-Thinking-Abliterated-uncensored-TOTAL-RECALL-v2-Medium-MASTER-CODER
pipeline_tag: text-generation
---

# Eldadalbajob/Qwen3-42B-A3B-2507-Thinking-Abliterated-uncensored-TOTAL-RECALL-v2-Medium-MASTER-CODER-mlx-4Bit

The model [Eldadalbajob/Qwen3-42B-A3B-2507-Thinking-Abliterated-uncensored-TOTAL-RECALL-v2-Medium-MASTER-CODER-mlx-4Bit](https://huggingface.co/Eldadalbajob/Qwen3-42B-A3B-2507-Thinking-Abliterated-uncensored-TOTAL-RECALL-v2-Medium-MASTER-CODER-mlx-4Bit) was converted to MLX format from [DavidAU/Qwen3-42B-A3B-2507-Thinking-Abliterated-uncensored-TOTAL-RECALL-v2-Medium-MASTER-CODER](https://huggingface.co/DavidAU/Qwen3-42B-A3B-2507-Thinking-Abliterated-uncensored-TOTAL-RECALL-v2-Medium-MASTER-CODER) using mlx-lm version **0.26.4**.
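
For reference, conversions like this are typically produced with the `mlx_lm.convert` tool. The exact invocation used for this upload is not recorded; the following is a minimal sketch using the standard mlx-lm options, with the output path chosen here for illustration:

```bash
# Sketch: convert the original checkpoint to quantized MLX weights.
# -q enables quantization; --q-bits 4 matches this repo's 4Bit suffix.
python -m mlx_lm.convert \
    --hf-path DavidAU/Qwen3-42B-A3B-2507-Thinking-Abliterated-uncensored-TOTAL-RECALL-v2-Medium-MASTER-CODER \
    --mlx-path ./mlx-4bit \
    -q --q-bits 4
```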

## Use with mlx

```bash
pip install mlx-lm
```

```python
from mlx_lm import load, generate

# Download (if needed) and load the 4-bit MLX weights and tokenizer.
model, tokenizer = load("Eldadalbajob/Qwen3-42B-A3B-2507-Thinking-Abliterated-uncensored-TOTAL-RECALL-v2-Medium-MASTER-CODER-mlx-4Bit")

prompt = "hello"

# Wrap the prompt in the model's chat template when one is defined.
if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
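
Equivalently, a one-off generation can be run from the command line with the standard mlx-lm CLI; the prompt below is illustrative:

```bash
# Sketch: generate directly from the shell instead of the Python API.
python -m mlx_lm.generate \
    --model Eldadalbajob/Qwen3-42B-A3B-2507-Thinking-Abliterated-uncensored-TOTAL-RECALL-v2-Medium-MASTER-CODER-mlx-4Bit \
    --prompt "hello"
```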