HawkonLi committed
Commit 45e02cb · verified · 1 Parent(s): 26e4b09

Update README.md

Files changed (1):
  1. README.md +56 -5

README.md CHANGED
@@ -1,5 +1,56 @@
- ---
- license: other
- license_name: tencent-license
- license_link: https://huggingface.co/tencent/Tencent-Hunyuan-Large/resolve/main/LICENSE.txt
- ---
+ ---
+ license: other
+ license_name: tencent-license
+ license_link: https://huggingface.co/tencent/Tencent-Hunyuan-Large/resolve/main/LICENSE.txt
+ language:
+ - en
+ base_model:
+ - tencent/Tencent-Hunyuan-Large
+ ---
+
+ # l-haok/Hunyuan-A52B-Instruct-2bit
+
+ ## Introduction
+
+ This model was converted to MLX format from [tencent-community/Hunyuan-A52B-Instruct](https://huggingface.co/tencent-community/Hunyuan-A52B-Instruct).
+
+ **mlx-lm version:** **0.21.0**
+
+ **Conversion parameters:**
+
+ - q_group_size: 128
+ - q_bits: 2
+
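+ For reference, a 2-bit conversion with these settings can be reproduced with mlx-lm's `mlx_lm.convert` entry point. The following is a minimal sketch of the assumed invocation (the output directory name is illustrative), not necessarily the exact command used for this upload:
+
+ ```bash
+ # Quantize the original weights to 2 bits with a group size of 128.
+ python -m mlx_lm.convert \
+   --hf-path tencent-community/Hunyuan-A52B-Instruct \
+   --mlx-path Hunyuan-A52B-Instruct-2bit \
+   -q --q-bits 2 --q-group-size 128
+ ```
+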
+
+ Based on testing, this model can **BARELY** run local inference on a **MacBook Pro 16-inch (M3 Max, 128GB RAM)**. The following command must be executed before running the model:
+
+ ```bash
+ sudo sysctl iogpu.wired_limit_mb=105000
+ ```
+
+ > [!NOTE]
+ > This command requires macOS 15.0 or higher to work.
+
+ This model needs 104,259 MB of memory, which exceeds the default recommended maximum of 98,384 MB on an M3 Max with 128GB of RAM. The command above therefore raises the system's GPU wired-memory limit so the model can be loaded. Please note that this may cause unexpected system lag or interruptions.
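+
+ The sysctl change does not persist across reboots, and it can be inspected or reverted at any time. A small sketch (setting the value back to 0 should hand control of the limit back to macOS):
+
+ ```bash
+ # Show the current wired-memory limit in MB (0 means the system default is in effect).
+ sysctl iogpu.wired_limit_mb
+
+ # Restore the default after you are done running the model.
+ sudo sysctl iogpu.wired_limit_mb=0
+ ```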
+
+ ## Use with mlx
+
+ ```bash
+ pip install mlx-lm
+ ```
+
+ ```python
+ from mlx_lm import load, generate
+
+ # Load the quantized model lazily to keep peak memory down during startup.
+ model, tokenizer = load(
+     "l-haok/Hunyuan-A52B-Instruct-2bit",
+     tokenizer_config={"eos_token": "<|endoftext|>", "trust_remote_code": True},
+     lazy=True,
+ )
+
+ # "My Bluetooth earphones are broken; should I see a dentist or an ENT doctor?"
+ prompt = "蓝牙耳机坏了,该去看牙科还是耳科"
+
+ # Wrap the prompt in the model's chat template if one is available.
+ if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None:
+     messages = [{"role": "user", "content": prompt}]
+     prompt = tokenizer.apply_chat_template(
+         messages, tokenize=False, add_generation_prompt=True
+     )
+
+ response = generate(model, tokenizer, prompt=prompt, verbose=True)
+ ```
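+
+ Alternatively, mlx-lm ships a command-line generator for one-off runs. A minimal sketch (the prompt is just an example; depending on your mlx-lm version you may need additional flags to mirror the `tokenizer_config` used above):
+
+ ```bash
+ python -m mlx_lm.generate \
+   --model l-haok/Hunyuan-A52B-Instruct-2bit \
+   --prompt "Why is the sky blue?" \
+   --max-tokens 256
+ ```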