---
license: apache-2.0
datasets:
- bigcode/the-stack
- bigcode/the-stack-v2
- bigcode/starcoderdata
- bigcode/commitpack
library_name: transformers
tags:
- code
- mlx
- mlx-my-repo
base_model: JetBrains/Mellum-4b-sft-python
model-index:
- name: Mellum-4b-sft-python
  results:
  - task:
      type: text-generation
    dataset:
      name: RepoBench 1.1 (Python)
      type: tianyang/repobench_python_v1.1
    metrics:
    - type: exact_match
      value: 0.2837
      name: EM
      verified: false
    - type: exact_match
      value: 0.2987
      name: EM ≤ 8k
      verified: false
    - type: exact_match
      value: 0.2924
      name: EM
      verified: false
    - type: exact_match
      value: 0.306
      name: EM
      verified: false
    - type: exact_match
      value: 0.2977
      name: EM
      verified: false
    - type: exact_match
      value: 0.268
      name: EM
      verified: false
    - type: exact_match
      value: 0.2543
      name: EM
      verified: false
  - task:
      type: text-generation
    dataset:
      name: SAFIM
      type: gonglinyuan/safim
    metrics:
    - type: pass@1
      value: 0.4212
      name: pass@1
      verified: false
    - type: pass@1
      value: 0.3316
      name: pass@1
      verified: false
    - type: pass@1
      value: 0.3611
      name: pass@1
      verified: false
    - type: pass@1
      value: 0.571
      name: pass@1
      verified: false
  - task:
      type: text-generation
    dataset:
      name: HumanEval Infilling (Single-Line)
      type: loubnabnl/humaneval_infilling
    metrics:
    - type: pass@1
      value: 0.8045
      name: pass@1
      verified: false
    - type: pass@1
      value: 0.4819
      name: pass@1
      verified: false
    - type: pass@1
      value: 0.3768
      name: pass@1
      verified: false
---

# cnfusion/Mellum-4b-sft-python-mlx-fp16

The model [cnfusion/Mellum-4b-sft-python-mlx-fp16](https://huggingface.co/cnfusion/Mellum-4b-sft-python-mlx-fp16) was converted to MLX format from [JetBrains/Mellum-4b-sft-python](https://huggingface.co/JetBrains/Mellum-4b-sft-python) using mlx-lm version **0.22.3**.
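A conversion like this can be reproduced with the `mlx_lm.convert` CLI; the snippet below is a sketch (the output path is an assumption, and omitting the quantization flag keeps the weights in fp16):

```bash
# Sketch: convert the original Hugging Face checkpoint to MLX format without quantization.
pip install mlx-lm==0.22.3
mlx_lm.convert \
    --hf-path JetBrains/Mellum-4b-sft-python \
    --mlx-path Mellum-4b-sft-python-mlx-fp16
```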

## Use with mlx

```bash
pip install mlx-lm
```
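mlx-lm also ships a command-line generator, so the model can be tried without writing any Python (the prompt below is just an illustration):

```bash
# Generate a completion directly from the command line.
mlx_lm.generate --model cnfusion/Mellum-4b-sft-python-mlx-fp16 --prompt "def fibonacci(n):"
```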

```python
from mlx_lm import load, generate

# Load the converted model and its tokenizer from the Hugging Face Hub.
model, tokenizer = load("cnfusion/Mellum-4b-sft-python-mlx-fp16")

prompt = "hello"

# Apply the chat template if the tokenizer provides one; otherwise use the raw prompt.
if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
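Since Mellum-4b-sft-python is tuned for Python code completion, a more representative prompt is a code prefix for the model to continue. The function and `max_tokens` value below are illustrative only:

```python
from mlx_lm import load, generate

model, tokenizer = load("cnfusion/Mellum-4b-sft-python-mlx-fp16")

# Plain prefix completion: the model continues the given Python source code.
code_prefix = 'def binary_search(arr, target):\n    """Return the index of target in sorted arr, or -1."""\n'
completion = generate(model, tokenizer, prompt=code_prefix, max_tokens=128)
print(completion)
```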