---
language:
- en
license: other
library_name: transformers
tags:
- chat
- mlx
- mlx-my-repo
license_name: mrl
pipeline_tag: text-generation
datasets:
- anthracite-org/c2_logs_32k_mistral-v3_v1.2_no_system
- anthracite-org/kalo-opus-instruct-22k-no-refusal-no-system
- anthracite-org/kalo-opus-instruct-3k-filtered-no-system
- anthracite-org/nopm_claude_writing_fixed
- anthracite-org/kalo_opus_misc_240827_no_system
- anthracite-org/kalo_misc_part2_no_system
base_model: anthracite-org/magnum-v4-22b
model-index:
- name: magnum-v4-22b
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: IFEval (0-Shot)
      type: HuggingFaceH4/ifeval
      args:
        num_few_shot: 0
    metrics:
    - type: inst_level_strict_acc and prompt_level_strict_acc
      value: 56.29
      name: strict accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=anthracite-org/magnum-v4-22b
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: BBH (3-Shot)
      type: BBH
      args:
        num_few_shot: 3
    metrics:
    - type: acc_norm
      value: 35.55
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=anthracite-org/magnum-v4-22b
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MATH Lvl 5 (4-Shot)
      type: hendrycks/competition_math
      args:
        num_few_shot: 4
    metrics:
    - type: exact_match
      value: 17.6
      name: exact match
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=anthracite-org/magnum-v4-22b
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GPQA (0-shot)
      type: Idavidrein/gpqa
      args:
        num_few_shot: 0
    metrics:
    - type: acc_norm
      value: 10.4
      name: acc_norm
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=anthracite-org/magnum-v4-22b
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MuSR (0-shot)
      type: TAUR-Lab/MuSR
      args:
        num_few_shot: 0
    metrics:
    - type: acc_norm
      value: 13.43
      name: acc_norm
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=anthracite-org/magnum-v4-22b
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU-PRO (5-shot)
      type: TIGER-Lab/MMLU-Pro
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 31.44
      name: accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=anthracite-org/magnum-v4-22b
      name: Open LLM Leaderboard
---

# isv2lf/magnum-v4-22b-Q4-mlx

The model [isv2lf/magnum-v4-22b-Q4-mlx](https://huggingface.co/isv2lf/magnum-v4-22b-Q4-mlx) was converted to MLX format from [anthracite-org/magnum-v4-22b](https://huggingface.co/anthracite-org/magnum-v4-22b) using mlx-lm version **0.20.5**.
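
A comparable 4-bit conversion can be reproduced locally with mlx-lm's `convert` utility. This is a minimal sketch, not a record of the exact command used for this repo: the output directory name and quantization settings below are illustrative assumptions, and argument names may vary slightly between mlx-lm versions.

```python
from mlx_lm import convert

# Download the original weights, quantize to 4 bits, and write an
# MLX-format model directory. mlx_path and q_bits are assumptions here;
# the settings actually used for this repo are not recorded in the card.
convert(
    "anthracite-org/magnum-v4-22b",
    mlx_path="magnum-v4-22b-Q4-mlx",
    quantize=True,
    q_bits=4,
)
```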

## Use with mlx

```bash
pip install mlx-lm
```

```python
from mlx_lm import load, generate

# Download (if needed) and load the quantized model and its tokenizer.
model, tokenizer = load("isv2lf/magnum-v4-22b-Q4-mlx")

prompt = "hello"

# If the tokenizer ships a chat template, wrap the prompt in the chat
# format the model was trained on before generating.
if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
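
For interactive use, mlx-lm also exposes `stream_generate`, which yields output incrementally instead of returning the whole completion at once. A minimal sketch, assuming an mlx-lm release around 0.20.x where `stream_generate` yields response objects with a `.text` field (older releases yield plain strings); the same chat-template wrapping as above applies to the prompt:

```python
from mlx_lm import load, stream_generate

model, tokenizer = load("isv2lf/magnum-v4-22b-Q4-mlx")

# Print each decoded chunk as soon as it is generated.
for chunk in stream_generate(model, tokenizer, prompt="hello", max_tokens=256):
    print(chunk.text, end="", flush=True)
print()
```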