---
license: apache-2.0
datasets:
- bigcode/the-stack
- bigcode/the-stack-v2
- bigcode/starcoderdata
- bigcode/commitpack
library_name: mlx
tags:
- code
- mlx
base_model: JetBrains/Mellum-4b-base
pipeline_tag: text-generation
model-index:
- name: Mellum-4b-base
  results:
  - task:
      type: text-generation
    dataset:
      name: RepoBench 1.1 (Python)
      type: tianyang/repobench_python_v1.1
    metrics:
    - type: exact_match
      value: 0.2591
      name: EM
      verified: false
    - type: exact_match
      value: 0.2797
      name: EM ≤ 8k
      verified: false
    - type: exact_match
      value: 0.282
      name: EM
      verified: false
    - type: exact_match
      value: 0.2795
      name: EM
      verified: false
    - type: exact_match
      value: 0.2777
      name: EM
      verified: false
    - type: exact_match
      value: 0.2453
      name: EM
      verified: false
    - type: exact_match
      value: 0.211
      name: EM
      verified: false
  - task:
      type: text-generation
    dataset:
      name: RepoBench 1.1 (Java)
      type: tianyang/repobench_java_v1.1
    metrics:
    - type: exact_match
      value: 0.2858
      name: EM
      verified: false
    - type: exact_match
      value: 0.3108
      name: EM ≤ 8k
      verified: false
    - type: exact_match
      value: 0.3202
      name: EM
      verified: false
    - type: exact_match
      value: 0.3212
      name: EM
      verified: false
    - type: exact_match
      value: 0.291
      name: EM
      verified: false
    - type: exact_match
      value: 0.2492
      name: EM
      verified: false
    - type: exact_match
      value: 0.2474
      name: EM
      verified: false
  - task:
      type: text-generation
    dataset:
      name: SAFIM
      type: gonglinyuan/safim
    metrics:
    - type: pass@1
      value: 0.3811
      name: pass@1
      verified: false
    - type: pass@1
      value: 0.253
      name: pass@1
      verified: false
    - type: pass@1
      value: 0.3839
      name: pass@1
      verified: false
    - type: pass@1
      value: 0.5065
      name: pass@1
      verified: false
  - task:
      type: text-generation
    dataset:
      name: HumanEval Infilling (Single-Line)
      type: loubnabnl/humaneval_infilling
    metrics:
    - type: pass@1
      value: 0.6621
      name: pass@1
      verified: false
    - type: pass@1
      value: 0.3852
      name: pass@1
      verified: false
    - type: pass@1
      value: 0.2969
      name: pass@1
      verified: false
---
# mlx-community/Mellum-4b-base
This model [mlx-community/Mellum-4b-base](https://huggingface.co/mlx-community/Mellum-4b-base) was
converted to MLX format from [JetBrains/Mellum-4b-base](https://huggingface.co/JetBrains/Mellum-4b-base)
using mlx-lm version **0.25.2**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate

# Download (if needed) and load the MLX-converted weights and tokenizer.
model, tokenizer = load("mlx-community/Mellum-4b-base")

# Mellum-4b-base is a code-completion base model, so a code prefix is a
# more representative prompt than conversational text.
prompt = "def fibonacci(n):"

# Base models typically ship without a chat template; this branch only
# applies if one is present.
if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
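The benchmark results above include infilling tasks (SAFIM, HumanEval Infilling), which use fill-in-the-middle (FIM) prompting rather than plain left-to-right completion. As a minimal sketch of how such a prompt is assembled, assuming StarCoder-style FIM markers (`<fim_prefix>`, `<fim_suffix>`, `<fim_middle>` — an assumption here; verify the actual special tokens in this model's tokenizer before relying on them):

```python
def build_fim_prompt(prefix: str, suffix: str) -> str:
    """Assemble a fill-in-the-middle prompt from the code before and
    after the gap. The marker tokens below are an assumption
    (StarCoder-style); check the model's tokenizer special tokens,
    since Mellum may use different markers."""
    return f"<fim_prefix>{prefix}<fim_suffix>{suffix}<fim_middle>"

# The model is asked to generate the body that fits between the
# function signature and the call site below it.
prefix = "def add(a, b):\n    return "
suffix = "\n\nprint(add(2, 3))"
fim_prompt = build_fim_prompt(prefix, suffix)
print(fim_prompt)
```

The resulting string is passed to `generate` as the `prompt` argument in place of a plain completion prompt.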