---
license: apache-2.0
datasets:
- bigcode/the-stack
- bigcode/the-stack-v2
- bigcode/starcoderdata
- bigcode/commitpack
library_name: mlx
tags:
- code
- mlx
base_model: JetBrains/Mellum-4b-base
pipeline_tag: text-generation
model-index:
- name: Mellum-4b-base
  results:
  - task:
      type: text-generation
    dataset:
      name: RepoBench 1.1 (Python)
      type: tianyang/repobench_python_v1.1
    metrics:
    - type: exact_match
      value: 0.2591
      name: EM
      verified: false
    - type: exact_match
      value: 0.2797
      name: EM ≤ 8k
      verified: false
    - type: exact_match
      value: 0.282
      name: EM
      verified: false
    - type: exact_match
      value: 0.2795
      name: EM
      verified: false
    - type: exact_match
      value: 0.2777
      name: EM
      verified: false
    - type: exact_match
      value: 0.2453
      name: EM
      verified: false
    - type: exact_match
      value: 0.211
      name: EM
      verified: false
  - task:
      type: text-generation
    dataset:
      name: RepoBench 1.1 (Java)
      type: tianyang/repobench_java_v1.1
    metrics:
    - type: exact_match
      value: 0.2858
      name: EM
      verified: false
    - type: exact_match
      value: 0.3108
      name: EM ≤ 8k
      verified: false
    - type: exact_match
      value: 0.3202
      name: EM
      verified: false
    - type: exact_match
      value: 0.3212
      name: EM
      verified: false
    - type: exact_match
      value: 0.291
      name: EM
      verified: false
    - type: exact_match
      value: 0.2492
      name: EM
      verified: false
    - type: exact_match
      value: 0.2474
      name: EM
      verified: false
  - task:
      type: text-generation
    dataset:
      name: SAFIM
      type: gonglinyuan/safim
    metrics:
    - type: pass@1
      value: 0.3811
      name: pass@1
      verified: false
    - type: pass@1
      value: 0.253
      name: pass@1
      verified: false
    - type: pass@1
      value: 0.3839
      name: pass@1
      verified: false
    - type: pass@1
      value: 0.5065
      name: pass@1
      verified: false
  - task:
      type: text-generation
    dataset:
      name: HumanEval Infilling (Single-Line)
      type: loubnabnl/humaneval_infilling
    metrics:
    - type: pass@1
      value: 0.6621
      name: pass@1
      verified: false
    - type: pass@1
      value: 0.3852
      name: pass@1
      verified: false
    - type: pass@1
      value: 0.2969
      name: pass@1
      verified: false
---

# mlx-community/Mellum-4b-base

This model [mlx-community/Mellum-4b-base](https://huggingface.co/mlx-community/Mellum-4b-base) was converted to MLX format from [JetBrains/Mellum-4b-base](https://huggingface.co/JetBrains/Mellum-4b-base) using mlx-lm version **0.25.2**.

## Use with mlx

```bash
pip install mlx-lm
```

```python
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/Mellum-4b-base")

prompt = "hello"

if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
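
The model can also be run from the command line via the `mlx_lm.generate` entry point that ships with mlx-lm. This is a minimal sketch: the prompt and `--max-tokens` value below are illustrative placeholders, chosen because Mellum is a base code-completion model, so a plain code prefix is a reasonable prompt.

```bash
# Illustrative only: the prompt and --max-tokens value are placeholders.
mlx_lm.generate --model mlx-community/Mellum-4b-base \
  --prompt "def fibonacci(n):" \
  --max-tokens 128
```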