---
license: apache-2.0
---

# Model Description
Mellum-4b-base is the first open-source LLM for code-related tasks released by JetBrains.
The model was trained specifically for the code completion task on more than 3 trillion tokens, with an 8,192-token context window, across multiple programming languages.
We employed a LLaMA-like architecture with 4B parameters in total, without Grouped Query Attention, which makes the model convenient for both efficient inference in the cloud (e.g., with vLLM) and fast local inference (e.g., with llama.cpp or Ollama).
Mellum was trained with AMP using bf16 precision, and the same bf16 version is uploaded to Hugging Face for public use.
It is designed for integration into professional developer tooling (e.g., intelligent code suggestions in IDEs), AI code assistants, and research applications in code understanding and generation. The published model is a base model, meaning that it does not excel at downstream tasks out of the box; however, it is fully suitable for SFT/RL fine-tuning.

# Training Data
- Total training tokens: 3 trillion
- Corpus: StackV1, Starcoderdata, StackV2, CommitPack, English Wikipedia

# Training Details
- Context window: 8,192 tokens
- Optimization: standard language-modeling objective adapted for code completion and infilling
- Hardware: a cluster of 256 NVIDIA H200 GPUs connected with InfiniBand
- Training duration: ~20 days

# Benchmarks
In addition to the base model's scores, we provide scores for a Mellum fine-tuned for Python, to give users a sense of the model's potential capabilities.

## RepoBench
- Type: single-line
- Languages: Python and Java
- Metric: Exact Match (EM), %

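For reference, single-line Exact Match simply counts the percentage of predicted lines that are identical to the ground-truth line. A minimal sketch (a hypothetical helper, not part of the benchmark harness):

```python
def exact_match(predictions: list[str], references: list[str]) -> float:
    """Percentage of predictions identical to the reference line,
    after stripping surrounding whitespace."""
    assert len(predictions) == len(references)
    hits = sum(p.strip() == r.strip() for p, r in zip(predictions, references))
    return 100.0 * hits / len(references)

print(exact_match(["x = 1", "y = 2"], ["x = 1", "y = 3"]))  # → 50.0
```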
Python Subset:

| Model                | 2K Context | 4K Context | 8K Context |
|----------------------|------------|------------|------------|
| Mellum-4b-sft-python | 28.09%     | 30.91%     | 30.42%     |
| Mellum-4b-base       | 26.68%     | 27.58%     | 26.89%     |

Java Subset:

| Model          | 2K Context | 4K Context | 8K Context |
|----------------|------------|------------|------------|
| Mellum-4b-base | 33.15%     | 33.48%     | 27.79%     |

## SAFIM
- Type: mix of multi-line and single-line
- Languages: multi-language
- Metric: pass@1, %

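pass@1 here is the fraction of tasks for which a sampled completion passes the unit tests. With n samples per task, of which c pass, the standard unbiased pass@k estimator (from the HumanEval methodology) reduces to c/n at k=1; a sketch:

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator: 1 - C(n-c, k) / C(n, k),
    for n samples per task of which c pass the tests."""
    if n - c < k:
        return 1.0  # every size-k draw contains at least one passing sample
    return 1.0 - comb(n - c, k) / comb(n, k)

print(pass_at_k(10, 3, 1))  # → 0.3, i.e. c/n for k=1
```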
| Model                | Algorithmic | Control | API    | Average |
|----------------------|-------------|---------|--------|---------|
| Mellum-4b-sft-python | 33.16%      | 36.11%  | 57.10% | 42.12%  |
| Mellum-4b-base       | 25.30%      | 38.39%  | 50.65% | 38.11%  |

## HumanEval Infilling
- Type: single-line and multi-line
- Languages: Python
- Metric: pass@1, %

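In an infilling task the model sees a prefix and a suffix and must generate the missing middle; a candidate passes if the reassembled code satisfies the task's tests. An illustrative sketch of how a single infill would be scored (toy example, not the benchmark harness):

```python
# The model is given `prefix` and `suffix` and must produce the missing middle.
prefix = "def add(a, b):\n    "
suffix = "\n"

def passes(candidate: str) -> bool:
    """Execute the reassembled function and check it against a unit test."""
    namespace: dict = {}
    exec(prefix + candidate + suffix, namespace)
    return namespace["add"](2, 3) == 5

print(passes("return a + b"))  # → True
print(passes("return a - b"))  # → False
```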
| Model                | Single-line | Multi-line | Random Span |
|----------------------|-------------|------------|-------------|
| Mellum-4b-sft-python | 80.45%      | 48.19%     | 37.68%      |
| Mellum-4b-base       | 66.21%      | 38.52%     | 29.70%      |

# Intended Use
- Integration into IDEs and code editors to power code completion.
- Research on code generation, AI pair programming, and infilling techniques.
- Educational scenarios for fine-tuning code models.

# Limitations
- Biases: the model may reflect biases present in public codebases; for example, it will likely produce code similar in style to open-source repositories.
- Security: code suggestions should not be assumed to be secure or free of vulnerabilities.

# Sample Usage
Here's an example of how to run and sample from the model:

*TODO: Insert sample code here*

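Until official sample code lands here, a minimal sketch using Hugging Face `transformers` might look like the following (the hub id `JetBrains/Mellum-4b-base` and the example prompt are assumptions; adjust to the actual checkpoint name):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical hub id for the published checkpoint; adjust if it differs.
MODEL_ID = "JetBrains/Mellum-4b-base"

def complete(prompt: str, max_new_tokens: int = 64) -> str:
    """Load the bf16 checkpoint and greedily complete `prompt`.
    This is a base code-completion LM, so no chat template is needed."""
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, torch_dtype=torch.bfloat16)
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=max_new_tokens, do_sample=False)
    return tokenizer.decode(outputs[0], skip_special_tokens=True)

if __name__ == "__main__":
    print(complete("def fibonacci(n):\n"))
```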
# Citation
If you use this model, please cite:

```bibtex
@misc{jetbrains_code_completion_llm,
  title={Mellum},
  author={JetBrains},
  year={2025},
}
```

# Contact