topshik committed
Commit e15742f · verified · 1 Parent(s): 0f5ed11

update code instructions and intro

Files changed (1)
1. README.md +75 -22
README.md CHANGED
@@ -3,12 +3,18 @@ license: apache-2.0
 ---
 
 # Model Description
- Mellum-base-4B is the first open-source installation of LLMs for code-related tasks by JetBrains.
- The model is trained specifically for code completion task on >3 trillion tokens with 8192 context window on N programming languages.
- We employed LLaMA-like architecture in total with 4B parameters without using Grouped Query Attention, which makes it convenient for both efficient in inference in cloud (e.g. with vLLM) and fast local inference (e.g. with llama.cpp or Ollama).
- Mellum was trained with AMP using bf16 precision, and the same bf16 version is uploaded to HuggingFace for public usage.
- It is designed for professional developer tooling integration (e.g., intelligent suggestions in IDEs), AI code assistants, and research applications in code understanding and generation.
- Published model is a base model meaning that it does not excel in down-stream tasks, however it is fully suitable for SFT/RL fine-tuning.
 
 # Training Data
 - Total Training Tokens: ~4.2 trillion tokens
@@ -16,12 +22,12 @@ Published model is a base model meaning that it does not excel in down-stream ta
 
 # Training Details
 - Context Window: 8,192 tokens
- - Optimization: Standard language modeling objective adapted for code completion and infilling.
 - Hardware: Cluster of 256 x H200 NVIDIA GPUs with Infiniband
 - Training Duration: ~20 days
 
 # Benchmarks
- In addition to the base model scores, we are providing scores for a Mellum fine-tuned for Python to provide model’s users with some feeling about potential capabilities.
 
 ## RepoBench
 - Type: single-line
@@ -59,30 +65,77 @@ Java Subset:
 | Mellum-4b-sft-python | 80.45% | 48.19% | 37.68% |
 | Mellum-4b-base | 66.21% | 38.52% | 29.70% |
 
- # Intended Use
- - Integration into IDEs and code editors for powering code completion.
- - Research into code generation, AI pair programming, and infilling techniques.
- - Educational scenarios for code models fine-tuning.
-
 # Limitations
 - Biases: May reflect biases present in public codebases. For example, it will likely produce code similar in style to open-source repositories.
 - Security: Code suggestions should not be assumed to be secure or free of vulnerabilities.
 
 # Sample Usage
 Here’s an example of how to run and sample from the model:
- python
- CopyEdit
- *TODO: Insert sample code here*
 
 # Citation
 If you use this model, please cite:
- bibtex
- CopyEdit
- @misc{jetbrains_code_completion_llm,
- title={Mellum},
- author={JetBrains},
 year={2025},
 }
 
 # Contact
-
 ---
 
 # Model Description
+ Mellum-base-4B is JetBrains' first open-source large language model (LLM) optimized for code-related tasks.
+
+ Trained on over 4 trillion tokens with a context window of 8192 tokens across multiple programming languages, Mellum-base-4B is tailored specifically for code completion.
+ The model follows a LLaMA-style architecture with 4 billion parameters and does not use Grouped Query Attention (GQA), making it efficient for both cloud inference (e.g., via vLLM) and local deployment (e.g., using llama.cpp or Ollama).
+
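+ For the cloud path, a minimal vLLM sketch could look like the following (an illustration only: it assumes the `vllm` package is installed and that the model id matches this repository; the prompt and sampling settings are placeholders):
+
+ ```python
+ from vllm import LLM, SamplingParams
+
+ # Load the checkpoint with vLLM; bf16 matches the published weights.
+ llm = LLM(model='mellum-base-4b', dtype='bfloat16')
+
+ # Placeholder sampling settings for a short code completion.
+ params = SamplingParams(temperature=0.2, max_tokens=64)
+
+ outputs = llm.generate(["def fibonacci(n):"], params)
+ print(outputs[0].outputs[0].text)
+ ```
+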
+ Mellum was trained using Automatic Mixed Precision (AMP) with bf16 precision.
+ The uploaded version on Hugging Face retains the bf16 format for public use.
+
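+ When loading with `transformers`, the dtype can be passed explicitly to stay in the checkpoint's native bf16 precision (a minimal sketch; the model id is assumed to match this repository):
+
+ ```python
+ import torch
+ from transformers import AutoModelForCausalLM
+
+ # Load the published bf16 weights without upcasting to fp32.
+ model = AutoModelForCausalLM.from_pretrained(
+     'mellum-base-4b',
+     torch_dtype=torch.bfloat16,
+ )
+ ```
+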
+ Designed for integration into professional developer tooling (e.g., intelligent code suggestions in IDEs), AI-powered coding assistants, and research on code understanding and generation, Mellum is also well-suited for educational applications and fine-tuning experiments.
+
+ This release includes the base model as well as several SFT variants.
+ Keep in mind that the base model is not fine-tuned for downstream tasks out of the box; however, it fully supports supervised fine-tuning (SFT) and reinforcement learning (RL) for adaptation to specific applications, as in the sketch below.
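+
+ As a rough illustration (not our training recipe), a parameter-efficient SFT run could start from a LoRA adapter; the hyperparameters below are placeholders, and `target_modules` assumes LLaMA-style projection names:
+
+ ```python
+ from peft import LoraConfig, get_peft_model
+ from transformers import AutoModelForCausalLM
+
+ model = AutoModelForCausalLM.from_pretrained('mellum-base-4b')
+
+ # Placeholder LoRA hyperparameters; adjust for the target task.
+ lora_config = LoraConfig(
+     r=16,
+     lora_alpha=32,
+     target_modules=["q_proj", "v_proj"],  # assumed LLaMA-style names
+     task_type="CAUSAL_LM",
+ )
+ model = get_peft_model(model, lora_config)
+ model.print_trainable_parameters()  # only adapter weights are trainable
+ ```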
 
 # Training Data
 - Total Training Tokens: ~4.2 trillion tokens
 
 # Training Details
 - Context Window: 8,192 tokens
+ - Optimization: Standard language modeling objective (see the sketch after this list).
 - Hardware: Cluster of 256 x H200 NVIDIA GPUs with Infiniband
 - Training Duration: ~20 days
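+
+ As a rough sketch, the standard objective amounts to next-token cross-entropy (illustrative tensor shapes, not the actual training code):
+
+ ```python
+ import torch.nn.functional as F
+
+ def lm_loss(logits, input_ids):
+     # Causal LM objective: predict token t+1 from the prefix up to t,
+     # i.e. cross-entropy between shifted logits and shifted labels.
+     return F.cross_entropy(
+         logits[:, :-1].reshape(-1, logits.size(-1)),
+         input_ids[:, 1:].reshape(-1),
+     )
+ ```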
 
 # Benchmarks
+ In addition to the base model scores, we provide scores for a Mellum fine-tuned for Python to give users an estimate of the model's potential capabilities.
 
 ## RepoBench
 - Type: single-line
 
 | Mellum-4b-sft-python | 80.45% | 48.19% | 37.68% |
 | Mellum-4b-base | 66.21% | 38.52% | 29.70% |
 
 # Limitations
 - Biases: May reflect biases present in public codebases. For example, it will likely produce code similar in style to open-source repositories.
 - Security: Code suggestions should not be assumed to be secure or free of vulnerabilities.
 
 # Sample Usage
 Here’s an example of how to run and sample from the model:
+
+ ```python
+ from transformers import AutoTokenizer, AutoModelForCausalLM
+
+ # A file prefix to complete; the model continues it from the end.
+ example = """
+ import sys
+ import os
+ import time
+
+ sys.path.append(os.getcwd())
+
+ from cluster.prepare_data import get_headers_pairs_list, write_dist_matrix
+ from cluster.token_edit_distance import get_distance_matrix
+
+ if len(sys.argv) < 3:
+     print(
+         "Too few arguments. You should provide: \n1. dataset_filename" +
+         "\n2. output_data_filename"
+     )
+     sys.exit()
+
+ start = time.perf_counter()
+ dataset_filename_ = sys.argv[1]
+ output_data_filename_ = sys.argv[2]
+
+ headers_pairs = get_headers_pairs_list(dataset_filename_, verbose=True)
+
+ dist_matrix, max_dist = get_distance_matrix(
+     list(map(lambda x: x[1], headers_pairs)),
+     verbose=True
+ )
+
+ write_dist_matrix(dist_matrix, max_dist, output_data_filename_, verbose=True)
+
+ end = time.perf_counter()
+ """
+
+ # Load the tokenizer and model weights from the Hub.
+ tokenizer = AutoTokenizer.from_pretrained('mellum-base-4b')
+ model = AutoModelForCausalLM.from_pretrained('mellum-base-4b')
+
+ # Tokenize the prefix; the model does not use token_type_ids.
+ encoded_input = tokenizer(example, return_tensors='pt', return_token_type_ids=False)
+ input_len = len(encoded_input["input_ids"][0])
+
+ # Greedily generate up to 100 new tokens after the prefix.
+ out = model.generate(
+     **encoded_input,
+     max_new_tokens=100,
+     num_beams=1,
+     pad_token_id=tokenizer.eos_token_id,
+     eos_token_id=tokenizer.eos_token_id,
+ )
+ print("### Context")
+ print(tokenizer.decode(out[0][:input_len]))
+ print("### Prediction")
+ print(tokenizer.decode(out[0][input_len:]))
+ ```
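+
+ Running the script prints the tokenized context back, followed by the model's greedy continuation of the file.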
 
 # Citation
 If you use this model, please cite:
+
+ ```bibtex
+ @misc{mellum-base-4b,
+ title={Mellum base 4B},
+ author={Nikita Pavlichenko and Iurii Nazarov and Ivan Dolgov and Julia Reshetnikova and Ekaterina Garanina and Karol Lasocki and Sergei Boitsov and Dariia Karaeva and Ivan Bondyrev and Maksim Sheptyakov and Dmitry Ustalov and Nikita Abramov and Olga Kolomyttseva and Kseniia Lysaniuk and Ilia Zavidnyi and Anton Semenkin and Uladzislau Sazanovich},
 year={2025},
 }
+ ```
 
 # Contact
+ For questions, collaborations, and requests, reach out to us via [email protected]