Update README.md
README.md (changed)
@@ -243,7 +243,7 @@ In addition to the base model scores, we are providing scores for a Mellum fine-
 - Languages: Python and Java
 - Metric: Exact Match (EM), %
 
-Since Mellum has a maximum context window of 8k, we report both the average performance across all evaluated context lengths (2k, 4k, 8k, 12k, and 16k) and the average over context lengths within its supported range (≤ 8k)
+Since Mellum has a maximum context window of 8k, we report here both the average performance across all evaluated context lengths (2k, 4k, 8k, 12k, and 16k) and the average over context lengths within its supported range (≤ 8k).
 
 ### Python Subset
 | Model | 2k | 4k | 8k | 12k | 16k | Avg | Avg ≤ 8k |
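For clarity, the "Avg" and "Avg ≤ 8k" columns described in the changed sentence are simple arithmetic means over the per-context-length EM scores. The snippet below is a minimal sketch of that computation; the example scores and the `em_by_context` / `MAX_SUPPORTED_CONTEXT` names are placeholders for illustration, not values or code from the README.

```python
# Sketch of deriving "Avg" and "Avg <= 8k" from per-context-length EM scores.
# The numbers below are placeholders, not results reported in the README tables.

# Exact Match (%) keyed by evaluated context length in tokens.
em_by_context = {2_000: 30.0, 4_000: 28.5, 8_000: 27.0, 12_000: 25.5, 16_000: 24.0}

MAX_SUPPORTED_CONTEXT = 8_000  # Mellum's maximum context window (8k)

# Average over all evaluated context lengths (2k, 4k, 8k, 12k, 16k).
avg_all = sum(em_by_context.values()) / len(em_by_context)

# Average over only the context lengths within the supported range (<= 8k).
supported = [em for ctx, em in em_by_context.items() if ctx <= MAX_SUPPORTED_CONTEXT]
avg_supported = sum(supported) / len(supported)

print(f"Avg: {avg_all:.1f}%  |  Avg <= 8k: {avg_supported:.1f}%")
```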